relative_path (stringclasses, 812 values) | section (stringclasses, 339 values) | filename (stringlengths, 2-61) | text (stringlengths, 6-1.76M)
---|---|---|---|
TensorFlow/Detection/SSD/models/research/object_detection/predictors/heads | heads | keypoint_head | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keypoint Head.
Contains Keypoint prediction head classes for different meta architectures.
All the keypoint prediction heads have a predict function that receives the
`features` as the first argument and returns `keypoint_predictions`.
Keypoints could be used to represent human body joint locations, as in the
Mask RCNN paper, or to represent the locations of different object parts.
"""
import tensorflow as tf
from object_detection.predictors.heads import head
slim = tf.contrib.slim
class MaskRCNNKeypointHead(head.Head):
"""Mask RCNN keypoint prediction head.
Please refer to Mask RCNN paper:
https://arxiv.org/abs/1703.06870
"""
def __init__(self,
num_keypoints=17,
conv_hyperparams_fn=None,
keypoint_heatmap_height=56,
keypoint_heatmap_width=56,
keypoint_prediction_num_conv_layers=8,
keypoint_prediction_conv_depth=512):
"""Constructor.
Args:
num_keypoints: (int scalar) number of keypoints.
conv_hyperparams_fn: A function to generate tf-slim arg_scope with
hyperparameters for convolution ops.
keypoint_heatmap_height: Desired output heatmap height. The default value
is 56.
keypoint_heatmap_width: Desired output heatmap width. The default value
is 56.
keypoint_prediction_num_conv_layers: Number of convolution layers applied
to the image_features in the keypoint prediction branch.
keypoint_prediction_conv_depth: The depth of the convolution layers
applied to the image_features in the keypoint prediction branch.
"""
super(MaskRCNNKeypointHead, self).__init__()
self._num_keypoints = num_keypoints
self._conv_hyperparams_fn = conv_hyperparams_fn
self._keypoint_heatmap_height = keypoint_heatmap_height
self._keypoint_heatmap_width = keypoint_heatmap_width
self._keypoint_prediction_num_conv_layers = (
keypoint_prediction_num_conv_layers)
self._keypoint_prediction_conv_depth = keypoint_prediction_conv_depth
def predict(self, features, num_predictions_per_location=1):
"""Performs keypoint prediction.
Args:
features: A float tensor of shape [batch_size, height, width,
channels] containing features for a batch of images.
num_predictions_per_location: Int containing number of predictions per
location.
Returns:
keypoint_heatmaps: A float tensor of shape
[batch_size, 1, num_keypoints, heatmap_height, heatmap_width].
Raises:
ValueError: If num_predictions_per_location is not 1.
"""
if num_predictions_per_location != 1:
raise ValueError('Only num_predictions_per_location=1 is supported')
with slim.arg_scope(self._conv_hyperparams_fn()):
net = slim.conv2d(
features,
self._keypoint_prediction_conv_depth, [3, 3],
scope='conv_1')
for i in range(1, self._keypoint_prediction_num_conv_layers):
net = slim.conv2d(
net,
self._keypoint_prediction_conv_depth, [3, 3],
scope='conv_%d' % (i + 1))
net = slim.conv2d_transpose(
net, self._num_keypoints, [2, 2], scope='deconv1')
heatmaps_mask = tf.image.resize_bilinear(
net, [self._keypoint_heatmap_height, self._keypoint_heatmap_width],
align_corners=True,
name='upsample')
return tf.expand_dims(
tf.transpose(heatmaps_mask, perm=[0, 3, 1, 2]),
axis=1,
name='KeypointPredictor')
|
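The MaskRCNNKeypointHead above can be exercised with a short sketch. This is not code from the repository: it assumes a TF 1.x environment with tf.contrib.slim, and the _conv_hyperparams_fn below is a hypothetical stand-in for the arg_scope normally produced by hyperparams_builder.

import tensorflow as tf
from object_detection.predictors.heads import keypoint_head

slim = tf.contrib.slim

def _conv_hyperparams_fn():
    # Hypothetical stand-in for hyperparams_builder.build(...): it only
    # defines the arg_scope used by the conv ops inside the head.
    return slim.arg_scope(
        [slim.conv2d, slim.conv2d_transpose],
        activation_fn=tf.nn.relu,
        weights_initializer=tf.variance_scaling_initializer())

head = keypoint_head.MaskRCNNKeypointHead(
    num_keypoints=17,
    conv_hyperparams_fn=_conv_hyperparams_fn,
    keypoint_heatmap_height=56,
    keypoint_heatmap_width=56)

# Cropped ROI features, e.g. the output of ROIAlign: [batch, 14, 14, channels].
roi_features = tf.placeholder(tf.float32, [8, 14, 14, 256])
heatmaps = head.predict(roi_features, num_predictions_per_location=1)
# heatmaps has shape [8, 1, 17, 56, 56].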
PyTorch/LanguageModeling/BERT/triton/runner/maintainer/docker | docker | container | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import pathlib
import docker
from docker.models.containers import ExecResult
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ..container import Container
class DockerContainer(Container):
def __init__(self, name: str):
super().__init__(name)
self._container = None
self._docker_client = docker.from_env()
self._docker_api_client = docker.APIClient()
@abc.abstractmethod
def start(self):
"""
Start container
"""
pass
@abc.abstractmethod
def stop(self):
"""
Stop container
"""
@abc.abstractmethod
def run(self, command: str) -> ExecResult:
"""
Run command inside container
Args:
command: command to execute
Returns:
ExecResult
"""
pass
|
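DockerContainer above only wires up the docker clients and leaves start, stop and run abstract. The following is a hypothetical concrete subclass, not part of the repository, sketching how those methods could be filled in with standard docker SDK calls (containers.run, stop, remove, exec_run); the class name and constructor arguments are illustrative.

# Assumes DockerContainer (defined above) is importable in the same package.
from docker.models.containers import ExecResult


class TritonServerContainer(DockerContainer):  # illustrative name
    def __init__(self, name: str, image: str):
        super().__init__(name)
        self._container_name = name
        self._image = image

    def start(self):
        # Start a detached container; the arguments used here are assumptions.
        self._container = self._docker_client.containers.run(
            image=self._image, name=self._container_name, detach=True)

    def stop(self):
        if self._container is not None:
            self._container.stop()
            self._container.remove()
            self._container = None

    def run(self, command: str) -> ExecResult:
        # exec_run returns an ExecResult(exit_code, output) named tuple.
        return self._container.exec_run(command)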
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/triton/scripts/docker | docker | build | #!/usr/bin/env bash
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
docker build -t tft . -f Dockerfile
|
PyTorch/SpeechRecognition/QuartzNet | QuartzNet | .gitignore | __pycache__
*.pt
results/
datasets/
checkpoints/
*.swp
*.swo
*.swn
|
PyTorch/SpeechSynthesis/HiFiGAN/scripts | scripts | prepare_dataset | #!/usr/bin/env bash
set -e
: ${DATASET_PATH:=data/LJSpeech-1.1}
export DATASET_PATH
bash scripts/generate_filelists.sh
# Generate mel-spectrograms
python prepare_dataset.py \
--wav-text-filelists data/filelists/ljs_audio_text.txt \
--n-workers 16 \
--batch-size 1 \
--dataset-path $DATASET_PATH \
--extract-mels \
"$@"
|
TensorFlow/Detection/SSD/models/research/object_detection/builders | builders | optimizer_builder | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions to build DetectionModel training optimizers."""
import tensorflow as tf
import horovod.tensorflow as hvd
from object_detection.utils import learning_schedules
def build(optimizer_config):
"""Create optimizer based on config.
Args:
optimizer_config: An Optimizer proto message.
Returns:
An optimizer and a list of variables for summary.
Raises:
ValueError: when using an unsupported input data type.
"""
optimizer_type = optimizer_config.WhichOneof('optimizer')
optimizer = None
summary_vars = []
if optimizer_type == 'rms_prop_optimizer':
config = optimizer_config.rms_prop_optimizer
learning_rate = _create_learning_rate(config.learning_rate)
summary_vars.append(learning_rate)
optimizer = tf.train.RMSPropOptimizer(
learning_rate,
decay=config.decay,
momentum=config.momentum_optimizer_value,
epsilon=config.epsilon)
if optimizer_type == 'momentum_optimizer':
config = optimizer_config.momentum_optimizer
learning_rate = _create_learning_rate(config.learning_rate)
summary_vars.append(learning_rate)
optimizer = tf.train.MomentumOptimizer(
learning_rate,
momentum=config.momentum_optimizer_value)
if optimizer_type == 'adam_optimizer':
config = optimizer_config.adam_optimizer
learning_rate = _create_learning_rate(config.learning_rate)
summary_vars.append(learning_rate)
optimizer = tf.train.AdamOptimizer(learning_rate)
if optimizer is None:
raise ValueError('Optimizer %s not supported.' % optimizer_type)
optimizer = hvd.DistributedOptimizer(optimizer)
if optimizer_config.use_moving_average:
optimizer = tf.contrib.opt.MovingAverageOptimizer(
optimizer, average_decay=optimizer_config.moving_average_decay)
return optimizer, summary_vars
def _create_learning_rate(learning_rate_config):
"""Create optimizer learning rate based on config.
Args:
learning_rate_config: A LearningRate proto message.
Returns:
A learning rate.
Raises:
ValueError: when using an unsupported input data type.
"""
learning_rate = None
learning_rate_type = learning_rate_config.WhichOneof('learning_rate')
if learning_rate_type == 'constant_learning_rate':
config = learning_rate_config.constant_learning_rate
learning_rate = tf.constant(config.learning_rate, dtype=tf.float32,
name='learning_rate')
if learning_rate_type == 'exponential_decay_learning_rate':
config = learning_rate_config.exponential_decay_learning_rate
learning_rate = learning_schedules.exponential_decay_with_burnin(
tf.train.get_or_create_global_step(),
config.initial_learning_rate,
config.decay_steps,
config.decay_factor,
burnin_learning_rate=config.burnin_learning_rate,
burnin_steps=config.burnin_steps,
min_learning_rate=config.min_learning_rate,
staircase=config.staircase)
if learning_rate_type == 'manual_step_learning_rate':
config = learning_rate_config.manual_step_learning_rate
if not config.schedule:
raise ValueError('Empty learning rate schedule.')
learning_rate_step_boundaries = [x.step for x in config.schedule]
learning_rate_sequence = [config.initial_learning_rate]
learning_rate_sequence += [x.learning_rate for x in config.schedule]
learning_rate = learning_schedules.manual_stepping(
tf.train.get_or_create_global_step(), learning_rate_step_boundaries,
learning_rate_sequence, config.warmup)
if learning_rate_type == 'cosine_decay_learning_rate':
config = learning_rate_config.cosine_decay_learning_rate
learning_rate = learning_schedules.cosine_decay_with_warmup(
tf.train.get_or_create_global_step(),
config.learning_rate_base,
config.total_steps,
config.warmup_learning_rate,
config.warmup_steps,
config.hold_base_rate_steps)
if learning_rate is None:
raise ValueError('Learning_rate %s not supported.' % learning_rate_type)
return learning_rate
|
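A minimal sketch of driving build() from a config, assuming the standard object_detection optimizer protos and an initialized Horovod runtime (required because build() wraps the result in hvd.DistributedOptimizer). The field values are illustrative.

import horovod.tensorflow as hvd
from object_detection.protos import optimizer_pb2
from object_detection.builders import optimizer_builder

hvd.init()

config = optimizer_pb2.Optimizer()
config.momentum_optimizer.momentum_optimizer_value = 0.9
config.momentum_optimizer.learning_rate.constant_learning_rate.learning_rate = 0.01
config.use_moving_average = False

optimizer, summary_vars = optimizer_builder.build(config)
# optimizer is a Horovod-wrapped tf.train.MomentumOptimizer;
# summary_vars holds the learning-rate tensor for summaries.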
PyTorch/Detection/Efficientdet/scripts/D0 | D0 | train_AMP_8xV100-32G | #!/bin/bash
function get_dataloader_workers {
gpus=$(nvidia-smi -i 0 --query-gpu=count --format=csv,noheader)
core=$(nproc --all)
workers=$((core/gpus-2))
workers=$((workers>16?16:workers))
echo ${workers}
}
WORKERS=$(get_dataloader_workers)
./distributed_train.sh 8 /workspace/object_detection/datasets/coco --model efficientdet_d0 -b 60 --lr 0.65 --amp --opt fusedmomentum --warmup-epochs 20 --lr-noise 0.4 0.9 --output /model --worker ${WORKERS} --fill-color mean --model-ema --model-ema-decay 0.999 --eval-after 200 --epochs 300 --resume --smoothing 0.0 --pretrained-backbone-path /backbone_checkpoints/jocbackbone_statedict_B0.pth --memory-format nchw --sync-bn --fused-focal-loss --seed 12711
|
TensorFlow/Classification/ConvNets/model/layers | layers | activation | # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import tensorflow as tf
__all__ = ['relu', 'softmax', 'tanh', 'sigmoid']
def relu(inputs, name='relu'):
net = tf.nn.relu(inputs, name=name)
return net
def softmax(inputs, axis=None, name="softmax"):
net = tf.nn.softmax(
inputs,
axis=axis,
name=name,
)
return net
def tanh(inputs, name='tanh'):
net = tf.math.tanh(inputs, name=name)
return net
def sigmoid(inputs, name='sigmoid'):
net = tf.math.sigmoid(inputs, name=name)
return net
|
PyTorch/SpeechSynthesis/Tacotron2/notebooks/conversationalai/client/speech_ai_demo/utils/tacotron2/unidecoder | unidecoder | replacements | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# MIT License
#
# Copyright (c) Sindre Sorhus <[email protected]> (https://sindresorhus.com)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Based on:
# https://github.com/sindresorhus/transliterate/blob/main/replacements.js
#
replacements = [
# German umlauts
['ß', 'ss'],
['ẞ', 'Ss'],
['ä', 'ae'],
['Ä', 'Ae'],
['ö', 'oe'],
['Ö', 'Oe'],
['ü', 'ue'],
['Ü', 'Ue'],
# Latin
['À', 'A'],
['Á', 'A'],
['Â', 'A'],
['Ã', 'A'],
['Ä', 'Ae'],
['Å', 'A'],
['Æ', 'AE'],
['Ç', 'C'],
['È', 'E'],
['É', 'E'],
['Ê', 'E'],
['Ë', 'E'],
['Ì', 'I'],
['Í', 'I'],
['Î', 'I'],
['Ï', 'I'],
['Ð', 'D'],
['Ñ', 'N'],
['Ò', 'O'],
['Ó', 'O'],
['Ô', 'O'],
['Õ', 'O'],
['Ö', 'Oe'],
['Ő', 'O'],
['Ø', 'O'],
['Ù', 'U'],
['Ú', 'U'],
['Û', 'U'],
['Ü', 'Ue'],
['Ű', 'U'],
['Ý', 'Y'],
['Þ', 'TH'],
['ß', 'ss'],
['à', 'a'],
['á', 'a'],
['â', 'a'],
['ã', 'a'],
['ä', 'ae'],
['å', 'a'],
['æ', 'ae'],
['ç', 'c'],
['è', 'e'],
['é', 'e'],
['ê', 'e'],
['ë', 'e'],
['ì', 'i'],
['í', 'i'],
['î', 'i'],
['ï', 'i'],
['ð', 'd'],
['ñ', 'n'],
['ò', 'o'],
['ó', 'o'],
['ô', 'o'],
['õ', 'o'],
['ö', 'oe'],
['ő', 'o'],
['ø', 'o'],
['ù', 'u'],
['ú', 'u'],
['û', 'u'],
['ü', 'ue'],
['ű', 'u'],
['ý', 'y'],
['þ', 'th'],
['ÿ', 'y'],
['ẞ', 'SS'],
# Vietnamese
['à', 'a'],
['À', 'A'],
['á', 'a'],
['Á', 'A'],
['â', 'a'],
['Â', 'A'],
['ã', 'a'],
['Ã', 'A'],
['è', 'e'],
['È', 'E'],
['é', 'e'],
['É', 'E'],
['ê', 'e'],
['Ê', 'E'],
['ì', 'i'],
['Ì', 'I'],
['í', 'i'],
['Í', 'I'],
['ò', 'o'],
['Ò', 'O'],
['ó', 'o'],
['Ó', 'O'],
['ô', 'o'],
['Ô', 'O'],
['õ', 'o'],
['Õ', 'O'],
['ù', 'u'],
['Ù', 'U'],
['ú', 'u'],
['Ú', 'U'],
['ý', 'y'],
['Ý', 'Y'],
['ă', 'a'],
['Ă', 'A'],
['Đ', 'D'],
['đ', 'd'],
['ĩ', 'i'],
['Ĩ', 'I'],
['ũ', 'u'],
['Ũ', 'U'],
['ơ', 'o'],
['Ơ', 'O'],
['ư', 'u'],
['Ư', 'U'],
['ạ', 'a'],
['Ạ', 'A'],
['ả', 'a'],
['Ả', 'A'],
['ấ', 'a'],
['Ấ', 'A'],
['ầ', 'a'],
['Ầ', 'A'],
['ẩ', 'a'],
['Ẩ', 'A'],
['ẫ', 'a'],
['Ẫ', 'A'],
['ậ', 'a'],
['Ậ', 'A'],
['ắ', 'a'],
['Ắ', 'A'],
['ằ', 'a'],
['Ằ', 'A'],
['ẳ', 'a'],
['Ẳ', 'A'],
['ẵ', 'a'],
['Ẵ', 'A'],
['ặ', 'a'],
['Ặ', 'A'],
['ẹ', 'e'],
['Ẹ', 'E'],
['ẻ', 'e'],
['Ẻ', 'E'],
['ẽ', 'e'],
['Ẽ', 'E'],
['ế', 'e'],
['Ế', 'E'],
['ề', 'e'],
['Ề', 'E'],
['ể', 'e'],
['Ể', 'E'],
['ễ', 'e'],
['Ễ', 'E'],
['ệ', 'e'],
['Ệ', 'E'],
['ỉ', 'i'],
['Ỉ', 'I'],
['ị', 'i'],
['Ị', 'I'],
['ọ', 'o'],
['Ọ', 'O'],
['ỏ', 'o'],
['Ỏ', 'O'],
['ố', 'o'],
['Ố', 'O'],
['ồ', 'o'],
['Ồ', 'O'],
['ổ', 'o'],
['Ổ', 'O'],
['ỗ', 'o'],
['Ỗ', 'O'],
['ộ', 'o'],
['Ộ', 'O'],
['ớ', 'o'],
['Ớ', 'O'],
['ờ', 'o'],
['Ờ', 'O'],
['ở', 'o'],
['Ở', 'O'],
['ỡ', 'o'],
['Ỡ', 'O'],
['ợ', 'o'],
['Ợ', 'O'],
['ụ', 'u'],
['Ụ', 'U'],
['ủ', 'u'],
['Ủ', 'U'],
['ứ', 'u'],
['Ứ', 'U'],
['ừ', 'u'],
['Ừ', 'U'],
['ử', 'u'],
['Ử', 'U'],
['ữ', 'u'],
['Ữ', 'U'],
['ự', 'u'],
['Ự', 'U'],
['ỳ', 'y'],
['Ỳ', 'Y'],
['ỵ', 'y'],
['Ỵ', 'Y'],
['ỷ', 'y'],
['Ỷ', 'Y'],
['ỹ', 'y'],
['Ỹ', 'Y'],
# Arabic
['ء', 'e'],
['آ', 'a'],
['أ', 'a'],
['ؤ', 'w'],
['إ', 'i'],
['ئ', 'y'],
['ا', 'a'],
['ب', 'b'],
['ة', 't'],
['ت', 't'],
['ث', 'th'],
['ج', 'j'],
['ح', 'h'],
['خ', 'kh'],
['د', 'd'],
['ذ', 'dh'],
['ر', 'r'],
['ز', 'z'],
['س', 's'],
['ش', 'sh'],
['ص', 's'],
['ض', 'd'],
['ط', 't'],
['ظ', 'z'],
['ع', 'e'],
['غ', 'gh'],
['ـ', '_'],
['ف', 'f'],
['ق', 'q'],
['ك', 'k'],
['ل', 'l'],
['م', 'm'],
['ن', 'n'],
['ه', 'h'],
['و', 'w'],
['ى', 'a'],
['ي', 'y'],
['َ', 'a'],
['ُ', 'u'],
['ِ', 'i'],
['٠', '0'],
['١', '1'],
['٢', '2'],
['٣', '3'],
['٤', '4'],
['٥', '5'],
['٦', '6'],
['٧', '7'],
['٨', '8'],
['٩', '9'],
# Persian / Farsi
['چ', 'ch'],
['ک', 'k'],
['گ', 'g'],
['پ', 'p'],
['ژ', 'zh'],
['ی', 'y'],
['۰', '0'],
['۱', '1'],
['۲', '2'],
['۳', '3'],
['۴', '4'],
['۵', '5'],
['۶', '6'],
['۷', '7'],
['۸', '8'],
['۹', '9'],
# Pashto
['ټ', 'p'],
['ځ', 'z'],
['څ', 'c'],
['ډ', 'd'],
['ﺫ', 'd'],
['ﺭ', 'r'],
['ړ', 'r'],
['ﺯ', 'z'],
['ږ', 'g'],
['ښ', 'x'],
['ګ', 'g'],
['ڼ', 'n'],
['ۀ', 'e'],
['ې', 'e'],
['ۍ', 'ai'],
# Urdu
['ٹ', 't'],
['ڈ', 'd'],
['ڑ', 'r'],
['ں', 'n'],
['ہ', 'h'],
['ھ', 'h'],
['ے', 'e'],
# Russian
['А', 'A'],
['а', 'a'],
['Б', 'B'],
['б', 'b'],
['В', 'V'],
['в', 'v'],
['Г', 'G'],
['г', 'g'],
['Д', 'D'],
['д', 'd'],
['ъе', 'ye'],
['Ъе', 'Ye'],
['ъЕ', 'yE'],
['ЪЕ', 'YE'],
['Е', 'E'],
['е', 'e'],
['Ё', 'Yo'],
['ё', 'yo'],
['Ж', 'Zh'],
['ж', 'zh'],
['З', 'Z'],
['з', 'z'],
['И', 'I'],
['и', 'i'],
['ый', 'iy'],
['Ый', 'Iy'],
['ЫЙ', 'IY'],
['ыЙ', 'iY'],
['Й', 'Y'],
['й', 'y'],
['К', 'K'],
['к', 'k'],
['Л', 'L'],
['л', 'l'],
['М', 'M'],
['м', 'm'],
['Н', 'N'],
['н', 'n'],
['О', 'O'],
['о', 'o'],
['П', 'P'],
['п', 'p'],
['Р', 'R'],
['р', 'r'],
['С', 'S'],
['с', 's'],
['Т', 'T'],
['т', 't'],
['У', 'U'],
['у', 'u'],
['Ф', 'F'],
['ф', 'f'],
['Х', 'Kh'],
['х', 'kh'],
['Ц', 'Ts'],
['ц', 'ts'],
['Ч', 'Ch'],
['ч', 'ch'],
['Ш', 'Sh'],
['ш', 'sh'],
['Щ', 'Sch'],
['щ', 'sch'],
['Ъ', ''],
['ъ', ''],
['Ы', 'Y'],
['ы', 'y'],
['Ь', ''],
['ь', ''],
['Э', 'E'],
['э', 'e'],
['Ю', 'Yu'],
['ю', 'yu'],
['Я', 'Ya'],
['я', 'ya'],
# Romanian
['ă', 'a'],
['Ă', 'A'],
['ș', 's'],
['Ș', 'S'],
['ț', 't'],
['Ț', 'T'],
['ţ', 't'],
['Ţ', 'T'],
# Turkish
['ş', 's'],
['Ş', 'S'],
['ç', 'c'],
['Ç', 'C'],
['ğ', 'g'],
['Ğ', 'G'],
['ı', 'i'],
['İ', 'I'],
# Armenian
['ա', 'a'],
['Ա', 'A'],
['բ', 'b'],
['Բ', 'B'],
['գ', 'g'],
['Գ', 'G'],
['դ', 'd'],
['Դ', 'D'],
['ե', 'ye'],
['Ե', 'Ye'],
['զ', 'z'],
['Զ', 'Z'],
['է', 'e'],
['Է', 'E'],
['ը', 'y'],
['Ը', 'Y'],
['թ', 't'],
['Թ', 'T'],
['ժ', 'zh'],
['Ժ', 'Zh'],
['ի', 'i'],
['Ի', 'I'],
['լ', 'l'],
['Լ', 'L'],
['խ', 'kh'],
['Խ', 'Kh'],
['ծ', 'ts'],
['Ծ', 'Ts'],
['կ', 'k'],
['Կ', 'K'],
['հ', 'h'],
['Հ', 'H'],
['ձ', 'dz'],
['Ձ', 'Dz'],
['ղ', 'gh'],
['Ղ', 'Gh'],
['ճ', 'tch'],
['Ճ', 'Tch'],
['մ', 'm'],
['Մ', 'M'],
['յ', 'y'],
['Յ', 'Y'],
['ն', 'n'],
['Ն', 'N'],
['շ', 'sh'],
['Շ', 'Sh'],
['ո', 'vo'],
['Ո', 'Vo'],
['չ', 'ch'],
['Չ', 'Ch'],
['պ', 'p'],
['Պ', 'P'],
['ջ', 'j'],
['Ջ', 'J'],
['ռ', 'r'],
['Ռ', 'R'],
['ս', 's'],
['Ս', 'S'],
['վ', 'v'],
['Վ', 'V'],
['տ', 't'],
['Տ', 'T'],
['ր', 'r'],
['Ր', 'R'],
['ց', 'c'],
['Ց', 'C'],
['ու', 'u'],
['ՈՒ', 'U'],
['Ու', 'U'],
['փ', 'p'],
['Փ', 'P'],
['ք', 'q'],
['Ք', 'Q'],
['օ', 'o'],
['Օ', 'O'],
['ֆ', 'f'],
['Ֆ', 'F'],
['և', 'yev'],
# Georgian
['ა', 'a'],
['ბ', 'b'],
['გ', 'g'],
['დ', 'd'],
['ე', 'e'],
['ვ', 'v'],
['ზ', 'z'],
['თ', 't'],
['ი', 'i'],
['კ', 'k'],
['ლ', 'l'],
['მ', 'm'],
['ნ', 'n'],
['ო', 'o'],
['პ', 'p'],
['ჟ', 'zh'],
['რ', 'r'],
['ს', 's'],
['ტ', 't'],
['უ', 'u'],
['ფ', 'ph'],
['ქ', 'q'],
['ღ', 'gh'],
['ყ', 'k'],
['შ', 'sh'],
['ჩ', 'ch'],
['ც', 'ts'],
['ძ', 'dz'],
['წ', 'ts'],
['ჭ', 'tch'],
['ხ', 'kh'],
['ჯ', 'j'],
['ჰ', 'h'],
# Czech
['č', 'c'],
['ď', 'd'],
['ě', 'e'],
['ň', 'n'],
['ř', 'r'],
['š', 's'],
['ť', 't'],
['ů', 'u'],
['ž', 'z'],
['Č', 'C'],
['Ď', 'D'],
['Ě', 'E'],
['Ň', 'N'],
['Ř', 'R'],
['Š', 'S'],
['Ť', 'T'],
['Ů', 'U'],
['Ž', 'Z'],
# Dhivehi
['ހ', 'h'],
['ށ', 'sh'],
['ނ', 'n'],
['ރ', 'r'],
['ބ', 'b'],
['ޅ', 'lh'],
['ކ', 'k'],
['އ', 'a'],
['ވ', 'v'],
['މ', 'm'],
['ފ', 'f'],
['ދ', 'dh'],
['ތ', 'th'],
['ލ', 'l'],
['ގ', 'g'],
['ޏ', 'gn'],
['ސ', 's'],
['ޑ', 'd'],
['ޒ', 'z'],
['ޓ', 't'],
['ޔ', 'y'],
['ޕ', 'p'],
['ޖ', 'j'],
['ޗ', 'ch'],
['ޘ', 'tt'],
['ޙ', 'hh'],
['ޚ', 'kh'],
['ޛ', 'th'],
['ޜ', 'z'],
['ޝ', 'sh'],
['ޞ', 's'],
['ޟ', 'd'],
['ޠ', 't'],
['ޡ', 'z'],
['ޢ', 'a'],
['ޣ', 'gh'],
['ޤ', 'q'],
['ޥ', 'w'],
['ަ', 'a'],
['ާ', 'aa'],
['ި', 'i'],
['ީ', 'ee'],
['ު', 'u'],
['ޫ', 'oo'],
['ެ', 'e'],
['ޭ', 'ey'],
['ޮ', 'o'],
['ޯ', 'oa'],
['ް', ''],
# Greek
['α', 'a'],
['β', 'v'],
['γ', 'g'],
['δ', 'd'],
['ε', 'e'],
['ζ', 'z'],
['η', 'i'],
['θ', 'th'],
['ι', 'i'],
['κ', 'k'],
['λ', 'l'],
['μ', 'm'],
['ν', 'n'],
['ξ', 'ks'],
['ο', 'o'],
['π', 'p'],
['ρ', 'r'],
['σ', 's'],
['τ', 't'],
['υ', 'y'],
['φ', 'f'],
['χ', 'x'],
['ψ', 'ps'],
['ω', 'o'],
['ά', 'a'],
['έ', 'e'],
['ί', 'i'],
['ό', 'o'],
['ύ', 'y'],
['ή', 'i'],
['ώ', 'o'],
['ς', 's'],
['ϊ', 'i'],
['ΰ', 'y'],
['ϋ', 'y'],
['ΐ', 'i'],
['Α', 'A'],
['Β', 'B'],
['Γ', 'G'],
['Δ', 'D'],
['Ε', 'E'],
['Ζ', 'Z'],
['Η', 'I'],
['Θ', 'TH'],
['Ι', 'I'],
['Κ', 'K'],
['Λ', 'L'],
['Μ', 'M'],
['Ν', 'N'],
['Ξ', 'KS'],
['Ο', 'O'],
['Π', 'P'],
['Ρ', 'R'],
['Σ', 'S'],
['Τ', 'T'],
['Υ', 'Y'],
['Φ', 'F'],
['Χ', 'X'],
['Ψ', 'PS'],
['Ω', 'O'],
['Ά', 'A'],
['Έ', 'E'],
['Ί', 'I'],
['Ό', 'O'],
['Ύ', 'Y'],
['Ή', 'I'],
['Ώ', 'O'],
['Ϊ', 'I'],
['Ϋ', 'Y'],
# Disabled as it conflicts with German and Latin.
# Hungarian
# ['ä', 'a'],
# ['Ä', 'A'],
# ['ö', 'o'],
# ['Ö', 'O'],
# ['ü', 'u'],
# ['Ü', 'U'],
# ['ű', 'u'],
# ['Ű', 'U'],
# Latvian
['ā', 'a'],
['ē', 'e'],
['ģ', 'g'],
['ī', 'i'],
['ķ', 'k'],
['ļ', 'l'],
['ņ', 'n'],
['ū', 'u'],
['Ā', 'A'],
['Ē', 'E'],
['Ģ', 'G'],
['Ī', 'I'],
['Ķ', 'K'],
['Ļ', 'L'],
['Ņ', 'N'],
['Ū', 'U'],
['č', 'c'],
['š', 's'],
['ž', 'z'],
['Č', 'C'],
['Š', 'S'],
['Ž', 'Z'],
# Lithuanian
['ą', 'a'],
['č', 'c'],
['ę', 'e'],
['ė', 'e'],
['į', 'i'],
['š', 's'],
['ų', 'u'],
['ū', 'u'],
['ž', 'z'],
['Ą', 'A'],
['Č', 'C'],
['Ę', 'E'],
['Ė', 'E'],
['Į', 'I'],
['Š', 'S'],
['Ų', 'U'],
['Ū', 'U'],
# Macedonian
['Ќ', 'Kj'],
['ќ', 'kj'],
['Љ', 'Lj'],
['љ', 'lj'],
['Њ', 'Nj'],
['њ', 'nj'],
['Тс', 'Ts'],
['тс', 'ts'],
# Polish
['ą', 'a'],
['ć', 'c'],
['ę', 'e'],
['ł', 'l'],
['ń', 'n'],
['ś', 's'],
['ź', 'z'],
['ż', 'z'],
['Ą', 'A'],
['Ć', 'C'],
['Ę', 'E'],
['Ł', 'L'],
['Ń', 'N'],
['Ś', 'S'],
['Ź', 'Z'],
['Ż', 'Z'],
# Disabled as it conflicts with Vietnamese.
# Serbian
# ['љ', 'lj'],
# ['њ', 'nj'],
# ['Љ', 'Lj'],
# ['Њ', 'Nj'],
# ['đ', 'dj'],
# ['Đ', 'Dj'],
# ['ђ', 'dj'],
# ['ј', 'j'],
# ['ћ', 'c'],
# ['џ', 'dz'],
# ['Ђ', 'Dj'],
# ['Ј', 'j'],
# ['Ћ', 'C'],
# ['Џ', 'Dz'],
# Disabled as it conflicts with German and Latin.
# Slovak
# ['ä', 'a'],
# ['Ä', 'A'],
# ['ľ', 'l'],
# ['ĺ', 'l'],
# ['ŕ', 'r'],
# ['Ľ', 'L'],
# ['Ĺ', 'L'],
# ['Ŕ', 'R'],
# Disabled as it conflicts with German and Latin.
# Swedish
# ['å', 'o'],
# ['Å', 'o'],
# ['ä', 'a'],
# ['Ä', 'A'],
# ['ë', 'e'],
# ['Ë', 'E'],
# ['ö', 'o'],
# ['Ö', 'O'],
# Ukrainian
['Є', 'Ye'],
['І', 'I'],
['Ї', 'Yi'],
['Ґ', 'G'],
['є', 'ye'],
['і', 'i'],
['ї', 'yi'],
['ґ', 'g'],
# Dutch
['IJ', 'IJ'],
['ij', 'ij'],
# Danish
# ['Æ', 'Ae'],
# ['Ø', 'Oe'],
# ['Å', 'Aa'],
# ['æ', 'ae'],
# ['ø', 'oe'],
# ['å', 'aa']
# Currencies
['¢', 'c'],
['¥', 'Y'],
['߿', 'b'],
['৳', 't'],
['૱', 'Bo'],
['฿', 'B'],
['₠', 'CE'],
['₡', 'C'],
['₢', 'Cr'],
['₣', 'F'],
['₥', 'm'],
['₦', 'N'],
['₧', 'Pt'],
['₨', 'Rs'],
['₩', 'W'],
['₫', 's'],
['€', 'E'],
['₭', 'K'],
['₮', 'T'],
['₯', 'Dp'],
['₰', 'S'],
['₱', 'P'],
['₲', 'G'],
['₳', 'A'],
['₴', 'S'],
['₵', 'C'],
['₶', 'tt'],
['₷', 'S'],
['₸', 'T'],
['₹', 'R'],
['₺', 'L'],
['₽', 'P'],
['₿', 'B'],
['﹩', '$'],
['¢', 'c'],
['¥', 'Y'],
['₩', 'W'],
# Latin
['𝐀', 'A'],
['𝐁', 'B'],
['𝐂', 'C'],
['𝐃', 'D'],
['𝐄', 'E'],
['𝐅', 'F'],
['𝐆', 'G'],
['𝐇', 'H'],
['𝐈', 'I'],
['𝐉', 'J'],
['𝐊', 'K'],
['𝐋', 'L'],
['𝐌', 'M'],
['𝐍', 'N'],
['𝐎', 'O'],
['𝐏', 'P'],
['𝐐', 'Q'],
['𝐑', 'R'],
['𝐒', 'S'],
['𝐓', 'T'],
['𝐔', 'U'],
['𝐕', 'V'],
['𝐖', 'W'],
['𝐗', 'X'],
['𝐘', 'Y'],
['𝐙', 'Z'],
['𝐚', 'a'],
['𝐛', 'b'],
['𝐜', 'c'],
['𝐝', 'd'],
['𝐞', 'e'],
['𝐟', 'f'],
['𝐠', 'g'],
['𝐡', 'h'],
['𝐢', 'i'],
['𝐣', 'j'],
['𝐤', 'k'],
['𝐥', 'l'],
['𝐦', 'm'],
['𝐧', 'n'],
['𝐨', 'o'],
['𝐩', 'p'],
['𝐪', 'q'],
['𝐫', 'r'],
['𝐬', 's'],
['𝐭', 't'],
['𝐮', 'u'],
['𝐯', 'v'],
['𝐰', 'w'],
['𝐱', 'x'],
['𝐲', 'y'],
['𝐳', 'z'],
['𝐴', 'A'],
['𝐵', 'B'],
['𝐶', 'C'],
['𝐷', 'D'],
['𝐸', 'E'],
['𝐹', 'F'],
['𝐺', 'G'],
['𝐻', 'H'],
['𝐼', 'I'],
['𝐽', 'J'],
['𝐾', 'K'],
['𝐿', 'L'],
['𝑀', 'M'],
['𝑁', 'N'],
['𝑂', 'O'],
['𝑃', 'P'],
['𝑄', 'Q'],
['𝑅', 'R'],
['𝑆', 'S'],
['𝑇', 'T'],
['𝑈', 'U'],
['𝑉', 'V'],
['𝑊', 'W'],
['𝑋', 'X'],
['𝑌', 'Y'],
['𝑍', 'Z'],
['𝑎', 'a'],
['𝑏', 'b'],
['𝑐', 'c'],
['𝑑', 'd'],
['𝑒', 'e'],
['𝑓', 'f'],
['𝑔', 'g'],
['𝑖', 'i'],
['𝑗', 'j'],
['𝑘', 'k'],
['𝑙', 'l'],
['𝑚', 'm'],
['𝑛', 'n'],
['𝑜', 'o'],
['𝑝', 'p'],
['𝑞', 'q'],
['𝑟', 'r'],
['𝑠', 's'],
['𝑡', 't'],
['𝑢', 'u'],
['𝑣', 'v'],
['𝑤', 'w'],
['𝑥', 'x'],
['𝑦', 'y'],
['𝑧', 'z'],
['𝑨', 'A'],
['𝑩', 'B'],
['𝑪', 'C'],
['𝑫', 'D'],
['𝑬', 'E'],
['𝑭', 'F'],
['𝑮', 'G'],
['𝑯', 'H'],
['𝑰', 'I'],
['𝑱', 'J'],
['𝑲', 'K'],
['𝑳', 'L'],
['𝑴', 'M'],
['𝑵', 'N'],
['𝑶', 'O'],
['𝑷', 'P'],
['𝑸', 'Q'],
['𝑹', 'R'],
['𝑺', 'S'],
['𝑻', 'T'],
['𝑼', 'U'],
['𝑽', 'V'],
['𝑾', 'W'],
['𝑿', 'X'],
['𝒀', 'Y'],
['𝒁', 'Z'],
['𝒂', 'a'],
['𝒃', 'b'],
['𝒄', 'c'],
['𝒅', 'd'],
['𝒆', 'e'],
['𝒇', 'f'],
['𝒈', 'g'],
['𝒉', 'h'],
['𝒊', 'i'],
['𝒋', 'j'],
['𝒌', 'k'],
['𝒍', 'l'],
['𝒎', 'm'],
['𝒏', 'n'],
['𝒐', 'o'],
['𝒑', 'p'],
['𝒒', 'q'],
['𝒓', 'r'],
['𝒔', 's'],
['𝒕', 't'],
['𝒖', 'u'],
['𝒗', 'v'],
['𝒘', 'w'],
['𝒙', 'x'],
['𝒚', 'y'],
['𝒛', 'z'],
['𝒜', 'A'],
['𝒞', 'C'],
['𝒟', 'D'],
['𝒢', 'g'],
['𝒥', 'J'],
['𝒦', 'K'],
['𝒩', 'N'],
['𝒪', 'O'],
['𝒫', 'P'],
['𝒬', 'Q'],
['𝒮', 'S'],
['𝒯', 'T'],
['𝒰', 'U'],
['𝒱', 'V'],
['𝒲', 'W'],
['𝒳', 'X'],
['𝒴', 'Y'],
['𝒵', 'Z'],
['𝒶', 'a'],
['𝒷', 'b'],
['𝒸', 'c'],
['𝒹', 'd'],
['𝒻', 'f'],
['𝒽', 'h'],
['𝒾', 'i'],
['𝒿', 'j'],
['𝓀', 'k'],
['𝓁', 'l'],
['𝓂', 'm'],
['𝓃', 'n'],
['𝓅', 'p'],
['𝓆', 'q'],
['𝓇', 'r'],
['𝓈', 's'],
['𝓉', 't'],
['𝓊', 'u'],
['𝓋', 'v'],
['𝓌', 'w'],
['𝓍', 'x'],
['𝓎', 'y'],
['𝓏', 'z'],
['𝓐', 'A'],
['𝓑', 'B'],
['𝓒', 'C'],
['𝓓', 'D'],
['𝓔', 'E'],
['𝓕', 'F'],
['𝓖', 'G'],
['𝓗', 'H'],
['𝓘', 'I'],
['𝓙', 'J'],
['𝓚', 'K'],
['𝓛', 'L'],
['𝓜', 'M'],
['𝓝', 'N'],
['𝓞', 'O'],
['𝓟', 'P'],
['𝓠', 'Q'],
['𝓡', 'R'],
['𝓢', 'S'],
['𝓣', 'T'],
['𝓤', 'U'],
['𝓥', 'V'],
['𝓦', 'W'],
['𝓧', 'X'],
['𝓨', 'Y'],
['𝓩', 'Z'],
['𝓪', 'a'],
['𝓫', 'b'],
['𝓬', 'c'],
['𝓭', 'd'],
['𝓮', 'e'],
['𝓯', 'f'],
['𝓰', 'g'],
['𝓱', 'h'],
['𝓲', 'i'],
['𝓳', 'j'],
['𝓴', 'k'],
['𝓵', 'l'],
['𝓶', 'm'],
['𝓷', 'n'],
['𝓸', 'o'],
['𝓹', 'p'],
['𝓺', 'q'],
['𝓻', 'r'],
['𝓼', 's'],
['𝓽', 't'],
['𝓾', 'u'],
['𝓿', 'v'],
['𝔀', 'w'],
['𝔁', 'x'],
['𝔂', 'y'],
['𝔃', 'z'],
['𝔄', 'A'],
['𝔅', 'B'],
['𝔇', 'D'],
['𝔈', 'E'],
['𝔉', 'F'],
['𝔊', 'G'],
['𝔍', 'J'],
['𝔎', 'K'],
['𝔏', 'L'],
['𝔐', 'M'],
['𝔑', 'N'],
['𝔒', 'O'],
['𝔓', 'P'],
['𝔔', 'Q'],
['𝔖', 'S'],
['𝔗', 'T'],
['𝔘', 'U'],
['𝔙', 'V'],
['𝔚', 'W'],
['𝔛', 'X'],
['𝔜', 'Y'],
['𝔞', 'a'],
['𝔟', 'b'],
['𝔠', 'c'],
['𝔡', 'd'],
['𝔢', 'e'],
['𝔣', 'f'],
['𝔤', 'g'],
['𝔥', 'h'],
['𝔦', 'i'],
['𝔧', 'j'],
['𝔨', 'k'],
['𝔩', 'l'],
['𝔪', 'm'],
['𝔫', 'n'],
['𝔬', 'o'],
['𝔭', 'p'],
['𝔮', 'q'],
['𝔯', 'r'],
['𝔰', 's'],
['𝔱', 't'],
['𝔲', 'u'],
['𝔳', 'v'],
['𝔴', 'w'],
['𝔵', 'x'],
['𝔶', 'y'],
['𝔷', 'z'],
['𝔸', 'A'],
['𝔹', 'B'],
['𝔻', 'D'],
['𝔼', 'E'],
['𝔽', 'F'],
['𝔾', 'G'],
['𝕀', 'I'],
['𝕁', 'J'],
['𝕂', 'K'],
['𝕃', 'L'],
['𝕄', 'M'],
['𝕆', 'N'],
['𝕊', 'S'],
['𝕋', 'T'],
['𝕌', 'U'],
['𝕍', 'V'],
['𝕎', 'W'],
['𝕏', 'X'],
['𝕐', 'Y'],
['𝕒', 'a'],
['𝕓', 'b'],
['𝕔', 'c'],
['𝕕', 'd'],
['𝕖', 'e'],
['𝕗', 'f'],
['𝕘', 'g'],
['𝕙', 'h'],
['𝕚', 'i'],
['𝕛', 'j'],
['𝕜', 'k'],
['𝕝', 'l'],
['𝕞', 'm'],
['𝕟', 'n'],
['𝕠', 'o'],
['𝕡', 'p'],
['𝕢', 'q'],
['𝕣', 'r'],
['𝕤', 's'],
['𝕥', 't'],
['𝕦', 'u'],
['𝕧', 'v'],
['𝕨', 'w'],
['𝕩', 'x'],
['𝕪', 'y'],
['𝕫', 'z'],
['𝕬', 'A'],
['𝕭', 'B'],
['𝕮', 'C'],
['𝕯', 'D'],
['𝕰', 'E'],
['𝕱', 'F'],
['𝕲', 'G'],
['𝕳', 'H'],
['𝕴', 'I'],
['𝕵', 'J'],
['𝕶', 'K'],
['𝕷', 'L'],
['𝕸', 'M'],
['𝕹', 'N'],
['𝕺', 'O'],
['𝕻', 'P'],
['𝕼', 'Q'],
['𝕽', 'R'],
['𝕾', 'S'],
['𝕿', 'T'],
['𝖀', 'U'],
['𝖁', 'V'],
['𝖂', 'W'],
['𝖃', 'X'],
['𝖄', 'Y'],
['𝖅', 'Z'],
['𝖆', 'a'],
['𝖇', 'b'],
['𝖈', 'c'],
['𝖉', 'd'],
['𝖊', 'e'],
['𝖋', 'f'],
['𝖌', 'g'],
['𝖍', 'h'],
['𝖎', 'i'],
['𝖏', 'j'],
['𝖐', 'k'],
['𝖑', 'l'],
['𝖒', 'm'],
['𝖓', 'n'],
['𝖔', 'o'],
['𝖕', 'p'],
['𝖖', 'q'],
['𝖗', 'r'],
['𝖘', 's'],
['𝖙', 't'],
['𝖚', 'u'],
['𝖛', 'v'],
['𝖜', 'w'],
['𝖝', 'x'],
['𝖞', 'y'],
['𝖟', 'z'],
['𝖠', 'A'],
['𝖡', 'B'],
['𝖢', 'C'],
['𝖣', 'D'],
['𝖤', 'E'],
['𝖥', 'F'],
['𝖦', 'G'],
['𝖧', 'H'],
['𝖨', 'I'],
['𝖩', 'J'],
['𝖪', 'K'],
['𝖫', 'L'],
['𝖬', 'M'],
['𝖭', 'N'],
['𝖮', 'O'],
['𝖯', 'P'],
['𝖰', 'Q'],
['𝖱', 'R'],
['𝖲', 'S'],
['𝖳', 'T'],
['𝖴', 'U'],
['𝖵', 'V'],
['𝖶', 'W'],
['𝖷', 'X'],
['𝖸', 'Y'],
['𝖹', 'Z'],
['𝖺', 'a'],
['𝖻', 'b'],
['𝖼', 'c'],
['𝖽', 'd'],
['𝖾', 'e'],
['𝖿', 'f'],
['𝗀', 'g'],
['𝗁', 'h'],
['𝗂', 'i'],
['𝗃', 'j'],
['𝗄', 'k'],
['𝗅', 'l'],
['𝗆', 'm'],
['𝗇', 'n'],
['𝗈', 'o'],
['𝗉', 'p'],
['𝗊', 'q'],
['𝗋', 'r'],
['𝗌', 's'],
['𝗍', 't'],
['𝗎', 'u'],
['𝗏', 'v'],
['𝗐', 'w'],
['𝗑', 'x'],
['𝗒', 'y'],
['𝗓', 'z'],
['𝗔', 'A'],
['𝗕', 'B'],
['𝗖', 'C'],
['𝗗', 'D'],
['𝗘', 'E'],
['𝗙', 'F'],
['𝗚', 'G'],
['𝗛', 'H'],
['𝗜', 'I'],
['𝗝', 'J'],
['𝗞', 'K'],
['𝗟', 'L'],
['𝗠', 'M'],
['𝗡', 'N'],
['𝗢', 'O'],
['𝗣', 'P'],
['𝗤', 'Q'],
['𝗥', 'R'],
['𝗦', 'S'],
['𝗧', 'T'],
['𝗨', 'U'],
['𝗩', 'V'],
['𝗪', 'W'],
['𝗫', 'X'],
['𝗬', 'Y'],
['𝗭', 'Z'],
['𝗮', 'a'],
['𝗯', 'b'],
['𝗰', 'c'],
['𝗱', 'd'],
['𝗲', 'e'],
['𝗳', 'f'],
['𝗴', 'g'],
['𝗵', 'h'],
['𝗶', 'i'],
['𝗷', 'j'],
['𝗸', 'k'],
['𝗹', 'l'],
['𝗺', 'm'],
['𝗻', 'n'],
['𝗼', 'o'],
['𝗽', 'p'],
['𝗾', 'q'],
['𝗿', 'r'],
['𝘀', 's'],
['𝘁', 't'],
['𝘂', 'u'],
['𝘃', 'v'],
['𝘄', 'w'],
['𝘅', 'x'],
['𝘆', 'y'],
['𝘇', 'z'],
['𝘈', 'A'],
['𝘉', 'B'],
['𝘊', 'C'],
['𝘋', 'D'],
['𝘌', 'E'],
['𝘍', 'F'],
['𝘎', 'G'],
['𝘏', 'H'],
['𝘐', 'I'],
['𝘑', 'J'],
['𝘒', 'K'],
['𝘓', 'L'],
['𝘔', 'M'],
['𝘕', 'N'],
['𝘖', 'O'],
['𝘗', 'P'],
['𝘘', 'Q'],
['𝘙', 'R'],
['𝘚', 'S'],
['𝘛', 'T'],
['𝘜', 'U'],
['𝘝', 'V'],
['𝘞', 'W'],
['𝘟', 'X'],
['𝘠', 'Y'],
['𝘡', 'Z'],
['𝘢', 'a'],
['𝘣', 'b'],
['𝘤', 'c'],
['𝘥', 'd'],
['𝘦', 'e'],
['𝘧', 'f'],
['𝘨', 'g'],
['𝘩', 'h'],
['𝘪', 'i'],
['𝘫', 'j'],
['𝘬', 'k'],
['𝘭', 'l'],
['𝘮', 'm'],
['𝘯', 'n'],
['𝘰', 'o'],
['𝘱', 'p'],
['𝘲', 'q'],
['𝘳', 'r'],
['𝘴', 's'],
['𝘵', 't'],
['𝘶', 'u'],
['𝘷', 'v'],
['𝘸', 'w'],
['𝘹', 'x'],
['𝘺', 'y'],
['𝘻', 'z'],
['𝘼', 'A'],
['𝘽', 'B'],
['𝘾', 'C'],
['𝘿', 'D'],
['𝙀', 'E'],
['𝙁', 'F'],
['𝙂', 'G'],
['𝙃', 'H'],
['𝙄', 'I'],
['𝙅', 'J'],
['𝙆', 'K'],
['𝙇', 'L'],
['𝙈', 'M'],
['𝙉', 'N'],
['𝙊', 'O'],
['𝙋', 'P'],
['𝙌', 'Q'],
['𝙍', 'R'],
['𝙎', 'S'],
['𝙏', 'T'],
['𝙐', 'U'],
['𝙑', 'V'],
['𝙒', 'W'],
['𝙓', 'X'],
['𝙔', 'Y'],
['𝙕', 'Z'],
['𝙖', 'a'],
['𝙗', 'b'],
['𝙘', 'c'],
['𝙙', 'd'],
['𝙚', 'e'],
['𝙛', 'f'],
['𝙜', 'g'],
['𝙝', 'h'],
['𝙞', 'i'],
['𝙟', 'j'],
['𝙠', 'k'],
['𝙡', 'l'],
['𝙢', 'm'],
['𝙣', 'n'],
['𝙤', 'o'],
['𝙥', 'p'],
['𝙦', 'q'],
['𝙧', 'r'],
['𝙨', 's'],
['𝙩', 't'],
['𝙪', 'u'],
['𝙫', 'v'],
['𝙬', 'w'],
['𝙭', 'x'],
['𝙮', 'y'],
['𝙯', 'z'],
['𝙰', 'A'],
['𝙱', 'B'],
['𝙲', 'C'],
['𝙳', 'D'],
['𝙴', 'E'],
['𝙵', 'F'],
['𝙶', 'G'],
['𝙷', 'H'],
['𝙸', 'I'],
['𝙹', 'J'],
['𝙺', 'K'],
['𝙻', 'L'],
['𝙼', 'M'],
['𝙽', 'N'],
['𝙾', 'O'],
['𝙿', 'P'],
['𝚀', 'Q'],
['𝚁', 'R'],
['𝚂', 'S'],
['𝚃', 'T'],
['𝚄', 'U'],
['𝚅', 'V'],
['𝚆', 'W'],
['𝚇', 'X'],
['𝚈', 'Y'],
['𝚉', 'Z'],
['𝚊', 'a'],
['𝚋', 'b'],
['𝚌', 'c'],
['𝚍', 'd'],
['𝚎', 'e'],
['𝚏', 'f'],
['𝚐', 'g'],
['𝚑', 'h'],
['𝚒', 'i'],
['𝚓', 'j'],
['𝚔', 'k'],
['𝚕', 'l'],
['𝚖', 'm'],
['𝚗', 'n'],
['𝚘', 'o'],
['𝚙', 'p'],
['𝚚', 'q'],
['𝚛', 'r'],
['𝚜', 's'],
['𝚝', 't'],
['𝚞', 'u'],
['𝚟', 'v'],
['𝚠', 'w'],
['𝚡', 'x'],
['𝚢', 'y'],
['𝚣', 'z'],
# Dotless letters
['𝚤', 'l'],
['𝚥', 'j'],
# Greek
['𝛢', 'A'],
['𝛣', 'B'],
['𝛤', 'G'],
['𝛥', 'D'],
['𝛦', 'E'],
['𝛧', 'Z'],
['𝛨', 'I'],
['𝛩', 'TH'],
['𝛪', 'I'],
['𝛫', 'K'],
['𝛬', 'L'],
['𝛭', 'M'],
['𝛮', 'N'],
['𝛯', 'KS'],
['𝛰', 'O'],
['𝛱', 'P'],
['𝛲', 'R'],
['𝛳', 'TH'],
['𝛴', 'S'],
['𝛵', 'T'],
['𝛶', 'Y'],
['𝛷', 'F'],
['𝛸', 'x'],
['𝛹', 'PS'],
['𝛺', 'O'],
['𝛻', 'D'],
['𝛼', 'a'],
['𝛽', 'b'],
['𝛾', 'g'],
['𝛿', 'd'],
['𝜀', 'e'],
['𝜁', 'z'],
['𝜂', 'i'],
['𝜃', 'th'],
['𝜄', 'i'],
['𝜅', 'k'],
['𝜆', 'l'],
['𝜇', 'm'],
['𝜈', 'n'],
['𝜉', 'ks'],
['𝜊', 'o'],
['𝜋', 'p'],
['𝜌', 'r'],
['𝜍', 's'],
['𝜎', 's'],
['𝜏', 't'],
['𝜐', 'y'],
['𝜑', 'f'],
['𝜒', 'x'],
['𝜓', 'ps'],
['𝜔', 'o'],
['𝜕', 'd'],
['𝜖', 'E'],
['𝜗', 'TH'],
['𝜘', 'K'],
['𝜙', 'f'],
['𝜚', 'r'],
['𝜛', 'p'],
['𝜜', 'A'],
['𝜝', 'V'],
['𝜞', 'G'],
['𝜟', 'D'],
['𝜠', 'E'],
['𝜡', 'Z'],
['𝜢', 'I'],
['𝜣', 'TH'],
['𝜤', 'I'],
['𝜥', 'K'],
['𝜦', 'L'],
['𝜧', 'M'],
['𝜨', 'N'],
['𝜩', 'KS'],
['𝜪', 'O'],
['𝜫', 'P'],
['𝜬', 'S'],
['𝜭', 'TH'],
['𝜮', 'S'],
['𝜯', 'T'],
['𝜰', 'Y'],
['𝜱', 'F'],
['𝜲', 'X'],
['𝜳', 'PS'],
['𝜴', 'O'],
['𝜵', 'D'],
['𝜶', 'a'],
['𝜷', 'v'],
['𝜸', 'g'],
['𝜹', 'd'],
['𝜺', 'e'],
['𝜻', 'z'],
['𝜼', 'i'],
['𝜽', 'th'],
['𝜾', 'i'],
['𝜿', 'k'],
['𝝀', 'l'],
['𝝁', 'm'],
['𝝂', 'n'],
['𝝃', 'ks'],
['𝝄', 'o'],
['𝝅', 'p'],
['𝝆', 'r'],
['𝝇', 's'],
['𝝈', 's'],
['𝝉', 't'],
['𝝊', 'y'],
['𝝋', 'f'],
['𝝌', 'x'],
['𝝍', 'ps'],
['𝝎', 'o'],
['𝝏', 'a'],
['𝝐', 'e'],
['𝝑', 'i'],
['𝝒', 'k'],
['𝝓', 'f'],
['𝝔', 'r'],
['𝝕', 'p'],
['𝝖', 'A'],
['𝝗', 'B'],
['𝝘', 'G'],
['𝝙', 'D'],
['𝝚', 'E'],
['𝝛', 'Z'],
['𝝜', 'I'],
['𝝝', 'TH'],
['𝝞', 'I'],
['𝝟', 'K'],
['𝝠', 'L'],
['𝝡', 'M'],
['𝝢', 'N'],
['𝝣', 'KS'],
['𝝤', 'O'],
['𝝥', 'P'],
['𝝦', 'R'],
['𝝧', 'TH'],
['𝝨', 'S'],
['𝝩', 'T'],
['𝝪', 'Y'],
['𝝫', 'F'],
['𝝬', 'X'],
['𝝭', 'PS'],
['𝝮', 'O'],
['𝝯', 'D'],
['𝝰', 'a'],
['𝝱', 'v'],
['𝝲', 'g'],
['𝝳', 'd'],
['𝝴', 'e'],
['𝝵', 'z'],
['𝝶', 'i'],
['𝝷', 'th'],
['𝝸', 'i'],
['𝝹', 'k'],
['𝝺', 'l'],
['𝝻', 'm'],
['𝝼', 'n'],
['𝝽', 'ks'],
['𝝾', 'o'],
['𝝿', 'p'],
['𝞀', 'r'],
['𝞁', 's'],
['𝞂', 's'],
['𝞃', 't'],
['𝞄', 'y'],
['𝞅', 'f'],
['𝞆', 'x'],
['𝞇', 'ps'],
['𝞈', 'o'],
['𝞉', 'a'],
['𝞊', 'e'],
['𝞋', 'i'],
['𝞌', 'k'],
['𝞍', 'f'],
['𝞎', 'r'],
['𝞏', 'p'],
['𝞐', 'A'],
['𝞑', 'V'],
['𝞒', 'G'],
['𝞓', 'D'],
['𝞔', 'E'],
['𝞕', 'Z'],
['𝞖', 'I'],
['𝞗', 'TH'],
['𝞘', 'I'],
['𝞙', 'K'],
['𝞚', 'L'],
['𝞛', 'M'],
['𝞜', 'N'],
['𝞝', 'KS'],
['𝞞', 'O'],
['𝞟', 'P'],
['𝞠', 'S'],
['𝞡', 'TH'],
['𝞢', 'S'],
['𝞣', 'T'],
['𝞤', 'Y'],
['𝞥', 'F'],
['𝞦', 'X'],
['𝞧', 'PS'],
['𝞨', 'O'],
['𝞩', 'D'],
['𝞪', 'a'],
['𝞫', 'g'],
['𝞬', 'd'],
['𝞭', 'e'],
['𝞮', 'z'],
['𝞯', 'i'],
['𝞰', 'i'],
['𝞱', 'th'],
['𝞲', 'i'],
['𝞳', 'k'],
['𝞴', 'l'],
['𝞵', 'm'],
['𝞶', 'n'],
['𝞷', 'ks'],
['𝞸', 'o'],
['𝞹', 'p'],
['𝞺', 'r'],
['𝞻', 's'],
['𝞼', 's'],
['𝞽', 't'],
['𝞾', 'y'],
['𝞿', 'f'],
['𝟀', 'x'],
['𝟁', 'ps'],
['𝟂', 'o'],
['𝟃', 'a'],
['𝟄', 'e'],
['𝟅', 'i'],
['𝟆', 'k'],
['𝟇', 'f'],
['𝟈', 'r'],
['𝟉', 'p'],
['𝟊', 'F'],
['𝟋', 'f'],
['⒜', '(a)'],
['⒝', '(b)'],
['⒞', '(c)'],
['⒟', '(d)'],
['⒠', '(e)'],
['⒡', '(f)'],
['⒢', '(g)'],
['⒣', '(h)'],
['⒤', '(i)'],
['⒥', '(j)'],
['⒦', '(k)'],
['⒧', '(l)'],
['⒨', '(m)'],
['⒩', '(n)'],
['⒪', '(o)'],
['⒫', '(p)'],
['⒬', '(q)'],
['⒭', '(r)'],
['⒮', '(s)'],
['⒯', '(t)'],
['⒰', '(u)'],
['⒱', '(v)'],
['⒲', '(w)'],
['⒳', '(x)'],
['⒴', '(y)'],
['⒵', '(z)'],
['Ⓐ', '(A)'],
['Ⓑ', '(B)'],
['Ⓒ', '(C)'],
['Ⓓ', '(D)'],
['Ⓔ', '(E)'],
['Ⓕ', '(F)'],
['Ⓖ', '(G)'],
['Ⓗ', '(H)'],
['Ⓘ', '(I)'],
['Ⓙ', '(J)'],
['Ⓚ', '(K)'],
['Ⓛ', '(L)'],
['Ⓜ', '(M)'],
['Ⓝ', '(N)'],
['Ⓞ', '(O)'],
['Ⓟ', '(P)'],
['Ⓠ', '(Q)'],
['Ⓡ', '(R)'],
['Ⓢ', '(S)'],
['Ⓣ', '(T)'],
['Ⓤ', '(U)'],
['Ⓥ', '(V)'],
['Ⓦ', '(W)'],
['Ⓧ', '(X)'],
['Ⓨ', '(Y)'],
['Ⓩ', '(Z)'],
['ⓐ', '(a)'],
['ⓑ', '(b)'],
['ⓒ', '(c)'],
['ⓓ', '(d)'],
['ⓔ', '(e)'],
['ⓕ', '(f)'],
['ⓖ', '(g)'],
['ⓗ', '(h)'],
['ⓘ', '(i)'],
['ⓙ', '(j)'],
['ⓚ', '(k)'],
['ⓛ', '(l)'],
['ⓜ', '(m)'],
['ⓝ', '(n)'],
['ⓞ', '(o)'],
['ⓟ', '(p)'],
['ⓠ', '(q)'],
['ⓡ', '(r)'],
['ⓢ', '(s)'],
['ⓣ', '(t)'],
['ⓤ', '(u)'],
['ⓥ', '(v)'],
['ⓦ', '(w)'],
['ⓧ', '(x)'],
['ⓨ', '(y)'],
['ⓩ', '(z)'],
# Numbers
['𝟎', '0'],
['𝟏', '1'],
['𝟐', '2'],
['𝟑', '3'],
['𝟒', '4'],
['𝟓', '5'],
['𝟔', '6'],
['𝟕', '7'],
['𝟖', '8'],
['𝟗', '9'],
['𝟘', '0'],
['𝟙', '1'],
['𝟚', '2'],
['𝟛', '3'],
['𝟜', '4'],
['𝟝', '5'],
['𝟞', '6'],
['𝟟', '7'],
['𝟠', '8'],
['𝟡', '9'],
['𝟢', '0'],
['𝟣', '1'],
['𝟤', '2'],
['𝟥', '3'],
['𝟦', '4'],
['𝟧', '5'],
['𝟨', '6'],
['𝟩', '7'],
['𝟪', '8'],
['𝟫', '9'],
['𝟬', '0'],
['𝟭', '1'],
['𝟮', '2'],
['𝟯', '3'],
['𝟰', '4'],
['𝟱', '5'],
['𝟲', '6'],
['𝟳', '7'],
['𝟴', '8'],
['𝟵', '9'],
['𝟶', '0'],
['𝟷', '1'],
['𝟸', '2'],
['𝟹', '3'],
['𝟺', '4'],
['𝟻', '5'],
['𝟼', '6'],
['𝟽', '7'],
['𝟾', '8'],
['𝟿', '9'],
['①', '1'],
['②', '2'],
['③', '3'],
['④', '4'],
['⑤', '5'],
['⑥', '6'],
['⑦', '7'],
['⑧', '8'],
['⑨', '9'],
['⑩', '10'],
['⑪', '11'],
['⑫', '12'],
['⑬', '13'],
['⑭', '14'],
['⑮', '15'],
['⑯', '16'],
['⑰', '17'],
['⑱', '18'],
['⑲', '19'],
['⑳', '20'],
['⑴', '1'],
['⑵', '2'],
['⑶', '3'],
['⑷', '4'],
['⑸', '5'],
['⑹', '6'],
['⑺', '7'],
['⑻', '8'],
['⑼', '9'],
['⑽', '10'],
['⑾', '11'],
['⑿', '12'],
['⒀', '13'],
['⒁', '14'],
['⒂', '15'],
['⒃', '16'],
['⒄', '17'],
['⒅', '18'],
['⒆', '19'],
['⒇', '20'],
['⒈', '1.'],
['⒉', '2.'],
['⒊', '3.'],
['⒋', '4.'],
['⒌', '5.'],
['⒍', '6.'],
['⒎', '7.'],
['⒏', '8.'],
['⒐', '9.'],
['⒑', '10.'],
['⒒', '11.'],
['⒓', '12.'],
['⒔', '13.'],
['⒕', '14.'],
['⒖', '15.'],
['⒗', '16.'],
['⒘', '17.'],
['⒙', '18.'],
['⒚', '19.'],
['⒛', '20.'],
['⓪', '0'],
['⓫', '11'],
['⓬', '12'],
['⓭', '13'],
['⓮', '14'],
['⓯', '15'],
['⓰', '16'],
['⓱', '17'],
['⓲', '18'],
['⓳', '19'],
['⓴', '20'],
['⓵', '1'],
['⓶', '2'],
['⓷', '3'],
['⓸', '4'],
['⓹', '5'],
['⓺', '6'],
['⓻', '7'],
['⓼', '8'],
['⓽', '9'],
['⓾', '10'],
['⓿', '0'],
# Punctuation
['🙰', '&'],
['🙱', '&'],
['🙲', '&'],
['🙳', '&'],
['🙴', '&'],
['🙵', '&'],
['🙶', '"'],
['🙷', '"'],
['🙸', '"'],
['‽', '?!'],
['🙹', '?!'],
['🙺', '?!'],
['🙻', '?!'],
['🙼', '/'],
['🙽', '\\'],
# Alchemy
['🜇', 'AR'],
['🜈', 'V'],
['🜉', 'V'],
['🜆', 'VR'],
['🜅', 'VF'],
['🜩', '2'],
['🜪', '5'],
['🝡', 'f'],
['🝢', 'W'],
['🝣', 'U'],
['🝧', 'V'],
['🝨', 'T'],
['🝪', 'V'],
['🝫', 'MB'],
['🝬', 'VB'],
['🝲', '3B'],
['🝳', '3B'],
# Emojis
['💯', '100'],
['🔙', 'BACK'],
['🔚', 'END'],
['🔛', 'ON!'],
['🔜', 'SOON'],
['🔝', 'TOP'],
['🔞', '18'],
['🔤', 'abc'],
['🔠', 'ABCD'],
['🔡', 'abcd'],
['🔢', '1234'],
['🔣', 'T&@%'],
['#️⃣', '#'],
['*️⃣', '*'],
['0️⃣', '0'],
['1️⃣', '1'],
['2️⃣', '2'],
['3️⃣', '3'],
['4️⃣', '4'],
['5️⃣', '5'],
['6️⃣', '6'],
['7️⃣', '7'],
['8️⃣', '8'],
['9️⃣', '9'],
['🔟', '10'],
['🅰️', 'A'],
['🅱️', 'B'],
['🆎', 'AB'],
['🆑', 'CL'],
['🅾️', 'O'],
['🅿', 'P'],
['🆘', 'SOS'],
['🅲', 'C'],
['🅳', 'D'],
['🅴', 'E'],
['🅵', 'F'],
['🅶', 'G'],
['🅷', 'H'],
['🅸', 'I'],
['🅹', 'J'],
['🅺', 'K'],
['🅻', 'L'],
['🅼', 'M'],
['🅽', 'N'],
['🆀', 'Q'],
['🆁', 'R'],
['🆂', 'S'],
['🆃', 'T'],
['🆄', 'U'],
['🆅', 'V'],
['🆆', 'W'],
['🆇', 'X'],
['🆈', 'Y'],
['🆉', 'Z'],
]
|
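A simplified sketch of how a replacement table like the one above can be applied. The actual unidecoder module in this repository may combine it with other tables and a character whitelist, so this only illustrates the data format.

def apply_replacements(text, table):
    # Apply the pairs in order; multi-character entries such as 'ъе' -> 'ye'
    # come before the single-character ones, so they are never undone.
    for source, target in table:
        text = text.replace(source, target)
    return text

print(apply_replacements('Übermaß', replacements))  # -> 'Uebermass'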
TensorFlow2/Recommendation/WideAndDeep/scripts | scripts | evaluating_benchmark | #!/bin/bash
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
usage() {
cat <<EOF
Usage: bash scripts/evaluating_benchmark.sh -g gpu
-g | --gpu (Required) Number of gpus
-b | --bs (Optional) Global batch size, default 131072
-a | --amp (Optional) Use amp
-x | --xla (Optional) Use xla
EOF
}
if [ ! -d "scripts" ] || [ ! "$(ls -A 'scripts')" ]; then
echo "You are probably calling this script from wrong directory"
usage
exit 1
fi
amp=
xla=
gpu=
bs=131072
while [ "$1" != "" ]; do
case $1 in
-g | --gpu)
shift
gpu="$1"
;;
-b | --bs)
shift
bs="$1"
;;
-a | --amp)
amp="--amp"
;;
-x | --xla)
xla="--xla"
;;
*)
usage
exit 1
;;
esac
shift
done
if [ -z "$gpu" ]; then
echo "Missing number of gpus param"
usage
exit 1
fi
if ! [ "$bs" -ge 0 ] 2>/dev/null; then
echo "Expected global batch size (${bs}) to be positive integer"
usage
exit 1
fi
if ! [ "$gpu" -ge 0 ] || [[ ! "$gpu" =~ ^(1|4|8)$ ]] 2>/dev/null; then
echo "Expected number of gpus (${gpu}) to be equal 1, 4 or 8"
usage
exit 1
fi
cmd="horovodrun -np ${gpu} sh hvd_wrapper.sh \
python main.py \
--evaluate \
--benchmark \
--benchmark_warmup_steps 500 \
--benchmark_steps 1000 \
--eval_batch_size ${bs} \
${amp} \
${xla}"
set -x
$cmd
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/data/samplers | samplers | grouped_batch_sampler | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import itertools
import torch
from torch.utils.data.sampler import BatchSampler
from torch.utils.data.sampler import Sampler
class GroupedBatchSampler(BatchSampler):
"""
Wraps another sampler to yield a mini-batch of indices.
It enforces that elements from the same group appear in groups of batch_size.
It also tries to provide mini-batches that follow an ordering as close as
possible to the ordering from the original sampler.
Arguments:
sampler (Sampler): Base sampler.
batch_size (int): Size of mini-batch.
drop_uneven (bool): If ``True``, the sampler will drop the batches whose
size is less than ``batch_size``
"""
def __init__(self, sampler, group_ids, batch_size, drop_uneven=False):
if not isinstance(sampler, Sampler):
raise ValueError(
"sampler should be an instance of "
"torch.utils.data.Sampler, but got sampler={}".format(sampler)
)
self.sampler = sampler
self.group_ids = torch.as_tensor(group_ids)
assert self.group_ids.dim() == 1
self.batch_size = batch_size
self.drop_uneven = drop_uneven
self.groups = torch.unique(self.group_ids).sort(0)[0]
self._can_reuse_batches = False
def _prepare_batches(self):
dataset_size = len(self.group_ids)
# get the sampled indices from the sampler
sampled_ids = torch.as_tensor(list(self.sampler))
# potentially not all elements of the dataset were sampled
# by the sampler (e.g., DistributedSampler).
# construct a tensor which contains -1 if the element was
# not sampled, and a non-negative number indicating the
# order where the element was sampled.
# for example, if sampled_ids = [3, 1] and dataset_size = 5,
# the order is [-1, 1, -1, 0, -1]
order = torch.full((dataset_size,), -1, dtype=torch.int64)
order[sampled_ids] = torch.arange(len(sampled_ids))
# get a mask with the elements that were sampled
mask = order >= 0
# find the elements that belong to each individual cluster
clusters = [(self.group_ids == i) & mask for i in self.groups]
# get relative order of the elements inside each cluster
# that follows the order from the sampler
relative_order = [order[cluster] for cluster in clusters]
# with the relative order, find the absolute order in the
# sampled space
permutation_ids = [s[s.sort()[1]] for s in relative_order]
# permute each cluster so that they follow the order from
# the sampler
permuted_clusters = [sampled_ids[idx] for idx in permutation_ids]
# splits each cluster in batch_size, and merge as a list of tensors
splits = [c.split(self.batch_size) for c in permuted_clusters]
merged = tuple(itertools.chain.from_iterable(splits))
# now each batch internally has the right order, but
# they are grouped by clusters. Find the permutation between
# different batches that brings them as close as possible to
# the order that we have in the sampler. For that, we will consider the
# ordering as coming from the first element of each batch, and sort
# correspondingly
first_element_of_batch = [t[0].item() for t in merged]
# get an inverse mapping from sampled indices to the position where
# they occur (as returned by the sampler)
inv_sampled_ids_map = {v: k for k, v in enumerate(sampled_ids.tolist())}
# from the first element in each batch, get a relative ordering
first_index_of_batch = torch.as_tensor(
[inv_sampled_ids_map[s] for s in first_element_of_batch]
)
# permute the batches so that they approximately follow the order
# from the sampler
permutation_order = first_index_of_batch.sort(0)[1].tolist()
# finally, permute the batches
batches = [merged[i].tolist() for i in permutation_order]
if self.drop_uneven:
kept = []
for batch in batches:
if len(batch) == self.batch_size:
kept.append(batch)
batches = kept
return batches
def __iter__(self):
if self._can_reuse_batches:
batches = self._batches
self._can_reuse_batches = False
else:
batches = self._prepare_batches()
self._batches = batches
return iter(batches)
def __len__(self):
if not hasattr(self, "_batches"):
self._batches = self._prepare_batches()
self._can_reuse_batches = True
return len(self._batches)
|
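A minimal usage sketch for GroupedBatchSampler, not taken from the repository; it assumes the class above is in scope. The group_ids values are illustrative (e.g. 0/1 for landscape vs. portrait aspect-ratio groups in Mask R-CNN style data loading).

from torch.utils.data import DataLoader
from torch.utils.data.sampler import SequentialSampler

dataset = list(range(10))                   # any indexable dataset
group_ids = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # one group id per element
sampler = SequentialSampler(dataset)
batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size=2)

for batch in batch_sampler:
    # every batch holds indices from a single group, e.g. [0, 2] then [1, 3]
    print(batch)

# Typically plugged into a DataLoader via the batch_sampler argument:
loader = DataLoader(dataset, batch_sampler=batch_sampler)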
PyTorch/Translation/GNMT/seq2seq/inference | inference | tables | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import collections
import itertools
import numpy as np
from pytablewriter import MarkdownTableWriter
def interleave(*args):
return list(itertools.chain(*zip(*args)))
class AccuracyTable:
def __init__(self, unit):
self.data = collections.defaultdict(dict)
self.unit = unit
def add(self, key, data):
self.data[key].update(data)
def write(self, title, write_math):
writer = MarkdownTableWriter()
writer.table_name = f'{title}'
main_header = ['**Batch Size**', '**Beam Size**']
data_header = []
if 'fp32' in write_math:
data_header += [f'**Accuracy - FP32 ({self.unit})**']
if 'tf32' in write_math:
data_header += [f'**Accuracy - TF32 ({self.unit})**']
if 'fp16' in write_math:
data_header += [f'**Accuracy - FP16 ({self.unit})**']
writer.headers = main_header + data_header
writer.value_matrix = []
for k, v in self.data.items():
batch_size, beam_size = k
row = [batch_size, beam_size]
if 'fp32' in write_math:
row.append(v['fp32'])
if 'tf32' in write_math:
row.append(v['tf32'])
if 'fp16' in write_math:
row.append(v['fp16'])
writer.value_matrix.append(row)
writer.write_table()
class PerformanceTable:
def __init__(self, percentiles, unit, reverse_percentiles=False):
self.percentiles = percentiles
self.data = collections.defaultdict(dict)
self.unit = unit
self.reverse_percentiles = reverse_percentiles
def add(self, key, value):
math, value = next(iter(value.items()))
value = np.array(value)
if self.reverse_percentiles:
percentiles = [100 - p for p in self.percentiles]
else:
percentiles = self.percentiles
stats = []
for p in percentiles:
val = np.percentile(value, p)
stats.append(val * self.unit_convert[self.unit])
avg = value.mean() * self.unit_convert[self.unit]
self.data[key].update({math: (avg, stats)})
def write(self, title, math, relative=None, reverse_speedup=False):
writer = MarkdownTableWriter()
writer.table_name = f'{title} - {math.upper()}'
main_header = ['**Batch Size**', '**Beam Size**']
data_header = [f'**Avg ({self.unit})**']
data_header += [f'**{p}% ({self.unit})**' for p in self.percentiles]
if relative:
speedup_header = ['**Speedup**'] * len(data_header)
data_header = interleave(data_header, speedup_header)
writer.headers = main_header + data_header
writer.value_matrix = []
for k, v in self.data.items():
batch_size, beam_size = k
avg, res_percentiles = v[math]
main = [batch_size, beam_size]
data = [avg, *res_percentiles]
if relative:
rel = self.data[k][relative]
rel_avg, rel_res_percentiles = rel
rel = [rel_avg, *rel_res_percentiles]
speedup = [d / r for (r, d) in zip(rel, data)]
if reverse_speedup:
speedup = [1 / s for s in speedup]
data = interleave(data, speedup)
writer.value_matrix.append(main + data)
writer.write_table()
class LatencyTable(PerformanceTable):
def __init__(self, percentiles, unit='ms'):
super().__init__(percentiles, unit)
self.unit_convert = {'s': 1, 'ms': 1e3, 'us': 1e6}
class ThroughputTable(PerformanceTable):
def __init__(self, percentiles, unit='tok/s', reverse_percentiles=True):
super().__init__(percentiles, unit, reverse_percentiles)
self.unit_convert = {'tok/s': 1}
|
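A short usage sketch for the tables above, not taken from the repository; the latency values are made up and only illustrate the expected call pattern (the key is a (batch_size, beam_size) tuple, and the value maps a math mode to a list of latencies in seconds).

latency_table = LatencyTable(percentiles=[90, 95, 99], unit='ms')
latency_table.add((128, 5), {'fp32': [0.180, 0.185, 0.190, 0.175]})
latency_table.add((128, 5), {'fp16': [0.110, 0.115, 0.120, 0.108]})
latency_table.write('Inference latency', math='fp16',
                    relative='fp32', reverse_speedup=True)
# write_table() prints a Markdown table with the average latency, the
# requested percentiles, and FP16-vs-FP32 speedup columns.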
TensorFlow/LanguageModeling/BERT/utils | utils | create_glue_data | # Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import json
import math
import os
import random
import modeling
import optimization
import tokenization
import six
import tensorflow as tf
import horovod.tensorflow as hvd
import time
import csv
flags = tf.flags
FLAGS = None
def extract_flags():
## Required parameters
flags.DEFINE_string(
"data_dir", None,
"The input data dir. Should contain the .tsv files (or other data files) "
"for the task.")
flags.DEFINE_string("task_name", None, "The name of the task to train.")
flags.DEFINE_string("vocab_file", None,
"The vocabulary file that the BERT model was trained on.")
flags.DEFINE_bool(
"do_lower_case", True,
"Whether to lower case the input text. Should be True for uncased "
"models and False for cased models.")
flags.DEFINE_integer(
"max_seq_length", 128,
"The maximum total input sequence length after WordPiece tokenization. "
"Sequences longer than this will be truncated, and sequences shorter "
"than this will be padded.")
flags.DEFINE_bool(
"verbose_logging", False,
"If true, all of the warnings related to data processing will be printed. "
"A number of warnings are expected for a normal SQuAD evaluation.")
flags.mark_flag_as_required("data_dir")
flags.mark_flag_as_required("task_name")
flags.mark_flag_as_required("vocab_file")
return flags.FLAGS
class InputExample(object):
"""A single training/test example for simple sequence classification."""
def __init__(self, guid, text_a, text_b=None, label=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
text_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
text_b: (Optional) string. The untokenized text of the second sequence.
Must only be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.text_a = text_a
self.text_b = text_b
self.label = label
class PaddingInputExample(object):
"""Fake example so the num input examples is a multiple of the batch size.
When running eval/predict on the TPU, we need to pad the number of examples
to be a multiple of the batch size, because the TPU requires a fixed batch
size. The alternative is to drop the last batch, which is bad because it means
the entire output data won't be generated.
We use this class instead of `None` because treating `None` as padding
batches could cause silent errors.
"""
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self,
input_ids,
input_mask,
segment_ids,
label_id,
is_real_example=True):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
self.is_real_example = is_real_example
class DataProcessor(object):
"""Base class for data converters for sequence classification data sets."""
def get_train_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the train set."""
raise NotImplementedError()
def get_dev_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the dev set."""
raise NotImplementedError()
def get_test_examples(self, data_dir):
"""Gets a collection of `InputExample`s for prediction."""
raise NotImplementedError()
def get_labels(self):
"""Gets the list of labels for this data set."""
raise NotImplementedError()
@classmethod
def _read_tsv(cls, input_file, quotechar=None):
"""Reads a tab separated value file."""
with tf.gfile.Open(input_file, "r") as f:
reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
lines = []
for line in reader:
lines.append(line)
return lines
class XnliProcessor(DataProcessor):
"""Processor for the XNLI data set."""
def __init__(self):
self.language = "zh"
def get_train_examples(self, data_dir):
"""See base class."""
lines = self._read_tsv(
os.path.join(data_dir, "multinli",
"multinli.train.%s.tsv" % self.language))
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "train-%d" % (i)
text_a = tokenization.convert_to_unicode(line[0])
text_b = tokenization.convert_to_unicode(line[1])
label = tokenization.convert_to_unicode(line[2])
if label == tokenization.convert_to_unicode("contradictory"):
label = tokenization.convert_to_unicode("contradiction")
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
def get_dev_examples(self, data_dir):
"""See base class."""
lines = self._read_tsv(os.path.join(data_dir, "xnli.dev.tsv"))
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "dev-%d" % (i)
language = tokenization.convert_to_unicode(line[0])
if language != tokenization.convert_to_unicode(self.language):
continue
text_a = tokenization.convert_to_unicode(line[6])
text_b = tokenization.convert_to_unicode(line[7])
label = tokenization.convert_to_unicode(line[1])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
def get_labels(self):
"""See base class."""
return ["contradiction", "entailment", "neutral"]
class MnliProcessor(DataProcessor):
"""Processor for the MultiNLI data set (GLUE version)."""
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")),
"dev_matched")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "test_matched.tsv")), "test")
def get_labels(self):
"""See base class."""
return ["contradiction", "entailment", "neutral"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0]))
text_a = tokenization.convert_to_unicode(line[8])
text_b = tokenization.convert_to_unicode(line[9])
if set_type == "test":
label = "contradiction"
else:
label = tokenization.convert_to_unicode(line[-1])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
class MrpcProcessor(DataProcessor):
"""Processor for the MRPC data set (GLUE version)."""
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "%s-%s" % (set_type, i)
text_a = tokenization.convert_to_unicode(line[3])
text_b = tokenization.convert_to_unicode(line[4])
if set_type == "test":
label = "0"
else:
label = tokenization.convert_to_unicode(line[0])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
class ColaProcessor(DataProcessor):
"""Processor for the CoLA data set (GLUE version)."""
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
# Only the test set has a header
if set_type == "test" and i == 0:
continue
guid = "%s-%s" % (set_type, i)
if set_type == "test":
text_a = tokenization.convert_to_unicode(line[1])
label = "0"
else:
text_a = tokenization.convert_to_unicode(line[3])
label = tokenization.convert_to_unicode(line[1])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
return examples
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
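  # Worked example: with len(tokens_a) == 10, len(tokens_b) == 3 and
  # max_length == 8, five tokens are popped from tokens_a, leaving lengths
  # (5, 3).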
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
def convert_single_example(ex_index, example, label_list, max_seq_length,
tokenizer, verbose_logging=False):
"""Converts a single `InputExample` into a single `InputFeatures`."""
if isinstance(example, PaddingInputExample):
return InputFeatures(
input_ids=[0] * max_seq_length,
input_mask=[0] * max_seq_length,
segment_ids=[0] * max_seq_length,
label_id=0,
is_real_example=False)
label_map = {}
for (i, label) in enumerate(label_list):
label_map[label] = i
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label_map[example.label]
if ex_index < 5 and verbose_logging:
tf.compat.v1.logging.info("*** Example ***")
tf.compat.v1.logging.info("guid: %s" % (example.guid))
tf.compat.v1.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.compat.v1.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
tf.compat.v1.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
tf.compat.v1.logging.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
tf.compat.v1.logging.info("label: %s (id = %d)" % (example.label, label_id))
feature = InputFeatures(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id,
is_real_example=True)
return feature
# This function is not used by this file but is still used by the Colab and
# people who depend on it.
def convert_examples_to_features(examples, label_list, max_seq_length,
tokenizer):
"""Convert a set of `InputExample`s to a list of `InputFeatures`."""
features = []
for (ex_index, example) in enumerate(examples):
if ex_index % 10000 == 0:
tf.compat.v1.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, label_list,
max_seq_length, tokenizer, FLAGS.verbose_logging)
features.append(feature)
return features
def file_based_convert_examples_to_features(
examples, label_list, max_seq_length, tokenizer, output_file):
"""Convert a set of `InputExample`s to a TFRecord file."""
writer = tf.python_io.TFRecordWriter(output_file)
for (ex_index, example) in enumerate(examples):
if ex_index % 10000 == 0:
tf.compat.v1.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, label_list,
max_seq_length, tokenizer)
def create_int_feature(values):
f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
return f
features = collections.OrderedDict()
features["input_ids"] = create_int_feature(feature.input_ids)
features["input_mask"] = create_int_feature(feature.input_mask)
features["segment_ids"] = create_int_feature(feature.segment_ids)
features["label_ids"] = create_int_feature([feature.label_id])
features["is_real_example"] = create_int_feature(
[int(feature.is_real_example)])
tf_example = tf.train.Example(features=tf.train.Features(feature=features))
writer.write(tf_example.SerializeToString())
writer.close()
def main():
processors = {
"cola": ColaProcessor,
"mnli": MnliProcessor,
"mrpc": MrpcProcessor,
"xnli": XnliProcessor,
}
task_name = FLAGS.task_name.lower()
if task_name not in processors:
raise ValueError("Task not found: %s" % (task_name))
processor = processors[task_name]()
label_list = processor.get_labels()
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
  tf.gfile.MakeDirs(os.path.join(FLAGS.data_dir, "final_tfrecords_sharded"))
train_examples = processor.get_train_examples(FLAGS.data_dir)
train_file = os.path.join(FLAGS.data_dir, "final_tfrecords_sharded/" + task_name + "train.tf_record")
file_based_convert_examples_to_features(
train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
eval_examples = processor.get_dev_examples(FLAGS.data_dir)
eval_file = os.path.join(FLAGS.data_dir, "final_tfrecords_sharded/" + task_name + "eval.tf_record")
file_based_convert_examples_to_features(
eval_examples, label_list, FLAGS.max_seq_length, tokenizer, eval_file)
predict_examples = processor.get_test_examples(FLAGS.data_dir)
predict_file = os.path.join(FLAGS.data_dir, "final_tfrecords_sharded/" + task_name + "predict.tf_record")
file_based_convert_examples_to_features(predict_examples, label_list,
FLAGS.max_seq_length, tokenizer,
predict_file)
if __name__ == "__main__":
main() |
TensorFlow/LanguageModeling/BERT/data | data | create_biobert_datasets_from_start | #!/bin/bash
# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
export BERT_PREP_WORKING_DIR="${BERT_PREP_WORKING_DIR}"
# Download
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action download --dataset pubmed_baseline
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action download --dataset google_pretrained_weights # Includes vocab
# Properly format the text files
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action text_formatting --dataset pubmed_baseline
# Shard the text files
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action sharding --dataset pubmed_baseline
### BERT BASE
## UNCASED
# Create TFRecord files Phase 1
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action create_tfrecord_files --dataset pubmed_baseline --max_seq_length 128 \
--max_predictions_per_seq 20 --vocab_file ${BERT_PREP_WORKING_DIR}/download/google_pretrained_weights/uncased_L-12_H-768_A-12/vocab.txt
# Create TFRecord files Phase 2
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action create_tfrecord_files --dataset pubmed_baseline --max_seq_length 512 \
--max_predictions_per_seq 80 --vocab_file ${BERT_PREP_WORKING_DIR}/download/google_pretrained_weights/uncased_L-12_H-768_A-12/vocab.txt
## CASED
# Create TFRecord files Phase 1
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action create_tfrecord_files --dataset pubmed_baseline --max_seq_length 128 \
--max_predictions_per_seq 20 --vocab_file ${BERT_PREP_WORKING_DIR}/download/google_pretrained_weights/cased_L-12_H-768_A-12/vocab.txt \
--do_lower_case=0
# Create TFRecord files Phase 2
python3 ${BERT_PREP_WORKING_DIR}/bertPrep.py --action create_tfrecord_files --dataset pubmed_baseline --max_seq_length 512 \
--max_predictions_per_seq 80 --vocab_file ${BERT_PREP_WORKING_DIR}/download/google_pretrained_weights/cased_L-12_H-768_A-12/vocab.txt \
--do_lower_case=0
|
PyTorch/SpeechSynthesis/Tacotron2/platform | platform | DGXA100_tacotron2_TF32_1NGPU_train | mkdir -p output
python train.py -m Tacotron2 -o output/ -lr 1e-3 --epochs 1501 -bs 128 --weight-decay 1e-6 --grad-clip-thresh 1.0 --cudnn-enabled --load-mel-from-disk --training-files=filelists/ljs_mel_text_train_filelist.txt --validation-files=filelists/ljs_mel_text_val_filelist.txt --log-file nvlog.json --anneal-steps 500 1000 1500 --anneal-factor 0.1
|
TensorFlow2/Recommendation/DLRM_and_DCNv2 | DLRM_and_DCNv2 | README | # DLRM and DCNv2 for TensorFlow 2
This repository provides recipes to train and deploy two ranking models – DLRM and DCNv2.
This document provides instructions on how to run those models and a description of the features implemented.
Detailed instructions for reproducing, as well as benchmark results and descriptions of the respective architectures, can be found in:
* [doc/DLRM.md](doc/DLRM.md) for DLRM
* [doc/DCNv2.md](doc/DCNv2.md) for DCNv2
## Table Of Contents
* [Overview](#overview)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
* [Mixed precision training](#mixed-precision-training)
* [Enabling mixed precision](#enabling-mixed-precision)
* [Enabling TF32](#enabling-tf32)
* [Hybrid-parallel training with Merlin Distributed Embeddings](#hybrid-parallel-training-with-merlin-distributed-embeddings)
* [Training very large embedding tables](#training-very-large-embedding-tables)
* [Multi-node training](#multi-node-training)
* [Preprocessing on GPU with Spark 3](#preprocessing-on-gpu-with-spark-3)
* [BYO dataset functionality overview](#byo-dataset-functionality-overview)
* [Setup](#setup)
* [Requirements](#requirements)
* [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
* [Parameters](#parameters)
* [Command-line options](#command-line-options)
* [Getting the Data](#getting-the-data)
* [Inference deployment](#inference-deployment)
* [Release notes](#release-notes)
* [Changelog](#changelog)
## Overview
This directory contains Deep Learning Recommendation Model (DLRM) and Deep Cross Network version 2 (DCNv2).
Both are recommendation models designed to use categorical and numerical inputs.
Using the scripts provided here, you can efficiently train models too large to fit into a single GPU.
This is because we use a hybrid-parallel approach, which combines model parallelism with data parallelism for
different parts of the neural network.
This is explained in detail in the [next section](#hybrid-parallel-training-with-merlin-distributed-embeddings).
Using DLRM or DCNv2, you can train a high-quality general model for recommendations.
Both models in this directory are trained with mixed precision using Tensor Cores on NVIDIA Volta, NVIDIA Turing, and NVIDIA Ampere GPU architectures.
Therefore, researchers can get results 2x faster than training without Tensor Cores while experiencing the
benefits of mixed precision training. These models are tested against each NGC monthly container
release to ensure consistent accuracy and performance over time.
### Default configuration
The following features were implemented:
- general
- static loss scaling for Tensor Cores (mixed precision) training
- hybrid-parallel multi-GPU training using Merlin Distributed Embeddings
- inference
- inference using Merlin HPS, Triton ensembles and TensorRT
- preprocessing
- dataset preprocessing using Spark 3 on GPUs
### Feature support matrix
The following features are supported by this model:
| Feature | DLRM and DCNv2
|----------------------|--------------------------
|Hybrid-parallel training with Merlin Distributed Embeddings | Yes
|Multi-node training | Yes
|Triton inference with TensorRT and Merlin Hierarchical Parameter Server | Yes
|Automatic mixed precision (AMP) | Yes
|XLA | Yes
|Preprocessing on GPU with Spark 3| Yes
|Inference using NVIDIA Triton | Yes
#### Features
**Automatic Mixed Precision (AMP)**
Enables mixed precision training without any changes to the code-base by performing automatic graph rewrites and loss scaling controlled by an environmental variable.
**XLA**
The training script supports a `--xla` flag. It can be used to enable XLA JIT compilation. Currently, we use [XLA Lite](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-user-guide/index.html#xla-lite). It delivers a steady 10-30% performance boost depending on your hardware platform, precision, and the number of GPUs. It is turned off by default.
**Horovod**
Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, refer to the Horovod [official repository](https://github.com/horovod/horovod).
**Hybrid-parallel training with Merlin Distributed Embeddings**
Our model uses Merlin Distributed Embeddings to implement efficient multi-GPU training.
For details, refer to the example sources in this repository or refer to the TensorFlow tutorial.
For a detailed description of our multi-GPU approach, visit this [section](#hybrid-parallel-training-with-merlin-distributed-embeddings).
**Multi-node training**
This repository supports multi-node training. For more information, refer to the [multinode section](#multi-node-training)
**Merlin Hierarchical Parameter server (HPS)**
This repository supports inference with Merlin HPS. For more information, refer to [doc/inference.md](doc/inference.md).
### Mixed precision training
Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores) in NVIDIA Volta, and following with both the NVIDIA Turing and NVIDIA Ampere architectures, significant training speedups are experienced by switching to mixed precision – up to 3.4x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in [CUDA 8](https://devblogs.nvidia.com/parallelforall/tag/fp16/) in the NVIDIA Deep Learning SDK.
For information about:
- How to train using mixed precision, refer to the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) documentation.
- Techniques used for mixed precision training, refer to the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
#### Enabling mixed precision
Mixed precision training is turned off by default. To turn it on, issue the `--amp` flag to the `dlrm.py` or `dcnv2.py` script.
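For example, a minimal single-GPU invocation with AMP (and, optionally, XLA) enabled looks like the following; the dataset path is a placeholder:

```
python dlrm.py --dataset_path /data/dlrm/ --amp --xla
```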
#### Enabling TF32
TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
### Hybrid-parallel training with Merlin Distributed Embeddings
Many recommendation models contain very large embedding tables. As a result, the model is often too large to fit onto a single device.
This could be easily solved by training in a model-parallel way, using either the CPU or other GPUs as "memory donors."
However, this approach is suboptimal as the "memory donor" devices' compute is not utilized.
In this repository, we use the model-parallel approach for the Embedding Tables while employing a usual data-parallel approach
for the more compute-intensive MLPs and Dot Interaction layer. This way, we can train models much larger than what would normally fit into
a single GPU while at the same time making the training faster by using multiple GPUs. We call this approach hybrid-parallel training.
To implement this approach, we use the [Merlin Distributed Embeddings](https://github.com/NVIDIA-Merlin/distributed-embeddings) library.
It provides a scalable model parallel wrapper called `distributed_embeddings.dist_model_parallel`. This wrapper automatically distributes embedding tables to multiple GPUs.
This way, embeddings can be scaled beyond a single GPU’s memory capacity without
complex code to handle cross-worker communication.
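The snippet below is a minimal sketch of this pattern based on the library's documented usage, not on the code in this repository; the module path, class name, and table sizes are illustrative and may differ between library versions:

```
import tensorflow as tf
from distributed_embeddings.python.layers import dist_model_parallel as dmp

class ShardedEmbeddings(tf.keras.Model):
    def __init__(self, table_sizes, embedding_dim=128):
        super().__init__()
        # One Keras Embedding per categorical feature; the wrapper decides which
        # GPU each table (or slice of a very large table) is placed on.
        tables = [tf.keras.layers.Embedding(rows, embedding_dim) for rows in table_sizes]
        self.embeddings = dmp.DistributedEmbedding(tables)

    def call(self, categorical_inputs):
        # Returns one embedding tensor per feature; the cross-worker exchange
        # is handled inside the wrapper.
        return self.embeddings(categorical_inputs)
```

The dense part of the model is then built and trained data-parallel as usual.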
Under the hood, Merlin Distributed Embeddings uses a
specific multi-GPU communication pattern called
[all-2-all](https://en.wikipedia.org/wiki/All-to-all_\(parallel_pattern\)) to transition from model-parallel to data-parallel
paradigm. In the [original DLRM whitepaper](https://arxiv.org/abs/1906.00091), this is referred to as "butterfly shuffle."
An example model using Hybrid Parallelism is shown in Figure 2. The compute-intensive dense layers are run in data-parallel
mode. The smaller embedding tables are run model-parallel, so each smaller table is placed entirely on a single device.
This is not suitable for larger tables that need more memory than can be provided by a single device. Therefore,
those large tables are split into multiple parts and each part is run on a different GPU.
<p align="center">
<img width="100%" src="./doc/img/hybrid_parallel.svg" />
<br>
Figure 2. Hybrid parallelism with Merlin Distributed Embeddings.
</p>
In this repository, for both DLRM and DCNv2,
we train models of three sizes: "small" (15.6 GiB), "large" (84.9 GiB), and "extra large" (421 GiB).
The "small" model can be trained on a single V100-32GB GPU. The "large" model needs at least 8xV100-32GB GPUs,
but each table can fit on a single GPU.
The "extra large" model, on the other hand, contains tables that do not fit into a single device and will be automatically
split and stored across multiple GPUs by Merlin Distributed Embeddings.
#### Training very large embedding tables
We tested this approach by training a DLRM model on the Criteo Terabyte dataset with the frequency limiting option turned off (set to zero).
The weights of the resulting model take 421 GiB. The largest table weighs 140 GiB.
Here are the commands you can use to reproduce this:
```
# build and run the preprocessing container as in the Quick Start Guide
# then when preprocessing set the frequency limit to 0:
./prepare_dataset.sh DGX2 0
# build and run the training container same as in the Quick Start Guide
# then append options necessary for training very large embedding tables:
horovodrun -np 8 -H localhost:8 --mpi-args=--oversubscribe numactl --interleave=all -- python -u dlrm.py --dataset_path /data/dlrm/ --amp --xla
```
When using this method on a DGX A100 with 8 A100-80GB GPUs and a large-enough dataset, it is possible to train a single embedding table of up to 600 GB. You can also use multi-node training (described below) to train even larger recommender systems.
#### Multi-node training
Multi-node training is supported. Depending on the exact interconnect hardware and model configuration,
you might experience only a modest speedup with multi-node.
Multi-node training can also be used to train larger models.
For example, to train a 1.68 TB variant of DLRM on multi-node, you can run:
```
cmd='numactl --interleave=all -- python -u dlrm.py --dataset_path /data/dlrm/full_criteo_data --amp --xla\
--embedding_dim 512 --bottom_mlp_dims 512,256,512' \
srun_flags='--mpi=pmix' \
cont=nvidia_dlrm_tf \
mounts=/data/dlrm:/data/dlrm \
sbatch -n 32 -N 4 -t 00:20:00 slurm_multinode.sh
```
### Preprocessing on GPU with Spark 3
Refer to the [preprocessing documentation](doc/criteo_dataset.md#advanced) for a detailed description of the Spark 3 GPU functionality.
### BYO dataset functionality overview
Refer to the [BYO Dataset summary](doc/multidataset.md) for details.
### Inference using NVIDIA Triton
The [deployment](deployment) directory contains two examples of deploying recommender models larger than single GPU memory. Both use the NVIDIA Triton Inference Server.
1. For the example with Merlin Hierarchical Parameter Server and TensorRT,
refer to [detailed documentation](doc/merlin_hps_inference.md)
2. For the example with TensorFlow SavedModel and TensorRT,
   refer to [detailed documentation](doc/tensorflow_inference.md)
## Setup
The following section lists the requirements for training DLRM and DCNv2.
### Requirements
This repository contains Dockerfile that extends the TensorFlow 2 NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- [TensorFlow 2 23.02-py3](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow/tags) NGC container
- Supported GPUs:
- [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
- [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/)
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, refer to the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation:
- [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
- [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#accessing_registry)
- [Running TensorFlow](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/running.html#running)
For those unable to use the TensorFlow NGC container, to set up the required environment or create your own container, refer to the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html).
## Advanced
The following sections provide more details of the dataset, running training and inference, and the training results.
### Scripts and sample code
These are the important modules in this repository:
- `dlrm.py` - The script for training DLRM. Wrapper around `main.py`.
- `dcnv2.py` - The script for training DCNv2. Wrapper around `main.py`.
- `main.py` - Contains common code for training and evaluating DLRM and DCNv2 (e.g., the training loop)
- `Dockerfile` - defines the docker image used for training DLRM and DCNv2.
- `nn/model.py` - Contains the definition of the full neural network, which can be used to create DLRM and DCNv2.
- `nn/dense_model.py` - Defines the "dense" part of DLRM and DCNv2 (Bottom MLP, Interaction, Top MLP).
- `nn/sparse_model.py` - Defines the "sparse" part of DLRM and DCNv2 (Embedding layers).
- `nn/trainer.py` - Defines a single training step (forward, backward, weight update).
- `nn/embedding.py` - Implementations of the embedding layers.
- `nn/lr_scheduler.py` - Defines a TensorFlow learning rate scheduler that supports learning rate warmup and polynomial decay.
- `deployment/deploy.py` - The script used for creating the Triton model store for inference.
- `deployment/evaluate_latency.py` - The script used to evaluate the latency of deployed Triton DLRM and DCNv2 models.
- `deployment/evaluate_accuracy.py` - The script used to evaluate the accuracy of deployed Triton DLRM and DCNv2 models.
- `dataloading/dataloader.py` - Handles defining the dataset objects based on command-line flags.
- `dataloading/datasets.py` - Defines the `TfRawBinaryDataset` class responsible for storing and loading the training data.
- `preproc` - directory containing source code for preprocessing the Criteo 1TB Dataset.
- `slurm_multinode.sh` - Example batch script for multi-node training on SLURM clusters.
- `tensorflow-dot-based-interact` - A directory with a set of custom CUDA kernels. They provide fast implementations of the dot-interaction operation for various precisions and hardware platforms.
- `utils.py` - General utilities, such as a timer used for taking performance measurements.
### Parameters
The table below lists the most important command-line parameters of the `main.py` script.
| Scope| parameter| Comment| Default Value |
| ----- | --- | ---- | ---- |
|datasets|dataset_path|Path to the JSON file with the sizes of embedding tables| |
|function|mode| Choose "train" to train the model, "inference" to benchmark inference and "eval" to run validation| train|
|optimizations|amp| Enable automatic mixed precision| False
|optimizations|xla| Enable XLA| False|
|hyperparameters|batch_size| Batch size used for training|65536|
|hyperparameters|epochs| Number of epochs to train for|1|
|hyperparameters|optimizer| Optimization algorithm for training |SGD|
|hyperparameters|evals_per_epoch| Number of evaluations per epoch|1|
|hyperparameters|valid_batch_size| Batch size used for validation|65536|
|hyperparameters|max_steps| Stop the training/inference after this many optimization steps|-1|
|checkpointing|restore_checkpoint_path| Path from which to restore a checkpoint before training|None|
|checkpointing|save_checkpoint_path| Path to which to save a checkpoint file at the end of the training|None|
|debugging|run_eagerly| Disable all tf.function decorators for debugging|False|
|debugging|print_freq| Number of steps between debug prints|1000|
|debugging|max_steps| Exit early after performing a prescribed number of steps|None|
### Command-line options
The training script supports a number of command-line flags.
You can get the descriptions of those, for example, by running `python dlrm.py --help`.
### Getting the Data
Refer to:
* [doc/criteo_dataset.md](doc/criteo_dataset.md) for information on how to run on the Criteo 1TB dataset.
* [doc/multidataset.md](doc/multidataset.md) for information on training with your own dataset.
## Release notes
We’re constantly refining and improving our performance on AI and HPC workloads, even on the same hardware, with frequent updates to our software stack. For our latest performance data, refer to these pages for [AI](https://developer.nvidia.com/deep-learning-performance-training-inference) and [HPC](https://developer.nvidia.com/hpc-application-performance) benchmarks.
### Changelog
June 2023
- Support and performance numbers for DCNv2
- Support inference deployment using NVIDIA Merlin HPS, NVIDIA Triton, and NVIDIA TensorRT for DLRM and DCNv2
- Major refactoring and usability improvements
July 2022
- Start using Merlin Distributed Embeddings
March 2022
- Major performance improvements
- Support for BYO dataset
March 2021
- Initial release
|
PaddlePaddle/LanguageModeling/BERT | BERT | __init__ | # Copyright (c) 2022 NVIDIA Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
TensorFlow2/Recommendation/WideAndDeep/triton/runner | runner | exceptions | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class RunnerException(Exception):
"""
Runner Exception
"""
def __init__(self, message: str):
self._message = message
def __str__(self):
return self._message
@property
def message(self):
"""Get the exception message.
Returns
-------
str
The message associated with this exception, or None if no message.
"""
return self._message
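# Example usage (illustrative message only):
#   raise RunnerException("Triton server failed to start within the timeout")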
|
TensorFlow/Detection/SSD/models/research/object_detection/predictors/heads | heads | box_head | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Box Head.
Contains Box prediction head classes for different meta architectures.
All the box prediction heads have a predict function that receives the
`features` as the first argument and returns `box_encodings`.
"""
import functools
import tensorflow as tf
from object_detection.predictors.heads import head
slim = tf.contrib.slim
class MaskRCNNBoxHead(head.Head):
"""Box prediction head.
Please refer to Mask RCNN paper:
https://arxiv.org/abs/1703.06870
"""
def __init__(self,
is_training,
num_classes,
fc_hyperparams_fn,
use_dropout,
dropout_keep_prob,
box_code_size,
share_box_across_classes=False):
"""Constructor.
Args:
is_training: Indicates whether the BoxPredictor is in training mode.
num_classes: number of classes. Note that num_classes *does not*
include the background category, so if groundtruth labels take values
in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
assigned classification targets can range from {0,... K}).
fc_hyperparams_fn: A function to generate tf-slim arg_scope with
hyperparameters for fully connected ops.
use_dropout: Option to use dropout or not. Note that a single dropout
op is applied here prior to both box and class predictions, which stands
in contrast to the ConvolutionalBoxPredictor below.
dropout_keep_prob: Keep probability for dropout.
This is only used if use_dropout is True.
box_code_size: Size of encoding for each box.
share_box_across_classes: Whether to share boxes across classes rather
than use a different box for each class.
"""
super(MaskRCNNBoxHead, self).__init__()
self._is_training = is_training
self._num_classes = num_classes
self._fc_hyperparams_fn = fc_hyperparams_fn
self._use_dropout = use_dropout
self._dropout_keep_prob = dropout_keep_prob
self._box_code_size = box_code_size
self._share_box_across_classes = share_box_across_classes
def predict(self, features, num_predictions_per_location=1):
"""Predicts boxes.
Args:
features: A float tensor of shape [batch_size, height, width,
channels] containing features for a batch of images.
num_predictions_per_location: Int containing number of predictions per
location.
Returns:
box_encodings: A float tensor of shape
[batch_size, 1, num_classes, code_size] representing the location of the
objects.
Raises:
ValueError: If num_predictions_per_location is not 1.
"""
if num_predictions_per_location != 1:
raise ValueError('Only num_predictions_per_location=1 is supported')
spatial_averaged_roi_pooled_features = tf.reduce_mean(
features, [1, 2], keep_dims=True, name='AvgPool')
flattened_roi_pooled_features = slim.flatten(
spatial_averaged_roi_pooled_features)
if self._use_dropout:
flattened_roi_pooled_features = slim.dropout(
flattened_roi_pooled_features,
keep_prob=self._dropout_keep_prob,
is_training=self._is_training)
number_of_boxes = 1
if not self._share_box_across_classes:
number_of_boxes = self._num_classes
with slim.arg_scope(self._fc_hyperparams_fn()):
box_encodings = slim.fully_connected(
flattened_roi_pooled_features,
number_of_boxes * self._box_code_size,
activation_fn=None,
scope='BoxEncodingPredictor')
box_encodings = tf.reshape(box_encodings,
[-1, 1, number_of_boxes, self._box_code_size])
return box_encodings
class ConvolutionalBoxHead(head.Head):
"""Convolutional box prediction head."""
def __init__(self,
is_training,
box_code_size,
kernel_size,
use_depthwise=False):
"""Constructor.
Args:
is_training: Indicates whether the BoxPredictor is in training mode.
box_code_size: Size of encoding for each box.
kernel_size: Size of final convolution kernel. If the
spatial resolution of the feature map is smaller than the kernel size,
then the kernel size is automatically set to be
min(feature_width, feature_height).
use_depthwise: Whether to use depthwise convolutions for prediction
steps. Default is False.
Raises:
ValueError: if min_depth > max_depth.
"""
super(ConvolutionalBoxHead, self).__init__()
self._is_training = is_training
self._box_code_size = box_code_size
self._kernel_size = kernel_size
self._use_depthwise = use_depthwise
def predict(self, features, num_predictions_per_location):
"""Predicts boxes.
Args:
features: A float tensor of shape [batch_size, height, width, channels]
containing image features.
num_predictions_per_location: Number of box predictions to be made per
spatial location. Int specifying number of boxes per location.
Returns:
box_encodings: A float tensors of shape
[batch_size, num_anchors, q, code_size] representing the location of
the objects, where q is 1 or the number of classes.
"""
net = features
if self._use_depthwise:
box_encodings = slim.separable_conv2d(
net, None, [self._kernel_size, self._kernel_size],
padding='SAME', depth_multiplier=1, stride=1,
rate=1, scope='BoxEncodingPredictor_depthwise')
box_encodings = slim.conv2d(
box_encodings,
num_predictions_per_location * self._box_code_size, [1, 1],
activation_fn=None,
normalizer_fn=None,
normalizer_params=None,
scope='BoxEncodingPredictor')
else:
box_encodings = slim.conv2d(
net, num_predictions_per_location * self._box_code_size,
[self._kernel_size, self._kernel_size],
activation_fn=None,
normalizer_fn=None,
normalizer_params=None,
scope='BoxEncodingPredictor')
batch_size = features.get_shape().as_list()[0]
if batch_size is None:
batch_size = tf.shape(features)[0]
box_encodings = tf.reshape(box_encodings,
[batch_size, -1, 1, self._box_code_size])
return box_encodings
# TODO(alirezafathi): See if possible to unify Weight Shared with regular
# convolutional box head.
class WeightSharedConvolutionalBoxHead(head.Head):
"""Weight shared convolutional box prediction head.
This head allows sharing the same set of parameters (weights) when called more
then once on different feature maps.
"""
def __init__(self,
box_code_size,
kernel_size=3,
use_depthwise=False,
box_encodings_clip_range=None):
"""Constructor.
Args:
box_code_size: Size of encoding for each box.
kernel_size: Size of final convolution kernel.
use_depthwise: Whether to use depthwise convolutions for prediction steps.
Default is False.
box_encodings_clip_range: Min and max values for clipping box_encodings.
"""
super(WeightSharedConvolutionalBoxHead, self).__init__()
self._box_code_size = box_code_size
self._kernel_size = kernel_size
self._use_depthwise = use_depthwise
self._box_encodings_clip_range = box_encodings_clip_range
def predict(self, features, num_predictions_per_location):
"""Predicts boxes.
Args:
features: A float tensor of shape [batch_size, height, width, channels]
containing image features.
num_predictions_per_location: Number of box predictions to be made per
spatial location.
Returns:
box_encodings: A float tensor of shape
[batch_size, num_anchors, code_size] representing the location of
the objects.
"""
box_encodings_net = features
if self._use_depthwise:
conv_op = functools.partial(slim.separable_conv2d, depth_multiplier=1)
else:
conv_op = slim.conv2d
box_encodings = conv_op(
box_encodings_net,
num_predictions_per_location * self._box_code_size,
[self._kernel_size, self._kernel_size],
activation_fn=None, stride=1, padding='SAME',
normalizer_fn=None,
scope='BoxPredictor')
batch_size = features.get_shape().as_list()[0]
if batch_size is None:
batch_size = tf.shape(features)[0]
# Clipping the box encodings to make the inference graph TPU friendly.
if self._box_encodings_clip_range is not None:
box_encodings = tf.clip_by_value(
box_encodings, self._box_encodings_clip_range.min,
self._box_encodings_clip_range.max)
box_encodings = tf.reshape(box_encodings,
[batch_size, -1, self._box_code_size])
return box_encodings
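# Shape sketch (illustrative numbers, not taken from this file): for `features`
# of shape [2, 19, 19, 256], num_predictions_per_location=6 and box_code_size=4,
# ConvolutionalBoxHead.predict returns a tensor of shape [2, 19*19*6, 1, 4] and
# WeightSharedConvolutionalBoxHead.predict returns [2, 19*19*6, 4].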
|
PyTorch/SpeechRecognition/Jasper/triton/pytorch | pytorch | utils | # *****************************************************************************
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# *****************************************************************************
import tensorrt as trt
import torch
from collections import Counter
import json
import logging
triton_type_to_torch_type = {
'TYPE_BOOL': torch.bool,
'TYPE_INT8': torch.int8,
'TYPE_INT16': torch.int16,
'TYPE_INT32': torch.int32,
'TYPE_INT64': torch.int64,
'TYPE_UINT8': torch.uint8,
'TYPE_FP16': torch.float16,
'TYPE_FP32': torch.float32,
'TYPE_FP64': torch.float64
}
torch_type_to_triton_type = {
torch.bool: 'TYPE_BOOL',
torch.int8: 'TYPE_INT8',
torch.int16: 'TYPE_INT16',
torch.int32: 'TYPE_INT32',
torch.int64: 'TYPE_INT64',
torch.uint8: 'TYPE_UINT8',
torch.float16: 'TYPE_FP16',
torch.float32: 'TYPE_FP32',
torch.float64: 'TYPE_FP64'
}
def build_tensorrt_engine(model_file, shapes, max_workspace_size,
max_batch_size, fp16_mode):
''' takes a path to an onnx file, and shape information, returns a tensorrt engine
:: model_file :: path to an onnx model
    :: shapes :: list of dicts, each giving the name and the min, opt and max shape for one input of the tensorrt engine
'''
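    # `shapes` example (tensor name and dimensions are illustrative only):
    #   [{'name': 'input__0', 'min': (1, 64, 64), 'opt': (8, 64, 256), 'max': (16, 64, 1024)}]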
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
builder.fp16_mode = fp16_mode
builder.max_batch_size = max_batch_size
#
config = builder.create_builder_config()
config.max_workspace_size = max_workspace_size
if fp16_mode:
config.flags |= 1 << int(trt.BuilderFlag.FP16)
profile = builder.create_optimization_profile()
for s in shapes:
profile.set_shape(s['name'], min=s['min'], opt=s['opt'], max=s['max'])
config.add_optimization_profile(profile)
explicit_batch = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(explicit_batch)
#
with trt.OnnxParser(network, TRT_LOGGER) as parser:
with open(model_file, 'rb') as model:
parser.parse(model.read())
for i in range(parser.num_errors):
print("[Converter error]: OnnxParser:", parser.get_error(i))
engine = builder.build_engine(network, config=config)
return engine
def get_inputs(dataloader, device, precision):
''' load sample inputs to device '''
inputs = []
logging.info("Loading sample inputs to device.")
for idx, batch in enumerate(dataloader):
        if idx % max(1, len(dataloader) // 100) == 0:
logging.info(f"{idx}/{len(dataloader)}")
if type(batch) is torch.Tensor:
batch_d = batch.to(device)
if batch_d.is_floating_point() and precision == 'fp16':
batch_d = batch_d.to(torch.float16)
batch_d = (batch_d,)
inputs.append(batch_d)
else:
batch_d = []
for x in batch:
assert type(x) is torch.Tensor, "input is not a tensor"
x = x.to(device)
if x.is_floating_point() and precision == 'fp16':
x = x.to(torch.float16)
batch_d.append(x)
batch_d = tuple(batch_d)
inputs.append(batch_d)
logging.info("Finished loading sample inputs to device.")
return inputs
def get_list_of_shapes(l, fun):
''' returns the list of min/max shapes, depending on fun
:: l :: list of tuples of tensors
:: fun :: min or max
'''
tensor_tuple = l[0]
shapes = [list(x.shape) for x in tensor_tuple]
for tensor_tuple in l:
assert len(tensor_tuple) == len(shapes), "tensors with varying shape lengths are not supported"
for i,x in enumerate(tensor_tuple):
for j in range(len(x.shape)):
shapes[i][j] = fun(shapes[i][j], x.shape[j])
return shapes # a list of shapes
def get_min_shapes(l):
''' returns the tuple of min shapes
:: l :: list of tuples of tensors '''
shapes = get_list_of_shapes(l, min)
min_batch = 1
shapes = [[min_batch,*shape[1:]] for shape in shapes]
shapes = tuple(shapes)
return shapes # tuple of min shapes
def get_max_shapes(l):
''' returns the tuple of max shapes
:: l :: list of tuples of tensors '''
shapes = get_list_of_shapes(l, max)
max_batch = max(1,shapes[0][0])
shapes = [[max_batch,*shape[1:]] for shape in shapes]
shapes = tuple(shapes)
return shapes # tuple of max shapes
def get_opt_shapes(l):
''' returns the tuple of opt shapes
:: l :: list of tuples of tensors '''
counter = Counter()
for tensor_tuple in l:
shapes = [tuple(x.shape) for x in tensor_tuple]
shapes = tuple(shapes)
counter[shapes] += 1
shapes = counter.most_common(1)[0][0]
    return shapes # tuple of the most commonly occurring shapes
def get_shapes(l, max_batch_size):
''' returns a tuple of dynamic shapes: variable tensor dimensions
(for ex. batch size) occur as -1 in the tuple
:: l :: list of tuples of tensors '''
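    # Worked example: for two 1-tuples of tensors shaped (8, 128) and (8, 96)
    # with max_batch_size=16, this returns ([-1, -1],): the batch dimension is
    # always dynamic when max_batch_size > 1, and the second dimension varies
    # across batches.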
tensor_tuple = l[0]
shapes = [list(x.shape) for x in tensor_tuple]
for tensor_tuple in l:
err_msg = "tensors with varying shape lengths are not supported"
assert len(tensor_tuple) == len(shapes), err_msg
for i,x in enumerate(tensor_tuple):
for j in range(len(x.shape)):
if shapes[i][j] != x.shape[j] or j == 0 and max_batch_size > 1:
shapes[i][j] = -1
shapes = tuple(shapes)
return shapes # tuple of dynamic shapes
def get_io_properties(inputs, outputs, max_batch_size):
# generate input shapes - dynamic tensor shape support
input_shapes = get_shapes(inputs, max_batch_size)
# generate output shapes - dynamic tensor shape support
output_shapes = get_shapes(outputs, max_batch_size)
# generate input types
input_types = [torch_type_to_triton_type[x.dtype] for x in inputs[0]]
# generate output types
output_types = [torch_type_to_triton_type[x.dtype] for x in outputs[0]]
# get input names
rng = range(len(input_types))
input_names = ["input__" + str(num) for num in rng]
# get output names
rng = range(len(output_types))
output_names = ["output__" + str(num) for num in rng]
# get indices of dynamic input and output shapes
dynamic_axes = {}
for input_name,input_shape in zip(input_names,input_shapes):
dynamic_axes[input_name] = [i for i,x in enumerate(input_shape) if x == -1]
for output_name,output_shape in zip(output_names,output_shapes):
dynamic_axes[output_name] = [i for i,x in enumerate(output_shape) if x == -1]
# min, opt, max shapes for TensorRT
min_shapes = get_min_shapes(inputs)
opt_shapes = get_opt_shapes(inputs)
max_shapes = get_max_shapes(inputs)
res = {"input_shapes": input_shapes,
"output_shapes": output_shapes,
"input_types": input_types,
"output_types": output_types,
"input_names": input_names,
"output_names": output_names,
"dynamic_axes": dynamic_axes,
"min_shapes": min_shapes,
"opt_shapes": opt_shapes,
"max_shapes": max_shapes}
return res
def extract_io_props(model, dataloader, device, precision, max_batch_size):
# prepare inputs
inputs = get_inputs(dataloader, device, precision)
# generate outputs
outputs = []
for input in inputs:
with torch.no_grad():
output = model(*input)
if type(output) is torch.Tensor:
output = [output]
outputs.append(output)
# prepare input/output properties
io_props = get_io_properties(inputs, outputs, max_batch_size)
return io_props
def save_io_props(io_props, io_props_path):
with open(io_props_path, "w") as f:
f.write(json.dumps(io_props))
def load_io_props(io_props_path):
with open(io_props_path, "r") as f:
data = json.loads(f.read())
if "dynamic_axes" not in data.keys():
return data
return data
|
PyTorch/Recommendation/DLRM/dlrm/cuda_src/sparse_gather | sparse_gather | common | // Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#ifndef COMMON_H_
#define COMMON_H_
using ULLInt = unsigned long long int;
// Use to compute things like number of blocks
#define CEIL_DIV_INT(a, b) (((a) + (b) - 1) / (b))
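// Example: CEIL_DIV_INT(1000, 256) == 4, i.e. one extra block covers the
// remainder when 1000 elements are processed by blocks of 256 threads.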
#define CUDA_CHECK(cmd) \
do { \
cudaError_t e = cmd; \
if (e != cudaSuccess) { \
printf("Failed: Cuda error %s:%d '%s'\n", __FILE__, __LINE__, cudaGetErrorString(e)); \
exit(EXIT_FAILURE); \
} \
} while (0)
#endif // COMMON_H_
|
PaddlePaddle/Classification/RN50v1.5/models | models | resnet | # Copyright (c) 2022 NVIDIA Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import paddle
from paddle import ParamAttr
import paddle.nn as nn
from paddle.nn import Conv2D, BatchNorm, Linear
from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
from paddle.nn.initializer import Uniform, Constant, KaimingNormal
MODELS = ["ResNet50"]
__all__ = MODELS
class ConvBNLayer(nn.Layer):
def __init__(self,
num_channels,
num_filters,
filter_size,
stride=1,
groups=1,
act=None,
lr_mult=1.0,
data_format="NCHW",
bn_weight_decay=True):
super().__init__()
self.act = act
self.avg_pool = AvgPool2D(
kernel_size=2, stride=2, padding=0, ceil_mode=True)
self.conv = Conv2D(
in_channels=num_channels,
out_channels=num_filters,
kernel_size=filter_size,
stride=stride,
padding=(filter_size - 1) // 2,
groups=groups,
weight_attr=ParamAttr(
learning_rate=lr_mult, initializer=KaimingNormal()),
bias_attr=False,
data_format=data_format)
self.bn = BatchNorm(
num_filters,
param_attr=ParamAttr(
learning_rate=lr_mult,
regularizer=None
if bn_weight_decay else paddle.regularizer.L2Decay(0.0),
initializer=Constant(1.0)),
bias_attr=ParamAttr(
learning_rate=lr_mult,
regularizer=None
if bn_weight_decay else paddle.regularizer.L2Decay(0.0),
initializer=Constant(0.0)),
data_layout=data_format)
self.relu = nn.ReLU()
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
if self.act:
x = self.relu(x)
return x
class BottleneckBlock(nn.Layer):
def __init__(self,
num_channels,
num_filters,
stride,
shortcut=True,
lr_mult=1.0,
data_format="NCHW",
bn_weight_decay=True):
super().__init__()
self.conv0 = ConvBNLayer(
num_channels=num_channels,
num_filters=num_filters,
filter_size=1,
act="relu",
lr_mult=lr_mult,
data_format=data_format,
bn_weight_decay=bn_weight_decay)
self.conv1 = ConvBNLayer(
num_channels=num_filters,
num_filters=num_filters,
filter_size=3,
stride=stride,
act="relu",
lr_mult=lr_mult,
data_format=data_format,
bn_weight_decay=bn_weight_decay)
self.conv2 = ConvBNLayer(
num_channels=num_filters,
num_filters=num_filters * 4,
filter_size=1,
act=None,
lr_mult=lr_mult,
data_format=data_format,
bn_weight_decay=bn_weight_decay)
if not shortcut:
self.short = ConvBNLayer(
num_channels=num_channels,
num_filters=num_filters * 4,
filter_size=1,
stride=stride,
lr_mult=lr_mult,
data_format=data_format,
bn_weight_decay=bn_weight_decay)
self.relu = nn.ReLU()
self.shortcut = shortcut
def forward(self, x):
identity = x
x = self.conv0(x)
x = self.conv1(x)
x = self.conv2(x)
if self.shortcut:
short = identity
else:
short = self.short(identity)
x = paddle.add(x=x, y=short)
x = self.relu(x)
return x
class ResNet(nn.Layer):
def __init__(self,
class_num=1000,
data_format="NCHW",
input_image_channel=3,
use_pure_fp16=False,
bn_weight_decay=True):
super().__init__()
self.class_num = class_num
self.num_filters = [64, 128, 256, 512]
self.block_depth = [3, 4, 6, 3]
self.num_channels = [64, 256, 512, 1024]
self.channels_mult = 1 if self.num_channels[-1] == 256 else 4
self.use_pure_fp16 = use_pure_fp16
self.stem_cfg = {
#num_channels, num_filters, filter_size, stride
"vb": [[input_image_channel, 64, 7, 2]],
}
self.stem = nn.Sequential(* [
ConvBNLayer(
num_channels=in_c,
num_filters=out_c,
filter_size=k,
stride=s,
act="relu",
data_format=data_format,
bn_weight_decay=bn_weight_decay)
for in_c, out_c, k, s in self.stem_cfg['vb']
])
self.max_pool = MaxPool2D(
kernel_size=3, stride=2, padding=1, data_format=data_format)
block_list = []
for block_idx in range(len(self.block_depth)):
shortcut = False
for i in range(self.block_depth[block_idx]):
block_list.append(
BottleneckBlock(
num_channels=self.num_channels[block_idx] if i == 0
else self.num_filters[block_idx] * self.channels_mult,
num_filters=self.num_filters[block_idx],
stride=2 if i == 0 and block_idx != 0 else 1,
shortcut=shortcut,
data_format=data_format,
bn_weight_decay=bn_weight_decay))
shortcut = True
self.blocks = nn.Sequential(*block_list)
self.avg_pool = AdaptiveAvgPool2D(1, data_format=data_format)
self.flatten = nn.Flatten()
self.avg_pool_channels = self.num_channels[-1] * 2
stdv = 1.0 / math.sqrt(self.avg_pool_channels * 1.0)
self.fc = Linear(
self.avg_pool_channels,
self.class_num,
weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
def forward(self, x):
if self.use_pure_fp16:
with paddle.static.amp.fp16_guard():
x = self.stem(x)
x = self.max_pool(x)
x = self.blocks(x)
x = self.avg_pool(x)
x = self.flatten(x)
x = self.fc(x)
else:
x = self.stem(x)
x = self.max_pool(x)
x = self.blocks(x)
x = self.avg_pool(x)
x = self.flatten(x)
x = self.fc(x)
return x
def ResNet50(**kwargs):
model = ResNet(**kwargs)
return model
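# Usage sketch (shapes are illustrative; assumes the default NCHW layout):
#   model = ResNet50(class_num=1000)
#   logits = model(paddle.rand([8, 3, 224, 224]))  # -> shape [8, 1000]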
|
CUDA-Optimized/FastSpeech/fastspeech | fastspeech | perf_infer | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import pprint
import sys
import time
import fire
import torch
from tqdm import tqdm
from fastspeech import DEFAULT_DEVICE
from fastspeech import hparam as hp
from fastspeech.data_load import PadDataLoader
from fastspeech.dataset.ljspeech_dataset import LJSpeechDataset
from fastspeech.model.fastspeech import Fastspeech
from fastspeech.utils.logging import tprint
from fastspeech.utils.pytorch import to_cpu_numpy, to_device_async
from fastspeech.infer import get_inferencer
from fastspeech.inferencer.waveglow_inferencer import WaveGlowInferencer
from contextlib import ExitStack
from fastspeech.dataset.text_dataset import TextDataset
import numpy as np
try:
from apex import amp
except ImportError:
raise ImportError('Required to install apex.')
pp = pprint.PrettyPrinter(indent=4, width=1000)
SAMPLE_TEXT = "The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves. The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves."
INPUT_LEN = 128
INPUT_TEXT = SAMPLE_TEXT[:INPUT_LEN]
WARMUP_ITERS = 3
def perf_inference(hparam="infer.yaml",
with_vocoder=False,
n_iters=None,
device=DEFAULT_DEVICE,
**kwargs):
"""The script for estimating inference performance.
By default, this script loads parameters from the default config file, fastspeech/hparams/infer.yaml.
Besides the flags, you can also override parameters in the config file via the command line. For example,
--dataset_path=DATASET_PATH
Path to dataset directory.
--checkpoint_path=CHECKPOINT_PATH
Path to checkpoint directory. The latest checkpoint will be loaded.
--batch_size=BATCH_SIZE
Batch size to use. Defaults to 1.
Refer to fastspeech/hparams/infer.yaml to see more parameters.
Args:
hparam (str, optional): Path to default config file. Defaults to "infer.yaml".
with_vocoder (bool, optional): Whether or not to estimate with a vocoder. Defaults to False.
n_iters (int, optional): Number of batches to estimate. Defaults to None (an epoch).
device (str, optional): Device to use. Defaults to "cuda" if available, or "cpu".
"""
hp.set_hparam(hparam, kwargs)
tprint("Hparams:\n{}".format(pp.pformat(hp)))
tprint("Device count: {}".format(torch.cuda.device_count()))
model = Fastspeech(
max_seq_len=hp.max_seq_len,
d_model=hp.d_model,
phoneme_side_n_layer=hp.phoneme_side_n_layer,
phoneme_side_head=hp.phoneme_side_head,
phoneme_side_conv1d_filter_size=hp.phoneme_side_conv1d_filter_size,
phoneme_side_output_size=hp.phoneme_side_output_size,
mel_side_n_layer=hp.mel_side_n_layer,
mel_side_head=hp.mel_side_head,
mel_side_conv1d_filter_size=hp.mel_side_conv1d_filter_size,
mel_side_output_size=hp.mel_side_output_size,
duration_predictor_filter_size=hp.duration_predictor_filter_size,
duration_predictor_kernel_size=hp.duration_predictor_kernel_size,
fft_conv1d_kernel=hp.fft_conv1d_kernel,
fft_conv1d_padding=hp.fft_conv1d_padding,
dropout=hp.dropout,
n_mels=hp.num_mels,
fused_layernorm=hp.fused_layernorm
)
dataset_size = hp.batch_size * (n_iters if n_iters else 1)
tprint("Dataset size: {}".format(dataset_size))
dataset = TextDataset([INPUT_TEXT] * (dataset_size + (WARMUP_ITERS * hp.batch_size)))
data_loader = PadDataLoader(dataset,
batch_size=hp.batch_size,
num_workers=hp.n_workers,
shuffle=False if hp.use_trt and hp.trt_multi_engine else True,
drop_last=True,
)
fs_inferencer = get_inferencer(model, data_loader, device)
if with_vocoder:
if hp.use_trt:
from fastspeech.trt.waveglow_trt_inferencer import WaveGlowTRTInferencer
wb_inferencer = WaveGlowTRTInferencer(ckpt_file=hp.waveglow_path, engine_file=hp.waveglow_engine_path, use_fp16=hp.use_fp16)
else:
wb_inferencer = WaveGlowInferencer(ckpt_file=hp.waveglow_path, device=device, use_fp16=hp.use_fp16)
with fs_inferencer, wb_inferencer if with_vocoder else ExitStack():
tprint("Perf started. Batch size={}.".format(hp.batch_size))
latencies = []
throughputs = []
for i in tqdm(range(len(data_loader))):
start = time.time()
outputs = fs_inferencer.infer()
mels = outputs['mel']
mel_masks = outputs['mel_mask']
assert(mels.is_cuda)
if with_vocoder:
# remove padding
max_len = mel_masks.sum(axis=1).max()
mels = mels[..., :max_len]
mel_masks = mel_masks[..., :max_len]
with torch.no_grad():
wavs = wb_inferencer.infer(mels)
wavs = to_cpu_numpy(wavs)
else:
# include time for DtoH copy
to_cpu_numpy(mels)
to_cpu_numpy(mel_masks)
end = time.time()
if i > WARMUP_ITERS-1:
time_elapsed = end - start
generated_samples = len(mel_masks.nonzero()) * hp.hop_len
throughput = generated_samples / time_elapsed
latencies.append(time_elapsed)
throughputs.append(throughput)
latencies.sort()
avg_latency = np.mean(latencies)
std_latency = np.std(latencies)
latency_90 = max(latencies[:int(len(latencies)*0.90)]) if len(latencies) > 1 else 0
latency_95 = max(latencies[:int(len(latencies)*0.95)]) if len(latencies) > 1 else 0
latency_99 = max(latencies[:int(len(latencies)*0.99)]) if len(latencies) > 1 else 0
throughput = np.mean(throughputs)
rtf = throughput / (hp.sr * hp.batch_size)
tprint("Batch size\tPrecision\tAvg Latency(s)\tStd Latency(s)\tLatency 90%(s)\tLatency 95%(s)\tLatency 99%(s)\tThroughput(samples/s)\tAvg RTF\n\
{}\t{}\t{:.4f}\t{:.4f}\t{:.4f}\t{:.4f}\t{:.4f}\t{}\t{:.2f}".format(
hp.batch_size,
"FP16" if hp.use_fp16 else "FP32",
avg_latency,
std_latency,
latency_90,
latency_95,
latency_99,
int(throughput),
rtf))
if __name__ == '__main__':
fire.Fire(perf_inference)
|
TensorFlow/Segmentation/UNet_Industrial/scripts | scripts | UNet_AMP_EVAL | #!/usr/bin/env bash
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script launches UNet evaluation with TF-AMP on 1 GPU using a batch size of 16
# Usage ./UNet_AMP_EVAL.sh <path to result repository> <path to dataset> <dagm classID (1-10)>
BASEDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export TF_CPP_MIN_LOG_LEVEL=3
python "${BASEDIR}/../main.py" \
--unet_variant='tinyUNet' \
--activation_fn='relu' \
--exec_mode='evaluate' \
--iter_unit='epoch' \
--num_iter=1 \
--batch_size=16 \
--warmup_step=10 \
--results_dir="${1}" \
--data_dir="${2}" \
--dataset_name='DAGM2007' \
--dataset_classID="${3}" \
--data_format='NCHW' \
--use_auto_loss_scaling \
--amp \
--xla \
--learning_rate=1e-4 \
--learning_rate_decay_factor=0.8 \
--learning_rate_decay_steps=500 \
--rmsprop_decay=0.9 \
--rmsprop_momentum=0.8 \
--loss_fn_name='adaptive_loss' \
--weight_decay=1e-5 \
--weight_init_method='he_uniform' \
--augment_data \
--display_every=50 \
--debug_verbosity=0
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/util | util | normalDistribution | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_NORMALDISTRIBUTION_H
#define TT2I_NORMALDISTRIBUTION_H
#include "random.h"
#include "cuda_runtime.h"
#include <stdint.h>
namespace tts
{
class NormalDistribution
{
public:
/**
* @brief Create a new NormalDistribution with the given seed.
*
* @param numStates The number of states in the distribution.
* @param seed The seed to initialize with.
*/
NormalDistribution(int numStates, uint32_t seed);
/**
* @brief Set the seed of the distribution asynchronously.
*
* @param seed The seed.
* @param stream The stream to operate on.
*/
void setSeed(uint32_t seed, cudaStream_t stream);
/**
* @brief Generate a normal distribution (0.0, 1.0).
*
* @param outValues The output values.
* @param numValues The number of values.
* @param stream The stream to use.
*/
void generate(float* outValues, int numValues, cudaStream_t stream);
private:
Random mRand;
};
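/**
 * Example usage (illustrative sketch only): `outValues` is assumed to be a
 * device buffer with room for `numValues` floats, and `stream` a valid CUDA
 * stream created by the caller.
 *
 *   tts::NormalDistribution dist(numValues, 1234U);
 *   dist.generate(outValues, numValues, stream);
 */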
} // namespace tts
#endif
|
TensorFlow2/LanguageModeling/BERT/scripts | scripts | run_inference_benchmark_seq128 | #!/usr/bin/env bash
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
echo "Container nvidia build = " $NVIDIA_BUILD_ID
bert_model=${1:-"large"}
batch_size=${2:-"8"}
precision=${3:-"fp16"}
use_xla=${4:-"true"}
squad_version="1.1"
if [ "$bert_model" = "large" ] ; then
export BERT_DIR=data/download/google_pretrained_weights/uncased_L-24_H-1024_A-16
else
export BERT_DIR=data/download/google_pretrained_weights/uncased_L-12_H-768_A-12
fi
init_checkpoint=$BERT_DIR/bert_model.ckpt
export SQUAD_DIR=data/download/squad/v${squad_version}
export SQUAD_VERSION=v$squad_version
if [ "$squad_version" = "1.1" ] ; then
version_2_with_negative="False"
else
version_2_with_negative="True"
fi
echo "Squad directory set as " $SQUAD_DIR " BERT directory set as " $BERT_DIR
echo "Results directory set as " $RESULTS_DIR
use_fp16=""
if [ "$precision" = "fp16" ] ; then
echo "fp16 activated!"
use_fp16="--use_fp16"
fi
if [ "$use_xla" = "true" ] ; then
use_xla_tag="--enable_xla"
echo "XLA activated"
else
use_xla_tag=""
fi
ckpt_str=${init_checkpoint//\//-}
printf -v TAG "tf_bert_finetuning_squad_%s_inf_%s_gbs%d_ckpt_%s" "$bert_model" "$precision" $batch_size "$ckpt_str"
DATESTAMP=`date +'%y%m%d%H%M%S'`
#Edit to save logs & checkpoints in a different directory
RESULTS_DIR=/results
LOGFILE=$RESULTS_DIR/$TAG.$DATESTAMP.log
printf "Logs written to %s\n" "$LOGFILE"
mkdir -p $RESULTS_DIR
#Check if all necessary files are available before training
for DIR_or_file in $SQUAD_DIR $RESULTS_DIR $BERT_DIR/vocab.txt $BERT_DIR/bert_config.json; do
if [ ! -d "$DIR_or_file" ] && [ ! -f "$DIR_or_file" ]; then
echo "Error! $DIR_or_file directory missing. Please mount correctly"
exit -1
fi
done
python run_squad.py \
--mode=predict \
--input_meta_data_path=${SQUAD_DIR}/seq-128/squad_${SQUAD_VERSION}_meta_data \
--vocab_file=$BERT_DIR/vocab.txt \
--bert_config_file=$BERT_DIR/bert_config.json \
--init_checkpoint=$init_checkpoint \
--predict_file=$SQUAD_DIR/dev-v${squad_version}.json \
--predict_batch_size=$batch_size \
--model_dir=$RESULTS_DIR \
--benchmark \
$use_fp16 $use_xla_tag
|
PyTorch/LanguageModeling/BERT/triton/dist4l/scripts/docker | docker | triton_inference_server | #!/usr/bin/env bash
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:=all}
docker run --rm -d \
-p 8000:8000 \
-p 8001:8001 \
-p 8002:8002 \
--runtime=nvidia \
-e NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES} \
-e ORT_TENSORRT_FP16_ENABLE=1 \
-v ${MODEL_REPOSITORY_PATH}:${MODEL_REPOSITORY_PATH} \
--ipc=host \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
nvcr.io/nvidia/tritonserver:21.10-py3 tritonserver \
--model-store=${MODEL_REPOSITORY_PATH} \
--strict-model-config=false \
--exit-on-error=true \
--model-control-mode=explicit |
PyTorch/Segmentation/MaskRCNN/pytorch/demo | demo | Mask_R-CNN_demo | #!/usr/bin/env python
# coding: utf-8
# # Mask R-CNN demo
#
# This notebook illustrates one possible way of using `maskrcnn_benchmark` for computing predictions on images from an arbitrary URL.
#
# Let's start with a few standard imports
# In[1]:
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import requests
from io import BytesIO
from PIL import Image
import numpy as np
# In[2]:
# this makes our figures bigger
pylab.rcParams['figure.figsize'] = 20, 12
# Those are the relevant imports for the detection model
# In[3]:
from maskrcnn_benchmark.config import cfg
from predictor import COCODemo
# We provide a helper class `COCODemo`, which loads a model from the config file, and performs pre-processing, model prediction and post-processing for us.
#
# We can configure several model options by overriding the config options.
# In here, we make the model run on the CPU
# In[4]:
config_file = "../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml"
# update the config options with the config file
cfg.merge_from_file(config_file)
# manual override some options
cfg.merge_from_list(["MODEL.DEVICE", "cpu"])
# Now we create the `COCODemo` object. It contains a few extra options for convenience, such as the confidence threshold for detections to be shown.
# In[5]:
coco_demo = COCODemo(
cfg,
min_image_size=800,
confidence_threshold=0.7,
)
# Let's define a few helper functions for loading images from a URL
# In[6]:
def load(url):
"""
Given a URL of an image, downloads the image and
returns a PIL image
"""
response = requests.get(url)
pil_image = Image.open(BytesIO(response.content)).convert("RGB")
# convert to BGR format
image = np.array(pil_image)[:, :, [2, 1, 0]]
return image
def imshow(img):
plt.imshow(img[:, :, [2, 1, 0]])
plt.axis("off")
# Let's now load an image from the COCO dataset. Its reference is in the comment
# In[7]:
# from http://cocodataset.org/#explore?id=345434
image = load("http://farm3.staticflickr.com/2469/3915380994_2e611b1779_z.jpg")
imshow(image)
# ### Computing the predictions
#
# We provide a `run_on_opencv_image` function, which takes an image as it was loaded by OpenCV (in `BGR` format), computes the predictions on it, and returns an image with the predictions overlaid on top.
# In[8]:
# compute predictions
predictions = coco_demo.run_on_opencv_image(image)
imshow(predictions)
|
TensorFlow/LanguageModeling/BERT | BERT | requirements | tensorflow >= 1.11.0 # CPU Version of TensorFlow.
# tensorflow-gpu >= 1.11.0 # GPU version of TensorFlow.
toposort
networkx
pytest
nltk
tqdm
progressbar
|
PyTorch/Classification/ConvNets/triton | triton | run_inference_on_triton | #!/usr/bin/env python3
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
To infer the model deployed on Triton, you can use `run_inference_on_triton.py` script.
It sends a request with data obtained from pointed data loader and dumps received data into npz files.
Those files are stored in directory pointed by `--output-dir` argument.
Currently, the client communicates with the Triton server asynchronously using GRPC protocol.
Example call:
```shell script
python ./triton/run_inference_on_triton.py \
--server-url localhost:8001 \
--model-name ResNet50 \
--model-version 1 \
--dump-labels \
--output-dir /results/dump_triton
```
"""
import argparse
import functools
import logging
import queue
import threading
import time
from pathlib import Path
from typing import Optional
from tqdm import tqdm
# pytype: disable=import-error
try:
from tritonclient import utils as client_utils # noqa: F401
from tritonclient.grpc import (
InferenceServerClient,
InferInput,
InferRequestedOutput,
)
except ImportError:
import tritongrpcclient as grpc_client
from tritongrpcclient import (
InferenceServerClient,
InferInput,
InferRequestedOutput,
)
# pytype: enable=import-error
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = Path(__file__).parent.name
from .deployment_toolkit.args import ArgParserGenerator
from .deployment_toolkit.core import DATALOADER_FN_NAME, load_from_file
from .deployment_toolkit.dump import NpzWriter
LOGGER = logging.getLogger("run_inference_on_triton")
class AsyncGRPCTritonRunner:
DEFAULT_MAX_RESP_WAIT_S = 120
DEFAULT_MAX_UNRESP_REQS = 128
DEFAULT_MAX_FINISH_WAIT_S = 900 # 15min
def __init__(
self,
server_url: str,
model_name: str,
model_version: str,
*,
dataloader,
verbose=False,
resp_wait_s: Optional[float] = None,
max_unresponded_reqs: Optional[int] = None,
):
self._server_url = server_url
self._model_name = model_name
self._model_version = model_version
self._dataloader = dataloader
self._verbose = verbose
self._response_wait_t = self.DEFAULT_MAX_RESP_WAIT_S if resp_wait_s is None else resp_wait_s
self._max_unresp_reqs = self.DEFAULT_MAX_UNRESP_REQS if max_unresponded_reqs is None else max_unresponded_reqs
self._results = queue.Queue()
self._processed_all = False
self._errors = []
self._num_waiting_for = 0
self._sync = threading.Condition()
self._req_thread = threading.Thread(target=self.req_loop, daemon=True)
def __iter__(self):
self._req_thread.start()
timeout_s = 0.050 # check flags processed_all and error flags every 50ms
while True:
try:
ids, x, y_pred, y_real = self._results.get(timeout=timeout_s)
yield ids, x, y_pred, y_real
except queue.Empty:
shall_stop = self._processed_all or self._errors
if shall_stop:
break
LOGGER.debug("Waiting for request thread to stop")
self._req_thread.join()
if self._errors:
error_msg = "\n".join(map(str, self._errors))
raise RuntimeError(error_msg)
def _on_result(self, ids, x, y_real, output_names, result, error):
with self._sync:
if error:
self._errors.append(error)
else:
y_pred = {name: result.as_numpy(name) for name in output_names}
self._results.put((ids, x, y_pred, y_real))
self._num_waiting_for -= 1
self._sync.notify_all()
def req_loop(self):
client = InferenceServerClient(self._server_url, verbose=self._verbose)
self._errors = self._verify_triton_state(client)
if self._errors:
return
LOGGER.debug(
f"Triton server {self._server_url} and model {self._model_name}:{self._model_version} " f"are up and ready!"
)
model_config = client.get_model_config(self._model_name, self._model_version)
model_metadata = client.get_model_metadata(self._model_name, self._model_version)
LOGGER.info(f"Model config {model_config}")
LOGGER.info(f"Model metadata {model_metadata}")
inputs = {tm.name: tm for tm in model_metadata.inputs}
outputs = {tm.name: tm for tm in model_metadata.outputs}
output_names = list(outputs)
outputs_req = [InferRequestedOutput(name) for name in outputs]
self._num_waiting_for = 0
for ids, x, y_real in self._dataloader:
infer_inputs = []
for name in inputs:
data = x[name]
infer_input = InferInput(name, data.shape, inputs[name].datatype)
target_np_dtype = client_utils.triton_to_np_dtype(inputs[name].datatype)
data = data.astype(target_np_dtype)
infer_input.set_data_from_numpy(data)
infer_inputs.append(infer_input)
with self._sync:
def _check_can_send():
return self._num_waiting_for < self._max_unresp_reqs
can_send = self._sync.wait_for(_check_can_send, timeout=self._response_wait_t)
if not can_send:
error_msg = f"Runner could not send new requests for {self._response_wait_t}s"
self._errors.append(error_msg)
break
callback = functools.partial(AsyncGRPCTritonRunner._on_result, self, ids, x, y_real, output_names)
client.async_infer(
model_name=self._model_name,
model_version=self._model_version,
inputs=infer_inputs,
outputs=outputs_req,
callback=callback,
)
self._num_waiting_for += 1
# wait till receive all requested data
with self._sync:
def _all_processed():
LOGGER.debug(f"wait for {self._num_waiting_for} unprocessed jobs")
return self._num_waiting_for == 0
self._processed_all = self._sync.wait_for(_all_processed, self.DEFAULT_MAX_FINISH_WAIT_S)
if not self._processed_all:
error_msg = f"Runner {self._response_wait_t}s timeout received while waiting for results from server"
self._errors.append(error_msg)
LOGGER.debug("Finished request thread")
def _verify_triton_state(self, triton_client):
errors = []
if not triton_client.is_server_live():
errors.append(f"Triton server {self._server_url} is not live")
elif not triton_client.is_server_ready():
errors.append(f"Triton server {self._server_url} is not ready")
elif not triton_client.is_model_ready(self._model_name, self._model_version):
errors.append(f"Model {self._model_name}:{self._model_version} is not ready")
return errors
def _parse_args():
parser = argparse.ArgumentParser(description="Infer model on Triton server", allow_abbrev=False)
parser.add_argument(
"--server-url", type=str, default="localhost:8001", help="Inference server URL (default localhost:8001)"
)
parser.add_argument("--model-name", help="The name of the model used for inference.", required=True)
parser.add_argument("--model-version", help="The version of the model used for inference.", required=True)
parser.add_argument("--dataloader", help="Path to python file containing dataloader.", required=True)
parser.add_argument("--dump-labels", help="Dump labels to output dir", action="store_true", default=False)
parser.add_argument("--dump-inputs", help="Dump inputs to output dir", action="store_true", default=False)
parser.add_argument("-v", "--verbose", help="Verbose logs", action="store_true", default=False)
parser.add_argument("--output-dir", required=True, help="Path to directory where outputs will be saved")
parser.add_argument("--response-wait-time", required=False, help="Maximal time to wait for response", default=120)
parser.add_argument(
"--max-unresponded-requests", required=False, help="Maximal number of unresponded requests", default=128
)
args, *_ = parser.parse_known_args()
get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME)
ArgParserGenerator(get_dataloader_fn).update_argparser(parser)
args = parser.parse_args()
return args
def main():
args = _parse_args()
log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
log_level = logging.INFO if not args.verbose else logging.DEBUG
logging.basicConfig(level=log_level, format=log_format)
LOGGER.info(f"args:")
for key, value in vars(args).items():
LOGGER.info(f" {key} = {value}")
get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME)
dataloader_fn = ArgParserGenerator(get_dataloader_fn).from_args(args)
runner = AsyncGRPCTritonRunner(
args.server_url,
args.model_name,
args.model_version,
dataloader=dataloader_fn(),
verbose=False,
resp_wait_s=args.response_wait_time,
max_unresponded_reqs=args.max_unresponded_requests,
)
with NpzWriter(output_dir=args.output_dir) as writer:
start = time.time()
for ids, x, y_pred, y_real in tqdm(runner, unit="batch", mininterval=10):
data = _verify_and_format_dump(args, ids, x, y_pred, y_real)
writer.write(**data)
stop = time.time()
LOGGER.info(f"\nThe inference took {stop - start:0.3f}s")
def _verify_and_format_dump(args, ids, x, y_pred, y_real):
data = {"outputs": y_pred, "ids": {"ids": ids}}
if args.dump_inputs:
data["inputs"] = x
if args.dump_labels:
if not y_real:
raise ValueError(
"Found empty label values. Please provide labels in dataloader_fn or do not use --dump-labels argument"
)
data["labels"] = y_real
return data
if __name__ == "__main__":
main()
|
TensorFlow/Detection/SSD/models/research/slim/nets | nets | vgg_test | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.nets.vgg."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from nets import vgg
slim = tf.contrib.slim
class VGGATest(tf.test.TestCase):
def testBuild(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs, num_classes)
self.assertEquals(logits.op.name, 'vgg_a/fc8/squeezed')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testFullyConvolutional(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs, num_classes, spatial_squeeze=False)
self.assertEquals(logits.op.name, 'vgg_a/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 2, 2, num_classes])
def testGlobalPool(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs, num_classes, spatial_squeeze=False,
global_pool=True)
self.assertEquals(logits.op.name, 'vgg_a/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 1, 1, num_classes])
def testEndPoints(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = vgg.vgg_a(inputs, num_classes)
expected_names = ['vgg_a/conv1/conv1_1',
'vgg_a/pool1',
'vgg_a/conv2/conv2_1',
'vgg_a/pool2',
'vgg_a/conv3/conv3_1',
'vgg_a/conv3/conv3_2',
'vgg_a/pool3',
'vgg_a/conv4/conv4_1',
'vgg_a/conv4/conv4_2',
'vgg_a/pool4',
'vgg_a/conv5/conv5_1',
'vgg_a/conv5/conv5_2',
'vgg_a/pool5',
'vgg_a/fc6',
'vgg_a/fc7',
'vgg_a/fc8'
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
def testNoClasses(self):
batch_size = 5
height, width = 224, 224
num_classes = None
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
net, end_points = vgg.vgg_a(inputs, num_classes)
expected_names = ['vgg_a/conv1/conv1_1',
'vgg_a/pool1',
'vgg_a/conv2/conv2_1',
'vgg_a/pool2',
'vgg_a/conv3/conv3_1',
'vgg_a/conv3/conv3_2',
'vgg_a/pool3',
'vgg_a/conv4/conv4_1',
'vgg_a/conv4/conv4_2',
'vgg_a/pool4',
'vgg_a/conv5/conv5_1',
'vgg_a/conv5/conv5_2',
'vgg_a/pool5',
'vgg_a/fc6',
'vgg_a/fc7',
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
self.assertTrue(net.op.name.startswith('vgg_a/fc7'))
def testModelVariables(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
vgg.vgg_a(inputs, num_classes)
expected_names = ['vgg_a/conv1/conv1_1/weights',
'vgg_a/conv1/conv1_1/biases',
'vgg_a/conv2/conv2_1/weights',
'vgg_a/conv2/conv2_1/biases',
'vgg_a/conv3/conv3_1/weights',
'vgg_a/conv3/conv3_1/biases',
'vgg_a/conv3/conv3_2/weights',
'vgg_a/conv3/conv3_2/biases',
'vgg_a/conv4/conv4_1/weights',
'vgg_a/conv4/conv4_1/biases',
'vgg_a/conv4/conv4_2/weights',
'vgg_a/conv4/conv4_2/biases',
'vgg_a/conv5/conv5_1/weights',
'vgg_a/conv5/conv5_1/biases',
'vgg_a/conv5/conv5_2/weights',
'vgg_a/conv5/conv5_2/biases',
'vgg_a/fc6/weights',
'vgg_a/fc6/biases',
'vgg_a/fc7/weights',
'vgg_a/fc7/biases',
'vgg_a/fc8/weights',
'vgg_a/fc8/biases',
]
model_variables = [v.op.name for v in slim.get_model_variables()]
self.assertSetEqual(set(model_variables), set(expected_names))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
with self.test_session():
eval_inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(eval_inputs, is_training=False)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
predictions = tf.argmax(logits, 1)
self.assertListEqual(predictions.get_shape().as_list(), [batch_size])
def testTrainEvalWithReuse(self):
train_batch_size = 2
eval_batch_size = 1
train_height, train_width = 224, 224
eval_height, eval_width = 256, 256
num_classes = 1000
with self.test_session():
train_inputs = tf.random_uniform(
(train_batch_size, train_height, train_width, 3))
logits, _ = vgg.vgg_a(train_inputs)
self.assertListEqual(logits.get_shape().as_list(),
[train_batch_size, num_classes])
tf.get_variable_scope().reuse_variables()
eval_inputs = tf.random_uniform(
(eval_batch_size, eval_height, eval_width, 3))
logits, _ = vgg.vgg_a(eval_inputs, is_training=False,
spatial_squeeze=False)
self.assertListEqual(logits.get_shape().as_list(),
[eval_batch_size, 2, 2, num_classes])
logits = tf.reduce_mean(logits, [1, 2])
predictions = tf.argmax(logits, 1)
self.assertEquals(predictions.get_shape().as_list(), [eval_batch_size])
def testForward(self):
batch_size = 1
height, width = 224, 224
with self.test_session() as sess:
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs)
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
self.assertTrue(output.any())
class VGG16Test(tf.test.TestCase):
def testBuild(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs, num_classes)
self.assertEquals(logits.op.name, 'vgg_16/fc8/squeezed')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testFullyConvolutional(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs, num_classes, spatial_squeeze=False)
self.assertEquals(logits.op.name, 'vgg_16/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 2, 2, num_classes])
def testGlobalPool(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs, num_classes, spatial_squeeze=False,
global_pool=True)
self.assertEquals(logits.op.name, 'vgg_16/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 1, 1, num_classes])
def testEndPoints(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = vgg.vgg_16(inputs, num_classes)
expected_names = ['vgg_16/conv1/conv1_1',
'vgg_16/conv1/conv1_2',
'vgg_16/pool1',
'vgg_16/conv2/conv2_1',
'vgg_16/conv2/conv2_2',
'vgg_16/pool2',
'vgg_16/conv3/conv3_1',
'vgg_16/conv3/conv3_2',
'vgg_16/conv3/conv3_3',
'vgg_16/pool3',
'vgg_16/conv4/conv4_1',
'vgg_16/conv4/conv4_2',
'vgg_16/conv4/conv4_3',
'vgg_16/pool4',
'vgg_16/conv5/conv5_1',
'vgg_16/conv5/conv5_2',
'vgg_16/conv5/conv5_3',
'vgg_16/pool5',
'vgg_16/fc6',
'vgg_16/fc7',
'vgg_16/fc8'
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
def testNoClasses(self):
batch_size = 5
height, width = 224, 224
num_classes = None
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
net, end_points = vgg.vgg_16(inputs, num_classes)
expected_names = ['vgg_16/conv1/conv1_1',
'vgg_16/conv1/conv1_2',
'vgg_16/pool1',
'vgg_16/conv2/conv2_1',
'vgg_16/conv2/conv2_2',
'vgg_16/pool2',
'vgg_16/conv3/conv3_1',
'vgg_16/conv3/conv3_2',
'vgg_16/conv3/conv3_3',
'vgg_16/pool3',
'vgg_16/conv4/conv4_1',
'vgg_16/conv4/conv4_2',
'vgg_16/conv4/conv4_3',
'vgg_16/pool4',
'vgg_16/conv5/conv5_1',
'vgg_16/conv5/conv5_2',
'vgg_16/conv5/conv5_3',
'vgg_16/pool5',
'vgg_16/fc6',
'vgg_16/fc7',
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
self.assertTrue(net.op.name.startswith('vgg_16/fc7'))
def testModelVariables(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
vgg.vgg_16(inputs, num_classes)
expected_names = ['vgg_16/conv1/conv1_1/weights',
'vgg_16/conv1/conv1_1/biases',
'vgg_16/conv1/conv1_2/weights',
'vgg_16/conv1/conv1_2/biases',
'vgg_16/conv2/conv2_1/weights',
'vgg_16/conv2/conv2_1/biases',
'vgg_16/conv2/conv2_2/weights',
'vgg_16/conv2/conv2_2/biases',
'vgg_16/conv3/conv3_1/weights',
'vgg_16/conv3/conv3_1/biases',
'vgg_16/conv3/conv3_2/weights',
'vgg_16/conv3/conv3_2/biases',
'vgg_16/conv3/conv3_3/weights',
'vgg_16/conv3/conv3_3/biases',
'vgg_16/conv4/conv4_1/weights',
'vgg_16/conv4/conv4_1/biases',
'vgg_16/conv4/conv4_2/weights',
'vgg_16/conv4/conv4_2/biases',
'vgg_16/conv4/conv4_3/weights',
'vgg_16/conv4/conv4_3/biases',
'vgg_16/conv5/conv5_1/weights',
'vgg_16/conv5/conv5_1/biases',
'vgg_16/conv5/conv5_2/weights',
'vgg_16/conv5/conv5_2/biases',
'vgg_16/conv5/conv5_3/weights',
'vgg_16/conv5/conv5_3/biases',
'vgg_16/fc6/weights',
'vgg_16/fc6/biases',
'vgg_16/fc7/weights',
'vgg_16/fc7/biases',
'vgg_16/fc8/weights',
'vgg_16/fc8/biases',
]
model_variables = [v.op.name for v in slim.get_model_variables()]
self.assertSetEqual(set(model_variables), set(expected_names))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
with self.test_session():
eval_inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(eval_inputs, is_training=False)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
predictions = tf.argmax(logits, 1)
self.assertListEqual(predictions.get_shape().as_list(), [batch_size])
def testTrainEvalWithReuse(self):
train_batch_size = 2
eval_batch_size = 1
train_height, train_width = 224, 224
eval_height, eval_width = 256, 256
num_classes = 1000
with self.test_session():
train_inputs = tf.random_uniform(
(train_batch_size, train_height, train_width, 3))
logits, _ = vgg.vgg_16(train_inputs)
self.assertListEqual(logits.get_shape().as_list(),
[train_batch_size, num_classes])
tf.get_variable_scope().reuse_variables()
eval_inputs = tf.random_uniform(
(eval_batch_size, eval_height, eval_width, 3))
logits, _ = vgg.vgg_16(eval_inputs, is_training=False,
spatial_squeeze=False)
self.assertListEqual(logits.get_shape().as_list(),
[eval_batch_size, 2, 2, num_classes])
logits = tf.reduce_mean(logits, [1, 2])
predictions = tf.argmax(logits, 1)
self.assertEquals(predictions.get_shape().as_list(), [eval_batch_size])
def testForward(self):
batch_size = 1
height, width = 224, 224
with self.test_session() as sess:
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs)
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
self.assertTrue(output.any())
class VGG19Test(tf.test.TestCase):
def testBuild(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs, num_classes)
self.assertEquals(logits.op.name, 'vgg_19/fc8/squeezed')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testFullyConvolutional(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs, num_classes, spatial_squeeze=False)
self.assertEquals(logits.op.name, 'vgg_19/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 2, 2, num_classes])
def testGlobalPool(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs, num_classes, spatial_squeeze=False,
global_pool=True)
self.assertEquals(logits.op.name, 'vgg_19/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 1, 1, num_classes])
def testEndPoints(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = vgg.vgg_19(inputs, num_classes)
expected_names = [
'vgg_19/conv1/conv1_1',
'vgg_19/conv1/conv1_2',
'vgg_19/pool1',
'vgg_19/conv2/conv2_1',
'vgg_19/conv2/conv2_2',
'vgg_19/pool2',
'vgg_19/conv3/conv3_1',
'vgg_19/conv3/conv3_2',
'vgg_19/conv3/conv3_3',
'vgg_19/conv3/conv3_4',
'vgg_19/pool3',
'vgg_19/conv4/conv4_1',
'vgg_19/conv4/conv4_2',
'vgg_19/conv4/conv4_3',
'vgg_19/conv4/conv4_4',
'vgg_19/pool4',
'vgg_19/conv5/conv5_1',
'vgg_19/conv5/conv5_2',
'vgg_19/conv5/conv5_3',
'vgg_19/conv5/conv5_4',
'vgg_19/pool5',
'vgg_19/fc6',
'vgg_19/fc7',
'vgg_19/fc8'
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
def testNoClasses(self):
batch_size = 5
height, width = 224, 224
num_classes = None
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
net, end_points = vgg.vgg_19(inputs, num_classes)
expected_names = [
'vgg_19/conv1/conv1_1',
'vgg_19/conv1/conv1_2',
'vgg_19/pool1',
'vgg_19/conv2/conv2_1',
'vgg_19/conv2/conv2_2',
'vgg_19/pool2',
'vgg_19/conv3/conv3_1',
'vgg_19/conv3/conv3_2',
'vgg_19/conv3/conv3_3',
'vgg_19/conv3/conv3_4',
'vgg_19/pool3',
'vgg_19/conv4/conv4_1',
'vgg_19/conv4/conv4_2',
'vgg_19/conv4/conv4_3',
'vgg_19/conv4/conv4_4',
'vgg_19/pool4',
'vgg_19/conv5/conv5_1',
'vgg_19/conv5/conv5_2',
'vgg_19/conv5/conv5_3',
'vgg_19/conv5/conv5_4',
'vgg_19/pool5',
'vgg_19/fc6',
'vgg_19/fc7',
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
self.assertTrue(net.op.name.startswith('vgg_19/fc7'))
def testModelVariables(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random_uniform((batch_size, height, width, 3))
vgg.vgg_19(inputs, num_classes)
expected_names = [
'vgg_19/conv1/conv1_1/weights',
'vgg_19/conv1/conv1_1/biases',
'vgg_19/conv1/conv1_2/weights',
'vgg_19/conv1/conv1_2/biases',
'vgg_19/conv2/conv2_1/weights',
'vgg_19/conv2/conv2_1/biases',
'vgg_19/conv2/conv2_2/weights',
'vgg_19/conv2/conv2_2/biases',
'vgg_19/conv3/conv3_1/weights',
'vgg_19/conv3/conv3_1/biases',
'vgg_19/conv3/conv3_2/weights',
'vgg_19/conv3/conv3_2/biases',
'vgg_19/conv3/conv3_3/weights',
'vgg_19/conv3/conv3_3/biases',
'vgg_19/conv3/conv3_4/weights',
'vgg_19/conv3/conv3_4/biases',
'vgg_19/conv4/conv4_1/weights',
'vgg_19/conv4/conv4_1/biases',
'vgg_19/conv4/conv4_2/weights',
'vgg_19/conv4/conv4_2/biases',
'vgg_19/conv4/conv4_3/weights',
'vgg_19/conv4/conv4_3/biases',
'vgg_19/conv4/conv4_4/weights',
'vgg_19/conv4/conv4_4/biases',
'vgg_19/conv5/conv5_1/weights',
'vgg_19/conv5/conv5_1/biases',
'vgg_19/conv5/conv5_2/weights',
'vgg_19/conv5/conv5_2/biases',
'vgg_19/conv5/conv5_3/weights',
'vgg_19/conv5/conv5_3/biases',
'vgg_19/conv5/conv5_4/weights',
'vgg_19/conv5/conv5_4/biases',
'vgg_19/fc6/weights',
'vgg_19/fc6/biases',
'vgg_19/fc7/weights',
'vgg_19/fc7/biases',
'vgg_19/fc8/weights',
'vgg_19/fc8/biases',
]
model_variables = [v.op.name for v in slim.get_model_variables()]
self.assertSetEqual(set(model_variables), set(expected_names))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
with self.test_session():
eval_inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(eval_inputs, is_training=False)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
predictions = tf.argmax(logits, 1)
self.assertListEqual(predictions.get_shape().as_list(), [batch_size])
def testTrainEvalWithReuse(self):
train_batch_size = 2
eval_batch_size = 1
train_height, train_width = 224, 224
eval_height, eval_width = 256, 256
num_classes = 1000
with self.test_session():
train_inputs = tf.random_uniform(
(train_batch_size, train_height, train_width, 3))
logits, _ = vgg.vgg_19(train_inputs)
self.assertListEqual(logits.get_shape().as_list(),
[train_batch_size, num_classes])
tf.get_variable_scope().reuse_variables()
eval_inputs = tf.random_uniform(
(eval_batch_size, eval_height, eval_width, 3))
logits, _ = vgg.vgg_19(eval_inputs, is_training=False,
spatial_squeeze=False)
self.assertListEqual(logits.get_shape().as_list(),
[eval_batch_size, 2, 2, num_classes])
logits = tf.reduce_mean(logits, [1, 2])
predictions = tf.argmax(logits, 1)
self.assertEquals(predictions.get_shape().as_list(), [eval_batch_size])
def testForward(self):
batch_size = 1
height, width = 224, 224
with self.test_session() as sess:
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs)
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
self.assertTrue(output.any())
if __name__ == '__main__':
tf.test.main()
|
TensorFlow2/Recommendation/WideAndDeep/triton/runner | runner | logger | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pathlib
import coloredlogs
class Logger(logging.Logger):
def __init__(self, name, level=logging.NOTSET):
super().__init__(name, level=level)
self._file_path = None
def initialize(self, file_path: pathlib.Path):
self._file_path = file_path
def write(self, log: str):
if not self._file_path:
return
with open(self._file_path, "+a") as file:
file.write(log)
LOGGER = Logger("runner")
log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
logging.basicConfig(format=log_format)
coloredlogs.install(
level=logging.INFO,
fmt=log_format,
logger=LOGGER,
field_styles={
"asctime": {"color": "green"},
"hostname": {"color": "magenta"},
"levelname": {"bold": True, "color": "blue"},
"name": {"color": "blue"},
"programname": {"color": "cyan"},
"username": {"color": "yellow"},
},
reconfigure=True,
)
|
TensorFlow/Detection/SSD/models/research/object_detection/g3doc | g3doc | configuring_jobs | # Configuring the Object Detection Training Pipeline
## Overview
The Tensorflow Object Detection API uses protobuf files to configure the
training and evaluation process. The schema for the training pipeline can be
found in object_detection/protos/pipeline.proto. At a high level, the config
file is split into 5 parts:
1. The `model` configuration. This defines what type of model will be trained
(i.e. meta-architecture, feature extractor).
2. The `train_config`, which determines the parameters used to train the model
(e.g. SGD hyperparameters, input preprocessing and feature extractor
initialization values).
3. The `eval_config`, which determines what set of metrics will be reported for
evaluation.
4. The `train_input_config`, which defines what dataset the model should be
trained on.
5. The `eval_input_config`, which defines what dataset the model will be
evaluated on. Typically this should be different than the training input
dataset.
A skeleton configuration file is shown below:
```
model {
(... Add model config here...)
}
train_config : {
(... Add train_config here...)
}
train_input_reader: {
(... Add train_input configuration here...)
}
eval_config: {
}
eval_input_reader: {
(... Add eval_input configuration here...)
}
```
## Picking Model Parameters
There are a large number of model parameters to configure. The best settings
will depend on your given application. Faster R-CNN models are better suited to
cases where high accuracy is desired and latency is of lower priority.
Conversely, if processing time is the most important factor, SSD models are
recommended. Read [our paper](https://arxiv.org/abs/1611.10012) for a more
detailed discussion on the speed vs accuracy tradeoff.
To help new users get started, sample model configurations have been provided
in the object_detection/samples/configs folder. The contents of these
configuration files can be pasted into `model` field of the skeleton
configuration. Users should note that the `num_classes` field should be changed
to a value suited for the dataset the user is training on.
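For illustration, the `num_classes` field sits inside the chosen
meta-architecture block; a trimmed, hypothetical SSD example (with 37 standing
in for your dataset's class count) could look like:
```
model {
  ssd {
    num_classes: 37
    (... remaining SSD settings from the sample config ...)
  }
}
```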
## Defining Inputs
The Tensorflow Object Detection API accepts inputs in the TFRecord file format.
Users must specify the locations of both the training and evaluation files.
Users should also specify a label map, which defines the mapping between class
ids and class names. The label map should be identical for the training and
evaluation datasets.
An example input configuration looks as follows:
```
tf_record_input_reader {
input_path: "/usr/home/username/data/train.record"
}
label_map_path: "/usr/home/username/data/label_map.pbtxt"
```
Users should substitute the `input_path` and `label_map_path` arguments and
insert the input configuration into the `train_input_reader` and
`eval_input_reader` fields in the skeleton configuration. Note that the paths
can also point to Google Cloud Storage buckets (e.g.
"gs://project_bucket/train.record") for use on Google Cloud.
## Configuring the Trainer
The `train_config` defines parts of the training process:
1. Model parameter initialization.
2. Input preprocessing.
3. SGD parameters.
A sample `train_config` is below:
```
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0002
schedule {
step: 0
learning_rate: .0002
}
schedule {
step: 900000
learning_rate: .00002
}
schedule {
step: 1200000
learning_rate: .000002
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
fine_tune_checkpoint: "/usr/home/username/tmp/model.ckpt-#####"
from_detection_checkpoint: true
load_all_detection_checkpoint_vars: true
gradient_clipping_by_norm: 10.0
data_augmentation_options {
random_horizontal_flip {
}
}
```
### Model Parameter Initialization
While optional, it is highly recommended that users utilize other object
detection checkpoints. Training an object detector from scratch can take days.
To speed up the training process, it is recommended that users re-use the
feature extractor parameters from a pre-existing image classification or
object detection checkpoint. `train_config` provides two fields to specify
pre-existing checkpoints: `fine_tune_checkpoint` and
`from_detection_checkpoint`. `fine_tune_checkpoint` should provide a path to
the pre-existing checkpoint
(ie:"/usr/home/username/checkpoint/model.ckpt-#####").
`from_detection_checkpoint` is a boolean value. If false, it assumes the
checkpoint was from an object classification checkpoint. Note that starting
from a detection checkpoint will usually result in a faster training job than
a classification checkpoint.
The list of provided checkpoints can be found [here](detection_model_zoo.md).
### Input Preprocessing
The `data_augmentation_options` in `train_config` can be used to specify
how training data can be modified. This field is optional.
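For example, several augmentations can be combined by repeating the field; the
full set of supported options is defined in
object_detection/protos/preprocessor.proto. A sketch that adds SSD-style random
cropping on top of a horizontal flip:
```
data_augmentation_options {
  random_horizontal_flip {
  }
}
data_augmentation_options {
  ssd_random_crop {
  }
}
```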
### SGD Parameters
The remaining parameters in `train_config` are hyperparameters for gradient
descent. Please note that the optimal learning rates provided in these
configuration files may depend on the specifics of the training setup (e.g.
number of workers, gpu type).
## Configuring the Evaluator
The main components to set in `eval_config` are `num_examples` and
`metrics_set`. The parameter `num_examples` indicates the number of batches
(currently of batch size 1) used for an evaluation cycle, and is often the total
size of the evaluation dataset. The parameter `metrics_set` indicates which
metrics to run during evaluation (i.e. `"coco_detection_metrics"`).
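As a sketch, an `eval_config` for a hypothetical evaluation set of 8000 images
could look as follows:
```
eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 8000
}
```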
|
PyTorch/Classification/GPUNet/triton/deployment_toolkit | deployment_toolkit | report | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
import re
from typing import Dict, List
from natsort import natsorted
from tabulate import tabulate
def sort_results(results: List):
results = natsorted(results, key=lambda item: [item[key] for key in item.keys()])
return results
def save_results(filename: str, data: List, formatted: bool = False):
data = format_data(data=data) if formatted else data
with open(filename, "a") as csvfile:
fieldnames = data[0].keys()
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for row in data:
writer.writerow(row)
def format_data(data: List[Dict]) -> List[Dict]:
formatted_data = list()
for item in data:
formatted_item = format_keys(data=item)
formatted_data.append(formatted_item)
return formatted_data
def format_keys(data: Dict) -> Dict:
keys = {format_key(key=key): value for key, value in data.items()}
return keys
def format_key(key: str) -> str:
key = " ".join([k.capitalize() for k in re.split("_| ", key)])
return key
def show_results(results: List[Dict]):
headers = list(results[0].keys())
summary = map(lambda x: list(map(lambda item: item[1], x.items())), results)
print(tabulate(summary, headers=headers))
|
TensorFlow/Translation/GNMT | GNMT | requirements | sacrebleu==1.2.10
git+https://github.com/NVIDIA/[email protected]#egg=dllogger |
PyTorch/LanguageModeling/BERT/triton/dist4l/runner | runner | start_NVIDIA-T4 | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
# Install Docker
. /etc/os-release && \
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
echo "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list && \
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add - && \
curl -s -L https://nvidia.github.io/nvidia-docker/$ID$VERSION_ID/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list && \
apt-get update && \
apt-get install -y docker-ce docker-ce-cli containerd.io nvidia-docker2
# Install packages
pip install -r triton/runner/requirements.txt
# Evaluate Runner
python3 -m "triton.dist4l.runner.__main__" \
--config-path "triton/dist4l/runner/config_NVIDIA-T4.yaml" \
--device 0 |
DGLPyTorch/DrugDiscovery/SE3Transformer/scripts | scripts | train | #!/usr/bin/env bash
# CLI args with defaults
BATCH_SIZE=${1:-240}
AMP=${2:-true}
NUM_EPOCHS=${3:-100}
LEARNING_RATE=${4:-0.002}
WEIGHT_DECAY=${5:-0.1}
# choices: 'mu', 'alpha', 'homo', 'lumo', 'gap', 'r2', 'zpve', 'U0', 'U', 'H', 'G', 'Cv',
# 'U0_atom', 'U_atom', 'H_atom', 'G_atom', 'A', 'B', 'C'
TASK=homo
python -m se3_transformer.runtime.training \
--amp "$AMP" \
--batch_size "$BATCH_SIZE" \
--epochs "$NUM_EPOCHS" \
--lr "$LEARNING_RATE" \
--weight_decay "$WEIGHT_DECAY" \
--use_layer_norm \
--norm \
--save_ckpt_path model_qm9.pth \
--precompute_bases \
--seed 42 \
--task "$TASK" |
TensorFlow2/LanguageModeling/BERT/official/utils/misc | misc | model_helpers_test | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Model Helper functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf # pylint: disable=g-bad-import-order
from official.utils.misc import keras_utils
from official.utils.misc import model_helpers
class PastStopThresholdTest(tf.test.TestCase):
"""Tests for past_stop_threshold."""
def setUp(self):
super(PastStopThresholdTest, self).setUp()
if keras_utils.is_v2_0:
tf.compat.v1.disable_eager_execution()
def test_past_stop_threshold(self):
"""Tests for normal operating conditions."""
self.assertTrue(model_helpers.past_stop_threshold(0.54, 1))
self.assertTrue(model_helpers.past_stop_threshold(54, 100))
self.assertFalse(model_helpers.past_stop_threshold(0.54, 0.1))
self.assertFalse(model_helpers.past_stop_threshold(-0.54, -1.5))
self.assertTrue(model_helpers.past_stop_threshold(-0.54, 0))
self.assertTrue(model_helpers.past_stop_threshold(0, 0))
self.assertTrue(model_helpers.past_stop_threshold(0.54, 0.54))
def test_past_stop_threshold_none_false(self):
"""Tests that check None returns false."""
self.assertFalse(model_helpers.past_stop_threshold(None, -1.5))
self.assertFalse(model_helpers.past_stop_threshold(None, None))
self.assertFalse(model_helpers.past_stop_threshold(None, 1.5))
# Zero should be okay, though.
self.assertTrue(model_helpers.past_stop_threshold(0, 1.5))
def test_past_stop_threshold_not_number(self):
"""Tests for error conditions."""
with self.assertRaises(ValueError):
model_helpers.past_stop_threshold("str", 1)
with self.assertRaises(ValueError):
model_helpers.past_stop_threshold("str", tf.constant(5))
with self.assertRaises(ValueError):
model_helpers.past_stop_threshold("str", "another")
with self.assertRaises(ValueError):
model_helpers.past_stop_threshold(0, None)
with self.assertRaises(ValueError):
model_helpers.past_stop_threshold(0.7, "str")
with self.assertRaises(ValueError):
model_helpers.past_stop_threshold(tf.constant(4), None)
class SyntheticDataTest(tf.test.TestCase):
"""Tests for generate_synthetic_data."""
def test_generate_synthetic_data(self):
input_element, label_element = tf.compat.v1.data.make_one_shot_iterator(
model_helpers.generate_synthetic_data(input_shape=tf.TensorShape([5]),
input_value=123,
input_dtype=tf.float32,
label_shape=tf.TensorShape([]),
label_value=456,
label_dtype=tf.int32)).get_next()
with self.session() as sess:
for n in range(5):
inp, lab = sess.run((input_element, label_element))
self.assertAllClose(inp, [123., 123., 123., 123., 123.])
self.assertEquals(lab, 456)
def test_generate_only_input_data(self):
d = model_helpers.generate_synthetic_data(
input_shape=tf.TensorShape([4]),
input_value=43.5,
input_dtype=tf.float32)
element = tf.compat.v1.data.make_one_shot_iterator(d).get_next()
self.assertFalse(isinstance(element, tuple))
with self.session() as sess:
inp = sess.run(element)
self.assertAllClose(inp, [43.5, 43.5, 43.5, 43.5])
def test_generate_nested_data(self):
d = model_helpers.generate_synthetic_data(
input_shape={'a': tf.TensorShape([2]),
'b': {'c': tf.TensorShape([3]), 'd': tf.TensorShape([])}},
input_value=1.1)
element = tf.compat.v1.data.make_one_shot_iterator(d).get_next()
self.assertIn('a', element)
self.assertIn('b', element)
self.assertEquals(len(element['b']), 2)
self.assertIn('c', element['b'])
self.assertIn('d', element['b'])
self.assertNotIn('c', element)
with self.session() as sess:
inp = sess.run(element)
self.assertAllClose(inp['a'], [1.1, 1.1])
self.assertAllClose(inp['b']['c'], [1.1, 1.1, 1.1])
self.assertAllClose(inp['b']['d'], 1.1)
if __name__ == "__main__":
tf.test.main()
|
PyTorch/SpeechRecognition/Jasper/notebooks | notebooks | README | # Jasper notebooks
This folder provides different notebooks to run Jasper inference step by step.
## Table Of Contents
- [Jasper Jupyter Notebook for TensorRT](#jasper-jupyter-notebook-for-tensorrt)
* [Requirements](#requirements)
* [Quick Start Guide](#quick-start-guide)
- [Jasper Colab Notebook for TensorRT](#jasper-colab-notebook-for-tensorrt)
* [Requirements](#requirements-1)
* [Quick Start Guide](#quick-start-guide-1)
- [Jasper Jupyter Notebook for TensorRT Inference Server](#jasper-jupyter-notebook-for-tensorrt-inference-server)
* [Requirements](#requirements-2)
* [Quick Start Guide](#quick-start-guide-2)
## Jasper Jupyter Notebook for TensorRT
### Requirements
`./trt/` contains a Dockerfile which extends the PyTorch 19.09-py3 NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
* [NVIDIA Turing](https://www.nvidia.com/en-us/geforce/turing/) or [Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) based GPU
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [PyTorch 19.09-py3 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
* [NVIDIA machine learning repository](https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb) and [NVIDIA cuda repository](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb) for NVIDIA TensorRT 6
* [Pretrained Jasper Model Checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:jasperpyt_fp16)
### Quick Start Guide
Running the following scripts will build and launch the container containing all required dependencies for both TensorRT as well as native PyTorch. This is necessary for using inference with TensorRT and can also be used for data download, processing and training of the model.
#### 1. Clone the repository.
```
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/SpeechRecognition/Jasper
```
#### 2. Build the Jasper PyTorch with TRT 6 container:
```
bash trt/scripts/docker/build.sh
```
#### 3. Create directories
Prepare to start a detached session in the NGC container.
Create three directories on your local machine for the dataset, checkpoint, and result, respectively, named "data", "checkpoint", and "result":
```
mkdir data checkpoint result
```
#### 4. Download the checkpoint
Download the checkpoint file jasperpyt_fp16 from NGC Model Repository:
- https://ngc.nvidia.com/catalog/models/nvidia:jasperpyt_fp16
to the directory: _checkpoint_
The Jasper PyTorch container will be launched in the Jupyter notebook. Within the container, the contents of the root repository will be copied to the /workspace/jasper directory.
The /datasets, /checkpoints, /results directories are mounted as volumes and mapped to the corresponding directories "data", "checkpoint", and "result" on the host.
#### 5. Run the notebook
For running the notebook on your local machine, run:
```
jupyter notebook -- notebooks/JasperTRT.ipynb
```
For running the notebook on another machine remotely, run:
```
jupyter notebook --ip=0.0.0.0 --allow-root
```
And navigate a web browser to the IP address or hostname of the host machine at port 8888: `http://[host machine]:8888`
Use the token listed in the output from running the jupyter command to log in, for example: `http://[host machine]:8888/?token=aae96ae9387cd28151868fee318c3b3581a2d794f3b25c6b`
## Jasper Colab Notebook for TensorRT
### Requirements
`./trt/` contains a Dockerfile which extends the PyTorch 19.09-py3 NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
* [NVIDIA Turing](https://www.nvidia.com/en-us/geforce/turing/) or [Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) based GPU
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [PyTorch 19.09-py3 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
* [NVIDIA machine learning repository](https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb) and [NVIDIA cuda repository](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb) for NVIDIA TensorRT 6
* [Pretrained Jasper Model Checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:jasperpyt_fp16)
### Quick Start Guide
Running the following scripts will build and launch the container containing all required dependencies for both TensorRT as well as native PyTorch. This is necessary for using inference with TensorRT and can also be used for data download, processing and training of the model.
#### 1. Clone the repository.
```
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/SpeechRecognition/Jasper
```
#### 2. Build the Jasper PyTorch with TRT 6 container:
```
bash trt/scripts/docker/build.sh
```
#### 3. Create directories
Prepare to start a detached session in the NGC container.
Create three directories on your local machine for the dataset, checkpoint, and result, respectively, named "data", "checkpoint", and "result":
```
mkdir data checkpoint result
```
#### 4. Download the checkpoint
Download the checkpoint file jasperpyt_fp16 from NGC Model Repository:
- https://ngc.nvidia.com/catalog/models/nvidia:jasperpyt_fp16
to the directory: _checkpoint_
The Jasper PyTorch container will be launched in the Jupyter notebook. Within the container, the contents of the root repository will be copied to the /workspace/jasper directory.
The /datasets, /checkpoints, /results directories are mounted as volumes and mapped to the corresponding directories "data", "checkpoint", and "result" on the host.
#### 5. Run the notebook
For running the notebook on your local machine, run:
```
jupyter notebook -- notebooks/Colab_Jasper_TRT_inference_demo.ipynb
```
For running the notebook on another machine remotely, run:
```
jupyter notebook --ip=0.0.0.0 --allow-root
```
And navigate a web browser to the IP address or hostname of the host machine at port 8888: `http://[host machine]:8888`
Use the token listed in the output from running the jupyter command to log in, for example: `http://[host machine]:8888/?token=aae96ae9387cd28151868fee318c3b3581a2d794f3b25c6b`
## Jasper Jupyter Notebook for TensorRT Inference Server
### Requirements
`./trtis/` contains a Dockerfile which extends the PyTorch 19.09-py3 NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
* [NVIDIA Turing](https://www.nvidia.com/en-us/geforce/turing/) or [Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) based GPU
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [PyTorch 19.09-py3 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
* [TensorRT Inference Server 19.09 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:tensorrtserver)
* [NVIDIA machine learning repository](https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb) and [NVIDIA cuda repository](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb) for NVIDIA TensorRT 6
* [Pretrained Jasper Model Checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:jasperpyt_fp16)
### Quick Start Guide
#### 1. Clone the repository.
```
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/SpeechRecognition/Jasper
```
#### 2. Build a container that extends NGC PyTorch 19.09, TensorRT, TensorRT Inference Server, and TensorRT Inference Client.
```
bash trtis/scripts/docker/build.sh
```
#### 3. Download the checkpoint
Download the checkpoint file jasper_fp16.pt from NGC Model Repository:
- https://ngc.nvidia.com/catalog/models/nvidia:jasperpyt_fp16
to a user-specified directory _CHECKPOINT_DIR_
#### 4. Run the notebook
For running the notebook on your local machine, run:
```
jupyter notebook -- notebooks/JasperTRTIS.ipynb
```
For running the notebook on another machine remotely, run:
```
jupyter notebook --ip=0.0.0.0 --allow-root
```
And navigate a web browser to the IP address or hostname of the host machine at port 8888: `http://[host machine]:8888`
Use the token listed in the output from running the jupyter command to log in, for example: `http://[host machine]:8888/?token=aae96ae9387cd28151868fee318c3b3581a2d794f3b25c6b`
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/graph_aligner | graph_aligner | base_graph_aligner | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
class BaseGraphAligner(abc.ABC):
"""Base class for all graph alignment objects"""
@classmethod
def get_aligners(cls, include_parents=True):
"""Recursively find sublcasses of `BaseGraphAligner`
Args:
include_parents (bool): whether to also include classes that are parents of other aligner classes.
(default: `True`)
"""
aligners = dict()
for child in cls.__subclasses__():
children = child.get_aligners(include_parents)
aligners.update(children)
if include_parents or not children:
if abc.ABC not in child.__bases__:
aligners[child.__name__] = child
return aligners
def fit(self, *args, **kwargs) -> None:
"""function to fit aligner required to be implemented by aligners"""
raise NotImplementedError()
def align(self, *args, **kwargs):
"""align function to align generated graph and generated features,
required to be implemented by aligners
"""
raise NotImplementedError()
def save(self, path):
raise NotImplementedError()
@classmethod
def load(cls, path):
raise NotImplementedError()
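# Usage sketch (illustrative only; `MyAligner` below is a hypothetical subclass,
# not part of this module). Concrete aligners are discovered simply by
# subclassing BaseGraphAligner; `get_aligners` walks `__subclasses__` recursively:
#
# class MyAligner(BaseGraphAligner):
#     def fit(self, *args, **kwargs):
#         ...
#     def align(self, *args, **kwargs):
#         ...
#
# aligners = BaseGraphAligner.get_aligners(include_parents=False)
# assert "MyAligner" in aligners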
|
TensorFlow/Segmentation/UNet_Industrial/model | model | unet | # !/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
import tensorflow as tf
import horovod.tensorflow as hvd
from model import layers
from model import blocks
from utils import hvd_utils
from utils import losses
from utils import metrics
from utils import image_processing
from dllogger import Logger
__all__ = ["UNet_v1"]
class UNet_v1(object):
authorized_weight_init_methods = [
"he_normal",
"he_uniform",
"glorot_normal",
"glorot_uniform",
"orthogonal",
]
authorized_models_variants = [
"original",
"tinyUNet",
]
def __init__(
self,
model_name,
compute_format,
input_format,
n_output_channels,
unet_variant,
activation_fn,
weight_init_method,
):
if unet_variant == "original": # Total Params: 36,950,273
input_filters = 64
unet_block_filters = [128, 256, 512]
bottleneck_filters = 1024
output_filters = 64
elif unet_variant == "tinyUNet": # Total Params: 1,824,945
input_filters = 32
unet_block_filters = [32, 64, 128]
bottleneck_filters = 256
output_filters = 32
else:
raise ValueError(
"Unknown `UNet` variant: %s. Authorized: %s" % (unet_variant, UNet_v1.authorized_models_variants)
)
if activation_fn not in blocks.authorized_activation_fn:
raise ValueError(
"Unknown activation function: %s - Authorised: %s" % (activation_fn, blocks.authorized_activation_fn)
)
self.model_hparams = tf.contrib.training.HParams(
compute_format=compute_format,
input_format=input_format,
input_filters=input_filters,
unet_block_filters=unet_block_filters,
bottleneck_filters=bottleneck_filters,
output_filters=output_filters,
n_output_channels=n_output_channels,
model_name=model_name,
)
self.conv2d_hparams = tf.contrib.training.HParams(
kernel_initializer=None, bias_initializer=tf.initializers.constant(0.0), activation_fn=activation_fn
)
if weight_init_method == "he_normal":
self.conv2d_hparams.kernel_initializer = tf.initializers.variance_scaling(
scale=2.0, distribution='truncated_normal', mode='fan_in'
)
elif weight_init_method == "he_uniform":
self.conv2d_hparams.kernel_initializer = tf.initializers.variance_scaling(
scale=2.0, distribution='uniform', mode='fan_in'
)
elif weight_init_method == "glorot_normal":
self.conv2d_hparams.kernel_initializer = tf.initializers.variance_scaling(
scale=1.0, distribution='truncated_normal', mode='fan_avg'
)
elif weight_init_method == "glorot_uniform":
self.conv2d_hparams.kernel_initializer = tf.initializers.variance_scaling(
scale=1.0, distribution='uniform', mode='fan_avg'
)
elif weight_init_method == "orthogonal":
self.conv2d_hparams.kernel_initializer = tf.initializers.orthogonal(gain=1.0)
else:
raise ValueError(
"Unknown weight init method: %s - Authorized: %s" %
(weight_init_method, UNet_v1.authorized_weight_init_methods)
)
def __call__(self, features, labels, mode, params):
if "debug_verbosity" not in params.keys():
raise RuntimeError("Parameter `debug_verbosity` is missing...")
if mode == tf.estimator.ModeKeys.TRAIN:
if "rmsprop_decay" not in params.keys():
raise RuntimeError("Parameter `rmsprop_decay` is missing...")
if "rmsprop_momentum" not in params.keys():
raise RuntimeError("Parameter `rmsprop_momentum` is missing...")
if "learning_rate" not in params.keys():
raise RuntimeError("Parameter `learning_rate` is missing...")
if "learning_rate_decay_steps" not in params.keys():
raise RuntimeError("Parameter `learning_rate` is missing...")
if "learning_rate_decay_factor" not in params.keys():
raise RuntimeError("Parameter `learning_rate` is missing...")
if "weight_decay" not in params.keys():
raise RuntimeError("Parameter `weight_decay` is missing...")
if "loss_fn_name" not in params.keys():
raise RuntimeError("Parameter `loss_fn_name` is missing...")
if mode == tf.estimator.ModeKeys.PREDICT:
y_pred, y_pred_logits = self.build_model(
features, training=False, reuse=False, debug_verbosity=params["debug_verbosity"]
)
predictions = {'logits': y_pred}
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
input_image, mask_image = features
with tf.device("/gpu:0"):
tf.identity(input_image, name="input_image_ref")
tf.identity(mask_image, name="mask_image_ref")
tf.identity(labels, name="labels_ref")
y_pred, y_pred_logits = self.build_model(
input_image,
training=mode == tf.estimator.ModeKeys.TRAIN,
reuse=False,
debug_verbosity=params["debug_verbosity"]
)
all_trainable_vars = tf.reduce_sum([tf.reduce_prod(v.shape) for v in tf.trainable_variables()])
tf.identity(all_trainable_vars, name='trainable_parameters_count_ref')
if mode == tf.estimator.ModeKeys.EVAL:
eval_metrics = dict()
# ==================== Samples ==================== #
image_uint8 = tf.cast((input_image + 1) * 127.5, dtype=tf.uint8)
input_image_jpeg = tf.image.encode_jpeg(image_uint8[0], format='grayscale', quality=100)
tf.identity(input_image_jpeg, name="input_image_jpeg_ref")
for threshold in [None, 0.05, 0.125, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99]:
binarize_img, binarize_img_jpeg = image_processing.binarize_output(y_pred[0], threshold=threshold)
tf.identity(binarize_img_jpeg, name="output_sample_ths_%s_ref" % threshold)
tf.summary.image('output_sample_ths_%s' % threshold, binarize_img, 10)
# ==================== Evaluation Metrics ==================== #
with tf.name_scope("IoU_Metrics"):
for threshold in [0.05, 0.125, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99]:
iou_score = metrics.iou_score(y_pred=y_pred, y_true=mask_image, threshold=threshold)
tf.identity(iou_score, name='iou_score_ths_%s_ref' % threshold)
tf.summary.scalar('iou_score_ths_%s' % threshold, iou_score)
if mode == tf.estimator.ModeKeys.EVAL:
eval_metrics["IoU_THS_%s" % threshold] = tf.metrics.mean(iou_score)
labels = tf.cast(labels, tf.float32)
labels_preds = tf.reduce_max(y_pred, axis=(1, 2, 3))
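# NOTE: the assert below evaluates a non-empty tuple and is therefore always
# truthy (a no-op at runtime); it documents the expectation that labels_preds
# already lies in [0, 1] before the clipping applied right after it.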
assert (
abs(labels_preds - tf.clip_by_value(labels_preds, 0, 1)) < 0.00001,
"Clipping labels_preds introduces non-trivial loss."
)
labels_preds = tf.clip_by_value(labels_preds, 0, 1)
with tf.variable_scope("Confusion_Matrix") as scope:
tp, update_tp = tf.metrics.true_positives_at_thresholds(
labels=labels,
predictions=labels_preds,
thresholds=[0.05, 0.125, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99],
)
tn, update_tn = tf.metrics.true_negatives_at_thresholds(
labels=labels,
predictions=labels_preds,
thresholds=[0.05, 0.125, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99],
)
fp, update_fp = tf.metrics.false_positives_at_thresholds(
labels=labels,
predictions=labels_preds,
thresholds=[0.05, 0.125, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99],
)
fn, update_fn = tf.metrics.false_negatives_at_thresholds(
labels=labels,
predictions=labels_preds,
thresholds=[0.05, 0.125, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99],
)
if mode == tf.estimator.ModeKeys.TRAIN:
local_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope=scope.name)
confusion_matrix_reset_op = tf.initializers.variables(local_vars, name='reset_op')
with tf.control_dependencies([confusion_matrix_reset_op]):
with tf.control_dependencies([update_tp, update_tn, update_fp, update_fn]):
tp = tf.identity(tp)
tn = tf.identity(tn)
fp = tf.identity(fp)
fn = tf.identity(fn)
else:
eval_metrics["Confusion_Matrix_TP"] = tp, update_tp
eval_metrics["Confusion_Matrix_TN"] = tn, update_tn
eval_metrics["Confusion_Matrix_FP"] = fp, update_fp
eval_metrics["Confusion_Matrix_FN"] = fn, update_fn
tf.identity(tp, name='true_positives_ref') # Confusion_Matrix/true_positives_ref:0
tf.identity(tn, name='true_negatives_ref') # Confusion_Matrix/true_negatives_ref:0
tf.identity(fp, name='false_positives_ref') # Confusion_Matrix/false_positives_ref:0
tf.identity(fn, name='false_negatives_ref') # Confusion_Matrix/false_negatives_ref:0
tf.summary.scalar('true_positives', tp[3]) # For Ths = 0.5
tf.summary.scalar('true_negatives', tn[3]) # For Ths = 0.5
tf.summary.scalar('false_positives', fp[3]) # For Ths = 0.5
tf.summary.scalar('false_negatives', fn[3]) # For Ths = 0.5
binarized_mask, binarized_mask_jpeg = image_processing.binarize_output(mask_image[0], threshold=0.5)
tf.identity(binarized_mask_jpeg, name="mask_sample_ref")
tf.summary.image('sample_mask', binarized_mask, 10)
##########################
mask_max_val = tf.reduce_max(mask_image)
tf.identity(mask_max_val, name='mask_max_val_ref')
mask_min_val = tf.reduce_min(mask_image)
tf.identity(mask_min_val, name='mask_min_val_ref')
mask_mean_val = tf.reduce_mean(mask_image)
tf.identity(mask_mean_val, name='mask_mean_val_ref')
mask_std_val = tf.math.reduce_std(mask_image)
tf.identity(mask_std_val, name='mask_std_val_ref')
##########################
output_max_val = tf.reduce_max(y_pred)
tf.identity(output_max_val, name='output_max_val_ref')
output_min_val = tf.reduce_min(y_pred)
tf.identity(output_min_val, name='output_min_val_ref')
output_mean_val = tf.reduce_mean(y_pred)
tf.identity(output_mean_val, name='output_mean_val_ref')
output_std_val = tf.math.reduce_std(y_pred)
tf.identity(output_std_val, name='output_std_val_ref')
with tf.variable_scope("losses"):
# ==================== Reconstruction Loss ==================== #
if params["loss_fn_name"] == "x-entropy":
reconstruction_loss = losses.reconstruction_x_entropy(y_pred=y_pred, y_true=mask_image)
elif params["loss_fn_name"] == "l2_loss":
reconstruction_loss = losses.reconstruction_l2loss(y_pred=y_pred, y_true=mask_image)
elif params["loss_fn_name"] == "dice_sorensen":
reconstruction_loss = 1 - losses.dice_coe(y_pred=y_pred, y_true=mask_image, loss_type='sorensen')
elif params["loss_fn_name"] == "dice_jaccard":
reconstruction_loss = 1 - losses.dice_coe(y_pred=y_pred, y_true=mask_image, loss_type='jaccard')
elif params["loss_fn_name"] == "adaptive_loss":
reconstruction_loss = losses.adaptive_loss(
y_pred=y_pred,
y_pred_logits=y_pred_logits,
y_true=mask_image,
switch_at_threshold=0.3,
loss_type='sorensen'
)
else:
raise ValueError("Unknown loss function received: %s" % params["loss_fn_name"])
tf.identity(reconstruction_loss, name='reconstruction_loss_ref')
tf.summary.scalar('reconstruction_loss', reconstruction_loss)
if mode == tf.estimator.ModeKeys.TRAIN:
# ============== Regularization Loss ==================== #
l2_loss = losses.regularization_l2loss(weight_decay=params["weight_decay"])
tf.identity(l2_loss, name='l2_loss_ref')
tf.summary.scalar('l2_loss', l2_loss)
total_loss = tf.add(reconstruction_loss, l2_loss, name="total_loss")
else:
total_loss = reconstruction_loss
tf.identity(total_loss, name='total_loss_ref')
tf.summary.scalar('total_loss', total_loss)
if mode == tf.estimator.ModeKeys.TRAIN:
with tf.variable_scope("optimizers"):
# Update Global Step
global_step = tf.train.get_or_create_global_step()
tf.identity(global_step, name="global_step_ref")
learning_rate = tf.train.exponential_decay(
learning_rate=params["learning_rate"],
decay_steps=params["learning_rate_decay_steps"],
decay_rate=params["learning_rate_decay_factor"],
global_step=global_step,
staircase=True
)
tf.identity(learning_rate, name="learning_rate_ref")
tf.summary.scalar('learning_rate_ref', learning_rate)
opt = tf.train.RMSPropOptimizer(
learning_rate=learning_rate,
use_locking=False,
centered=True,
decay=params["rmsprop_decay"],
momentum=params["rmsprop_momentum"],
)
if hvd_utils.is_using_hvd():
opt = hvd.DistributedOptimizer(opt, device_dense='/gpu:0')
if params["apply_manual_loss_scaling"]:
# if not hvd_utils.is_using_hvd() or hvd.rank() == 0:
# Logger.log("Applying manual Loss Scaling ...")
loss_scale_manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
init_loss_scale=2**32, # 4,294,967,296
incr_every_n_steps=1000
)
opt = tf.contrib.mixed_precision.LossScaleOptimizer(opt, loss_scale_manager)
deterministic = True
gate_gradients = (tf.train.Optimizer.GATE_OP if deterministic else tf.train.Optimizer.GATE_NONE)
backprop_op = opt.minimize(total_loss, gate_gradients=gate_gradients, global_step=global_step)
train_op = tf.group(backprop_op, tf.get_collection(tf.GraphKeys.UPDATE_OPS))
return tf.estimator.EstimatorSpec(
mode,
loss=total_loss,
train_op=train_op,
)
elif mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(
mode, loss=total_loss, eval_metric_ops=eval_metrics, predictions={"output": y_pred}
)
else:
raise NotImplementedError('Unknown mode {}'.format(mode))
def build_model(self, inputs, training=True, reuse=False, debug_verbosity=0):
"""
U-Net: Convolutional Networks for Biomedical Image Segmentation
https://arxiv.org/pdf/1505.04597
"""
skip_connections = []
with tf.variable_scope(self.model_hparams.model_name, reuse=reuse):
with tf.variable_scope("input_reshape"):
with tf.variable_scope("initial_zero_padding"):
inputs = tf.image.resize_image_with_crop_or_pad(inputs, target_height=512, target_width=512)
if self.model_hparams.input_format == 'NHWC' and self.model_hparams.compute_format == 'NCHW':
# Convert the inputs from channels_last (NHWC) to channels_first (NCHW).
# This provides a large performance boost on GPU. See
# https://www.tensorflow.org/performance/performance_guide#data_formats
# Reshape inputs: NHWC => NCHW
net = tf.transpose(inputs, [0, 3, 1, 2])
elif self.model_hparams.input_format == 'NCHW' and self.model_hparams.compute_format == 'NHWC':
# Reshape inputs: NCHW => NHWC
net = tf.transpose(inputs, [0, 2, 3, 1])
else:
net = inputs
# net, out = input_block(net, filters=64)
net, out = blocks.input_unet_block(
net,
filters=self.model_hparams.input_filters,
data_format=self.model_hparams.compute_format,
is_training=training,
conv2d_hparams=self.conv2d_hparams
)
skip_connections.append(out)
for idx, filters in enumerate(self.model_hparams.unet_block_filters):
# net, out = downsample_block(net, filters=filters, idx=idx)
net, skip_connect = blocks.downsample_unet_block(
net,
filters=filters,
data_format=self.model_hparams.compute_format,
is_training=training,
conv2d_hparams=self.conv2d_hparams,
block_name="downsample_block_%d" % (idx + 1)
)
skip_connections.append(skip_connect)
net = blocks.bottleneck_unet_block(
net,
filters=self.model_hparams.bottleneck_filters,
data_format=self.model_hparams.compute_format,
is_training=training,
conv2d_hparams=self.conv2d_hparams,
)
for idx, filters in enumerate(reversed(self.model_hparams.unet_block_filters)):
net = blocks.upsample_unet_block(
net,
residual_input=skip_connections.pop(),
filters=filters,
data_format=self.model_hparams.compute_format,
is_training=training,
conv2d_hparams=self.conv2d_hparams,
block_name='upsample_block_%d' % (idx + 1)
)
logits = blocks.output_unet_block(
inputs=net,
residual_input=skip_connections.pop(),
filters=self.model_hparams.output_filters,
n_output_channels=self.model_hparams.n_output_channels,
data_format=self.model_hparams.compute_format,
is_training=training,
conv2d_hparams=self.conv2d_hparams,
block_name='ouputs_block'
)
if self.model_hparams.compute_format == "NCHW":
logits = tf.transpose(logits, [0, 2, 3, 1])
outputs = layers.sigmoid(logits)
return outputs, logits
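# Usage sketch (illustrative; "relu" and "he_normal" are assumptions here and must
# match entries in blocks.authorized_activation_fn / the authorized weight-init list above):
#
# unet = UNet_v1(
#     model_name="UNet_v1",
#     compute_format="NHWC",
#     input_format="NHWC",
#     n_output_channels=1,
#     unet_variant="tinyUNet",
#     activation_fn="relu",
#     weight_init_method="he_normal",
# )
# The instance is then used as a tf.estimator model_fn: unet(features, labels, mode, params)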
|
PyTorch/SpeechSynthesis/HiFiGAN/common/text | text | cmudict | """ from https://github.com/keithito/tacotron """
import re
import sys
import urllib.request
from pathlib import Path
valid_symbols = [
'AA', 'AA0', 'AA1', 'AA2', 'AE', 'AE0', 'AE1', 'AE2', 'AH', 'AH0', 'AH1', 'AH2',
'AO', 'AO0', 'AO1', 'AO2', 'AW', 'AW0', 'AW1', 'AW2', 'AY', 'AY0', 'AY1', 'AY2',
'B', 'CH', 'D', 'DH', 'EH', 'EH0', 'EH1', 'EH2', 'ER', 'ER0', 'ER1', 'ER2', 'EY',
'EY0', 'EY1', 'EY2', 'F', 'G', 'HH', 'IH', 'IH0', 'IH1', 'IH2', 'IY', 'IY0', 'IY1',
'IY2', 'JH', 'K', 'L', 'M', 'N', 'NG', 'OW', 'OW0', 'OW1', 'OW2', 'OY', 'OY0',
'OY1', 'OY2', 'P', 'R', 'S', 'SH', 'T', 'TH', 'UH', 'UH0', 'UH1', 'UH2', 'UW',
'UW0', 'UW1', 'UW2', 'V', 'W', 'Y', 'Z', 'ZH'
]
_valid_symbol_set = set(valid_symbols)
class CMUDict:
'''Thin wrapper around CMUDict data. http://www.speech.cs.cmu.edu/cgi-bin/cmudict'''
def __init__(self, file_or_path=None, heteronyms_path=None, keep_ambiguous=True):
self._entries = {}
self.heteronyms = []
if file_or_path is not None:
self.initialize(file_or_path, heteronyms_path, keep_ambiguous)
def initialize(self, file_or_path, heteronyms_path, keep_ambiguous=True):
if isinstance(file_or_path, str):
if not Path(file_or_path).exists():
print("CMUdict missing. Downloading to data/cmudict/.")
self.download()
with open(file_or_path, encoding='latin-1') as f:
entries = _parse_cmudict(f)
else:
entries = _parse_cmudict(file_or_path)
if not keep_ambiguous:
entries = {word: pron for word, pron in entries.items() if len(pron) == 1}
self._entries = entries
if heteronyms_path is not None:
with open(heteronyms_path, encoding='utf-8') as f:
self.heteronyms = [l.rstrip() for l in f]
def __len__(self):
if len(self._entries) == 0:
raise ValueError("CMUDict not initialized")
return len(self._entries)
def lookup(self, word):
'''Returns list of ARPAbet pronunciations of the given word.'''
if len(self._entries) == 0:
raise ValueError("CMUDict not initialized")
return self._entries.get(word.upper())
def download(self):
url = 'https://github.com/Alexir/CMUdict/raw/master/cmudict-0.7b'
try:
Path('data/cmudict').mkdir(parents=False, exist_ok=True)
urllib.request.urlretrieve(url, filename='data/cmudict/cmudict-0.7b')
except:
print("Automatic download of CMUdict failed. Try manually with:")
print()
print(" bash scripts/download_cmudict.sh")
print()
print("and re-run the script.")
sys.exit(0)
_alt_re = re.compile(r'\([0-9]+\)')
def _parse_cmudict(file):
cmudict = {}
for line in file:
if len(line) and (line[0] >= 'A' and line[0] <= 'Z' or line[0] == "'"):
parts = line.split(' ')
word = re.sub(_alt_re, '', parts[0])
pronunciation = _get_pronunciation(parts[1])
if pronunciation:
if word in cmudict:
cmudict[word].append(pronunciation)
else:
cmudict[word] = [pronunciation]
return cmudict
def _get_pronunciation(s):
parts = s.strip().split(' ')
for part in parts:
if part not in _valid_symbol_set:
return None
return ' '.join(parts)
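# Usage sketch (illustrative; data/cmudict/cmudict-0.7b is the path download() writes to):
#
# cmudict = CMUDict('data/cmudict/cmudict-0.7b')
# print(len(cmudict))             # number of dictionary entries
# print(cmudict.lookup('hello'))  # list of ARPAbet pronunciation strings, or None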
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/deployment/deployment_toolkit/triton_performance_runner/perf_analyzer | perf_analyzer | perf_analyzer | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pathlib
from subprocess import PIPE, CalledProcessError, Popen
# method from PEP-366 to support relative import in executed modules
from typing import List, Optional
if __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .exceptions import PerfAnalyzerException
MAX_INTERVAL_CHANGES = 10
COUNT_INTERVAL_DELTA = 50
TIME_INTERVAL_DELTA = 2000
LOGGER = logging.getLogger(__name__)
class PerfAnalyzer:
"""
This class provides an interface for running workloads
with perf_analyzer.
"""
def __init__(self, config, timeout: Optional[int]):
"""
Parameters
----------
config : PerfAnalyzerConfig
keys are names of arguments to perf_analyzer,
values are their values.
"""
self.bin_path = "perf_analyzer"
self._config = config
self._output = ""
self._timeout = timeout
def run(self):
"""
Runs the perf analyzer with the
initialized configuration
Returns
-------
List of Records
List of the metrics obtained from this
run of perf_analyzer
Raises
------
PerfAnalyzerException
If subprocess throws CalledProcessError
"""
self._output = ""
for _ in range(MAX_INTERVAL_CHANGES):
command = [self.bin_path]
command += self._config.to_cli_string().replace("=", " ").split()
LOGGER.debug(f"Perf Analyze command: {command}")
if not self._timeout:
LOGGER.debug("Perf Analyze command timeout not set")
else:
LOGGER.debug(f"Perf Analyze command timeout: {self._timeout} [s]")
try:
self._run_with_stream(command=command)
return
except CalledProcessError as e:
if self._failed_with_measurement_interval(e.output):
if self._config["measurement-mode"] is None or self._config["measurement-mode"] == "count_windows":
self._increase_request_count()
else:
self._increase_time_interval()
else:
raise PerfAnalyzerException(
f"Running perf_analyzer with {e.cmd} failed with" f" exit status {e.returncode} : {e.output}"
)
raise PerfAnalyzerException(f"Ran perf_analyzer {MAX_INTERVAL_CHANGES} times, but no valid requests recorded.")
def output(self):
"""
Returns
-------
The stdout output of the
last perf_analyzer run
"""
if self._output:
return self._output
raise PerfAnalyzerException("Attempted to get perf_analyzer output" "without calling run first.")
def _run_with_stream(self, command: List[str]):
commands_lst = []
if self._timeout:
commands_lst = ["timeout", str(self._timeout)]
commands_lst.extend(command)
LOGGER.debug(f"Run with stream: {commands_lst}")
process = Popen(commands_lst, start_new_session=True, stdout=PIPE, encoding="utf-8")
streamed_output = ""
while True:
output = process.stdout.readline()
if output == "" and process.poll() is not None:
break
if output:
streamed_output += output
print(output.rstrip())
self._output += streamed_output
result = process.poll()
LOGGER.debug(f"Perf Analyzer process exited with result: {result}")
# WAR for Perf Analyzer exit code 0 when stabilization failed
if result == 0 and self._failed_with_measurement_interval(streamed_output):
LOGGER.debug("Perf Analyzer finished with exit status 0, however measurement stabilization failed.")
result = 1
if result != 0:
raise CalledProcessError(returncode=result, cmd=commands_lst, output=streamed_output)
def _failed_with_measurement_interval(self, output: str):
checks = [
output.find("Failed to obtain stable measurement"),
output.find("Please use a larger time window"),
]
result = any([status != -1 for status in checks])
LOGGER.debug(f"Measurement stability message validation: {checks}. Result: {result}.")
return result
def _increase_request_count(self):
self._config["measurement-request-count"] += COUNT_INTERVAL_DELTA
LOGGER.debug(
"perf_analyzer's measurement request count is too small, "
f"increased to {self._config['measurement-request-count']}."
)
def _increase_time_interval(self):
self._config["measurement-interval"] += TIME_INTERVAL_DELTA
LOGGER.debug(
"perf_analyzer's measurement window is too small, "
f"increased to {self._config['measurement-interval']} ms."
)
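# Usage sketch (illustrative; `config` is whatever PerfAnalyzerConfig-like object the
# caller builds elsewhere in this toolkit -- the only interface relied on above is
# `to_cli_string()` plus item get/set for the measurement-* keys):
#
# analyzer = PerfAnalyzer(config, timeout=600)
# analyzer.run()              # retries with a larger window/request count when unstable
# print(analyzer.output())    # stdout of the last perf_analyzer run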
|
TensorFlow2/Classification/ConvNets/efficientnet_v1/B0/training/AMP | AMP | convergence_1xA100-80G | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
horovodrun -np 1 bash ./scripts/bind.sh --cpu=exclusive --ib=single -- python3 main.py \
--cfg config/efficientnet_v1/b0_cfg.py \
--mode train_and_eval \
--use_amp \
--use_xla \
--model_dir ./output \
--data_dir /data \
--log_steps 100 \
--max_epochs 500 \
--save_checkpoint_freq 5 \
--train_batch_size 1024 \
--eval_batch_size 1024 \
--augmenter_name autoaugment \
--lr_decay cosine \
--memory_limit 81000 \
--defer_img_mixing \
--moving_average_decay 0.9999 \
--lr_init 0.005
|
Tools/PyTorch/TimeSeriesPredictionPlatform/conf/deployment/export | export | onnx | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
config:
type: onnx
|
PyTorch/Translation/GNMT/scripts/tests | tests | train_1epoch | #!/bin/bash
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
set -e
DATASET_DIR='data/wmt16_de_en'
REPO_DIR='/workspace/gnmt'
REFERENCE_FILE=$REPO_DIR/scripts/tests/reference_training_performance
MATH=$1
if [[ ${MATH} != "fp16" && ${MATH} != "fp32" && ${MATH} != "tf32" ]]; then
echo "Unsupported option for MATH, use either 'fp16' or 'fp32' or 'tf32'"
exit 1
fi
PERF_TOLERANCE=0.9
GPU_NAME=`nvidia-smi --query-gpu=gpu_name --format=csv,noheader |uniq`
echo 'GPU_NAME:' ${GPU_NAME}
GPU_COUNT=`nvidia-smi --query-gpu=gpu_name --format=csv,noheader |wc -l`
echo 'GPU_COUNT:' ${GPU_COUNT}
if [[ ${GPU_COUNT} -eq 1 || ${GPU_COUNT} -eq 2 || ${GPU_COUNT} -eq 4 || ${GPU_COUNT} -eq 8 ]]; then
GLOBAL_BATCH_SIZE=1024
elif [ ${GPU_COUNT} -eq 16 ]; then
GLOBAL_BATCH_SIZE=2048
else
echo "Unsupported number of GPUs"
exit 1
fi
REFERENCE_PERF=`grep "${MATH},${GPU_COUNT},${GPU_NAME}" \
${REFERENCE_FILE} | \cut -f 4 -d ','`
if [ -z "${REFERENCE_PERF}" ]; then
echo "WARNING: COULD NOT FIND REFERENCE PERFORMANCE FOR EXECUTED CONFIG"
TARGET_PERF=''
else
PERF_THRESHOLD=$(awk 'BEGIN {print ('${REFERENCE_PERF}' * '${PERF_TOLERANCE}')}')
TARGET_PERF='--target-perf '${PERF_THRESHOLD}
fi
cd $REPO_DIR
python3 -m torch.distributed.launch --nproc_per_node=${GPU_COUNT} train.py \
--dataset-dir $DATASET_DIR \
--seed 2 \
--epochs 1 \
--remain-steps 1.0 \
--target-bleu 18.00 \
--math ${MATH} \
--train-global-batch-size ${GLOBAL_BATCH_SIZE} \
${TARGET_PERF}
|
TensorFlow/Detection/SSD/models/research/object_detection/builders | builders | input_reader_builder_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for input_reader_builder."""
import os
import numpy as np
import tensorflow as tf
from google.protobuf import text_format
from object_detection.builders import input_reader_builder
from object_detection.core import standard_fields as fields
from object_detection.protos import input_reader_pb2
from object_detection.utils import dataset_util
class InputReaderBuilderTest(tf.test.TestCase):
def create_tf_record(self):
path = os.path.join(self.get_temp_dir(), 'tfrecord')
writer = tf.python_io.TFRecordWriter(path)
image_tensor = np.random.randint(255, size=(4, 5, 3)).astype(np.uint8)
flat_mask = (4 * 5) * [1.0]
with self.test_session():
encoded_jpeg = tf.image.encode_jpeg(tf.constant(image_tensor)).eval()
example = tf.train.Example(features=tf.train.Features(feature={
'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/height': dataset_util.int64_feature(4),
'image/width': dataset_util.int64_feature(5),
'image/object/bbox/xmin': dataset_util.float_list_feature([0.0]),
'image/object/bbox/xmax': dataset_util.float_list_feature([1.0]),
'image/object/bbox/ymin': dataset_util.float_list_feature([0.0]),
'image/object/bbox/ymax': dataset_util.float_list_feature([1.0]),
'image/object/class/label': dataset_util.int64_list_feature([2]),
'image/object/mask': dataset_util.float_list_feature(flat_mask),
}))
writer.write(example.SerializeToString())
writer.close()
return path
def test_build_tf_record_input_reader(self):
tf_record_path = self.create_tf_record()
input_reader_text_proto = """
shuffle: false
num_readers: 1
tf_record_input_reader {{
input_path: '{0}'
}}
""".format(tf_record_path)
input_reader_proto = input_reader_pb2.InputReader()
text_format.Merge(input_reader_text_proto, input_reader_proto)
tensor_dict = input_reader_builder.build(input_reader_proto)
with tf.train.MonitoredSession() as sess:
output_dict = sess.run(tensor_dict)
self.assertTrue(fields.InputDataFields.groundtruth_instance_masks
not in output_dict)
self.assertEquals(
(4, 5, 3), output_dict[fields.InputDataFields.image].shape)
self.assertEquals(
[2], output_dict[fields.InputDataFields.groundtruth_classes])
self.assertEquals(
(1, 4), output_dict[fields.InputDataFields.groundtruth_boxes].shape)
self.assertAllEqual(
[0.0, 0.0, 1.0, 1.0],
output_dict[fields.InputDataFields.groundtruth_boxes][0])
def test_build_tf_record_input_reader_and_load_instance_masks(self):
tf_record_path = self.create_tf_record()
input_reader_text_proto = """
shuffle: false
num_readers: 1
load_instance_masks: true
tf_record_input_reader {{
input_path: '{0}'
}}
""".format(tf_record_path)
input_reader_proto = input_reader_pb2.InputReader()
text_format.Merge(input_reader_text_proto, input_reader_proto)
tensor_dict = input_reader_builder.build(input_reader_proto)
with tf.train.MonitoredSession() as sess:
output_dict = sess.run(tensor_dict)
self.assertEquals(
(4, 5, 3), output_dict[fields.InputDataFields.image].shape)
self.assertEquals(
[2], output_dict[fields.InputDataFields.groundtruth_classes])
self.assertEquals(
(1, 4), output_dict[fields.InputDataFields.groundtruth_boxes].shape)
self.assertAllEqual(
[0.0, 0.0, 1.0, 1.0],
output_dict[fields.InputDataFields.groundtruth_boxes][0])
self.assertAllEqual(
(1, 4, 5),
output_dict[fields.InputDataFields.groundtruth_instance_masks].shape)
def test_raises_error_with_no_input_paths(self):
input_reader_text_proto = """
shuffle: false
num_readers: 1
load_instance_masks: true
"""
input_reader_proto = input_reader_pb2.InputReader()
text_format.Merge(input_reader_text_proto, input_reader_proto)
with self.assertRaises(ValueError):
input_reader_builder.build(input_reader_proto)
if __name__ == '__main__':
tf.test.main()
|
PyTorch/SpeechSynthesis/Tacotron2/tacotron2 | tacotron2 | entrypoints | # *****************************************************************************
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# *****************************************************************************
import urllib.request
import torch
import os
import sys
#from https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechSynthesis/Tacotron2/inference.py
def checkpoint_from_distributed(state_dict):
"""
Checks whether checkpoint was generated by DistributedDataParallel. DDP
wraps model in additional "module.", it needs to be unwrapped for single
GPU inference.
:param state_dict: model's state dict
"""
ret = False
for key, _ in state_dict.items():
if key.find('module.') != -1:
ret = True
break
return ret
# from https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechSynthesis/Tacotron2/inference.py
def unwrap_distributed(state_dict):
"""
Unwraps model from DistributedDataParallel.
DDP wraps model in additional "module.", it needs to be removed for single
GPU inference.
:param state_dict: model's state dict
"""
new_state_dict = {}
for key, value in state_dict.items():
new_key = key.replace('module.1.', '')
new_key = new_key.replace('module.', '')
new_state_dict[new_key] = value
return new_state_dict
def _download_checkpoint(checkpoint, force_reload):
model_dir = os.path.join(torch.hub._get_torch_home(), 'checkpoints')
if not os.path.exists(model_dir):
os.makedirs(model_dir)
ckpt_file = os.path.join(model_dir, os.path.basename(checkpoint))
if not os.path.exists(ckpt_file) or force_reload:
sys.stderr.write('Downloading checkpoint from {}\n'.format(checkpoint))
urllib.request.urlretrieve(checkpoint, ckpt_file)
return ckpt_file
def nvidia_tacotron2(pretrained=True, **kwargs):
"""Constructs a Tacotron 2 model (nn.module with additional infer(input) method).
For detailed information on model input and output, training recipies, inference and performance
visit: github.com/NVIDIA/DeepLearningExamples and/or ngc.nvidia.com
Args (type[, default value]):
pretrained (bool, True): If True, returns a model pretrained on LJ Speech dataset.
model_math (str, 'fp32'): returns a model in given precision ('fp32' or 'fp16')
n_symbols (int, 148): Number of symbols used in a sequence passed to the prenet, see
https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechSynthesis/Tacotron2/tacotron2/text/symbols.py
p_attention_dropout (float, 0.1): dropout probability on attention LSTM (1st LSTM layer in decoder)
p_decoder_dropout (float, 0.1): dropout probability on decoder LSTM (2nd LSTM layer in decoder)
max_decoder_steps (int, 1000): maximum number of generated mel spectrograms during inference
"""
from tacotron2 import model as tacotron2
fp16 = "model_math" in kwargs and kwargs["model_math"] == "fp16"
force_reload = "force_reload" in kwargs and kwargs["force_reload"]
if pretrained:
if fp16:
checkpoint = 'https://api.ngc.nvidia.com/v2/models/nvidia/tacotron2_pyt_ckpt_amp/versions/19.09.0/files/nvidia_tacotron2pyt_fp16_20190427'
else:
checkpoint = 'https://api.ngc.nvidia.com/v2/models/nvidia/tacotron2_pyt_ckpt_fp32/versions/19.09.0/files/nvidia_tacotron2pyt_fp32_20190427'
ckpt_file = _download_checkpoint(checkpoint, force_reload)
ckpt = torch.load(ckpt_file)
state_dict = ckpt['state_dict']
if checkpoint_from_distributed(state_dict):
state_dict = unwrap_distributed(state_dict)
config = ckpt['config']
else:
config = {'mask_padding': False, 'n_mel_channels': 80, 'n_symbols': 148,
'symbols_embedding_dim': 512, 'encoder_kernel_size': 5,
'encoder_n_convolutions': 3, 'encoder_embedding_dim': 512,
'attention_rnn_dim': 1024, 'attention_dim': 128,
'attention_location_n_filters': 32,
'attention_location_kernel_size': 31, 'n_frames_per_step': 1,
'decoder_rnn_dim': 1024, 'prenet_dim': 256,
'max_decoder_steps': 1000, 'gate_threshold': 0.5,
'p_attention_dropout': 0.1, 'p_decoder_dropout': 0.1,
'postnet_embedding_dim': 512, 'postnet_kernel_size': 5,
'postnet_n_convolutions': 5, 'decoder_no_early_stopping': False}
for k,v in kwargs.items():
if k in config.keys():
config[k] = v
m = tacotron2.Tacotron2(**config)
if pretrained:
m.load_state_dict(state_dict)
return m
def nvidia_tts_utils():
class Processing:
from tacotron2.text import text_to_sequence
@staticmethod
def pad_sequences(batch):
# Right zero-pad all one-hot text sequences to max input length
input_lengths, ids_sorted_decreasing = torch.sort(
torch.LongTensor([len(x) for x in batch]),
dim=0, descending=True)
max_input_len = input_lengths[0]
text_padded = torch.LongTensor(len(batch), max_input_len)
text_padded.zero_()
for i in range(len(ids_sorted_decreasing)):
text = batch[ids_sorted_decreasing[i]]
text_padded[i, :text.size(0)] = text
return text_padded, input_lengths
@staticmethod
def prepare_input_sequence(texts, cpu_run=False):
d = []
for i,text in enumerate(texts):
d.append(torch.IntTensor(
Processing.text_to_sequence(text, ['english_cleaners'])[:]))
text_padded, input_lengths = Processing.pad_sequences(d)
if not cpu_run:
text_padded = text_padded.cuda().long()
input_lengths = input_lengths.cuda().long()
else:
text_padded = text_padded.long()
input_lengths = input_lengths.long()
return text_padded, input_lengths
return Processing()
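# Usage sketch (illustrative; the first call downloads the pretrained checkpoint
# from NGC into the torch hub cache, and the exact structure of infer() outputs
# is not asserted here):
#
# model = nvidia_tacotron2(pretrained=True, model_math='fp32')
# model.eval()
# utils = nvidia_tts_utils()
# sequences, lengths = utils.prepare_input_sequence(["Hello world"], cpu_run=True)
# with torch.no_grad():
#     outputs = model.infer(sequences, lengths)  # mel spectrogram(s) plus auxiliary outputs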
|
TensorFlow/Classification/ConvNets/triton | triton | metrics | from typing import Any, Dict, List, Optional
import numpy as np
from deployment_toolkit.core import BaseMetricsCalculator
class MetricsCalculator(BaseMetricsCalculator):
def __init__(self):
self._equals = []
def update(
self,
*,
ids: List[Any],
y_pred: Dict[str, np.ndarray],
x: Optional[Dict[str, np.ndarray]],
y_real: Optional[Dict[str, np.ndarray]],
):
classes_real = y_real["classes"]
classes_pred = y_pred["classes"]
classes_real = np.squeeze(classes_real)
classes_pred = np.squeeze(classes_pred)
assert classes_real.shape == classes_pred.shape, (
f"classes_pred.shape={classes_pred.shape} != " f"classes_real.shape={classes_real.shape}"
)
self._equals.append(classes_real == classes_pred)
@property
def metrics(self) -> Dict[str, Any]:
return {"accuracy": np.concatenate(self._equals, axis=0).mean()} |
PyTorch/Forecasting/TFT/triton/runner/maintainer/docker/containers | containers | triton_server_container | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import pathlib
from threading import Thread
from typing import Dict, Generator, Union
from docker.models.containers import ExecResult
from docker.types import DeviceRequest, Ulimit
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ....logger import LOGGER
from ...exceptions import ContainerNotStarted
from ..container import DockerContainer
class TritonServerContainer(DockerContainer):
def __init__(
self,
name: str,
command: str,
image: str,
volumes: Dict,
devices: Union[list, int],
environment: Dict,
log_file: Union[pathlib.Path, str],
network: str = "host",
shm_size: str = "1G",
):
"""
Initialize Triton Server Container
Args:
name: Container name
command: Triton Server command to exec on container start
image: Docker Image
volumes: Volumes to mount inside container
devices: Devices which has to be visible in container
environment: Environment variables
log_file: Path where logs should be saved
network: Network mode
shm_size: Shared memory size
"""
super().__init__(name)
self._image = image
self._command = command
self._volumes = volumes
self._devices = devices
self._environment = environment
self._network = network
self._shm_size = shm_size
self._triton_exec = None
self._logging_thread = None
self._log_file_path = pathlib.Path(log_file)
def start(self) -> None:
"""
Start Triton Server Container
"""
devices = [
DeviceRequest(capabilities=[["gpu"]], device_ids=self._devices),
]
LOGGER.info(f"Triton environment: {json.dumps(self._environment, indent=4)}")
LOGGER.info(f"Starting Triton container {self.name}.")
self._container = self._docker_client.containers.run(
image=self._image,
name=self.name,
device_requests=devices,
detach=True,
tty=True,
shm_size=self._shm_size,
ulimits=[
Ulimit(name="memlock", soft=-1, hard=-1),
Ulimit(name="stack", soft=67108864, hard=67108864),
],
volumes=self._volumes,
environment=self._environment,
network_mode=self._network,
auto_remove=True,
ipc_mode="host",
)
LOGGER.info(f"Triton command:")
LOGGER.info(f" {self._command}")
LOGGER.info(f"Starting Triton Server {self.name}.")
self._triton_exec = self._docker_api_client.exec_create(
container=self._container.id,
cmd=self._command,
)
stream_generator = self._docker_api_client.exec_start(exec_id=self._triton_exec["Id"], stream=True)
self._logging_thread = Thread(target=TritonServerContainer._logging, args=(self, stream_generator), daemon=True)
self._logging_thread.start()
def stop(self) -> None:
"""
Stop Triton Server Container and save logs to file
"""
if self._container is not None:
triton_result = self._docker_api_client.exec_inspect(self._triton_exec["Id"])
if triton_result.get("ExitCode") not in (0, None):
LOGGER.info(
f"Triton Inference Server instance {self.name} failed. Exit code: {triton_result.get('ExitCode')}"
)
LOGGER.info(f"Stopping triton server {self.name}.")
self._container.stop()
self._container = None
self._docker_client.close()
self._docker_api_client.close()
def run(self, command: str) -> ExecResult:
"""
Run command in container
Args:
command: Command to execute
Returns:
ExecResult
"""
if not self._container:
raise ContainerNotStarted("Triton Server Container is not running. Use .start() first.")
return self._container.exec_run(command)
def _logging(self, generator: Generator) -> None:
"""Triton logging thread for Triton Inference Server
Args:
generator (string generator): Triton log stream.
"""
with open(self._log_file_path, mode="w") as file:
try:
while True:
log = next(generator)
txt = log.decode("utf-8")
file.write(txt)
except StopIteration:
LOGGER.info(f"Saving Triton Inference Server {self.name} logs in {self._log_file_path}.")
|
PyTorch/Segmentation/MaskRCNN/pytorch/configs/gn_baselines | gn_baselines | e2e_mask_rcnn_R_50_FPN_Xconv1fc_1x_gn | INPUT:
MIN_SIZE_TRAIN: 800
MAX_SIZE_TRAIN: 1333
MIN_SIZE_TEST: 800
MAX_SIZE_TEST: 1333
MODEL:
META_ARCHITECTURE: "GeneralizedRCNN"
WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50-GN"
BACKBONE:
CONV_BODY: "R-50-FPN"
OUT_CHANNELS: 256
RESNETS: # use GN for backbone
TRANS_FUNC: "BottleneckWithGN"
STEM_FUNC: "StemWithGN"
FPN:
USE_GN: True # use GN for FPN
RPN:
USE_FPN: True
ANCHOR_STRIDE: (4, 8, 16, 32, 64)
PRE_NMS_TOP_N_TRAIN: 2000
PRE_NMS_TOP_N_TEST: 1000
POST_NMS_TOP_N_TEST: 1000
FPN_POST_NMS_TOP_N_TEST: 1000
ROI_HEADS:
USE_FPN: True
BATCH_SIZE_PER_IMAGE: 512
POSITIVE_FRACTION: 0.25
ROI_BOX_HEAD:
USE_GN: True # use GN for bbox head
POOLER_RESOLUTION: 7
POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
POOLER_SAMPLING_RATIO: 2
CONV_HEAD_DIM: 256
NUM_STACKED_CONVS: 4
FEATURE_EXTRACTOR: "FPNXconv1fcFeatureExtractor"
PREDICTOR: "FPNPredictor"
ROI_MASK_HEAD:
USE_GN: True # use GN for mask head
POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
CONV_LAYERS: (256, 256, 256, 256)
FEATURE_EXTRACTOR: "MaskRCNNFPNFeatureExtractor"
PREDICTOR: "MaskRCNNC4Predictor"
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 2
RESOLUTION: 28
SHARE_BOX_FEATURE_EXTRACTOR: False
MASK_ON: True
DATASETS:
TRAIN: ("coco_2014_train", "coco_2014_valminusminival")
TEST: ("coco_2014_minival",)
DATALOADER:
SIZE_DIVISIBILITY: 32
SOLVER:
# Assume 8 gpus
BASE_LR: 0.02
WEIGHT_DECAY: 0.0001
STEPS: (60000, 80000)
MAX_ITER: 90000
IMS_PER_BATCH: 16
TEST:
IMS_PER_BATCH: 8 |
TensorFlow2/Segmentation/Contrib/UNet3P/utils | utils | images_utils | """
Utility functions for image processing
"""
import numpy as np
import cv2
from omegaconf import DictConfig
import matplotlib.pyplot as plt
def read_image(img_path, color_mode):
"""
    Read an image from the given path and return it as a NumPy array.
    Color images are returned in BGR channel order (OpenCV default).
"""
return cv2.imread(img_path, color_mode)
def resize_image(img, height, width, resize_method=cv2.INTER_CUBIC):
"""
Resize image
"""
return cv2.resize(img, dsize=(width, height), interpolation=resize_method)
def prepare_image(path: str, resize: DictConfig, normalize_type: str):
"""
Prepare image for model.
read image --> resize --> normalize --> return as float32
"""
image = read_image(path, cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if resize.VALUE:
# TODO verify image resizing method
image = resize_image(image, resize.HEIGHT, resize.WIDTH, cv2.INTER_AREA)
if normalize_type == "normalize":
image = image / 255.0
image = image.astype(np.float32)
return image
def prepare_mask(path: str, resize: dict, normalize_mask: dict):
"""
Prepare mask for model.
read mask --> resize --> normalize --> return as int32
"""
mask = read_image(path, cv2.IMREAD_GRAYSCALE)
if resize.VALUE:
mask = resize_image(mask, resize.HEIGHT, resize.WIDTH, cv2.INTER_NEAREST)
if normalize_mask.VALUE:
mask = mask / normalize_mask.NORMALIZE_VALUE
mask = mask.astype(np.int32)
return mask
def image_to_mask_name(image_name: str):
"""
    Convert an image file name to its corresponding mask file name, e.g.
image name --> mask name
image_28_0.png mask_28_0.png
replace image with mask
"""
return image_name.replace('image', 'mask')
def postprocess_mask(mask, classes, output_type=np.int32):
"""
Post process model output.
    Convert probabilities into class indexes based on the maximum value.
"""
if classes == 1:
mask = np.where(mask > .5, 1.0, 0.0)
else:
mask = np.argmax(mask, axis=-1)
return mask.astype(output_type)
def denormalize_mask(mask, classes):
"""
    Denormalize mask by multiplying each class index by a larger
    integer (255 / classes) for better visualization.
"""
mask = mask * (255 / classes)
return mask.astype(np.int32)
def display(display_list, show_true_mask=False):
"""
    Show a list of images; it can be either
    [image, true_mask, predicted_mask] or [image, predicted_mask].
    Set show_true_mask to True when the true mask is available, otherwise False.
"""
if show_true_mask:
title_list = ('Input Image', 'True Mask', 'Predicted Mask')
plt.figure(figsize=(12, 4))
else:
title_list = ('Input Image', 'Predicted Mask')
plt.figure(figsize=(8, 4))
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i + 1)
if title_list is not None:
plt.title(title_list[i])
if len(np.squeeze(display_list[i]).shape) == 2:
plt.imshow(np.squeeze(display_list[i]), cmap='gray')
plt.axis('on')
else:
plt.imshow(np.squeeze(display_list[i]))
plt.axis('on')
plt.show()
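# ---------------------------------------------------------------------------
# Usage sketch (illustrative; the file paths and resize/normalization settings
# below are assumptions, normally they come from the project's OmegaConf
# config):
#
# from omegaconf import OmegaConf
#
# resize_cfg = OmegaConf.create({"VALUE": True, "HEIGHT": 320, "WIDTH": 320})
# mask_norm_cfg = OmegaConf.create({"VALUE": True, "NORMALIZE_VALUE": 255})
#
# image = prepare_image("data/image_28_0.png", resize_cfg, "normalize")
# mask = prepare_mask(image_to_mask_name("data/image_28_0.png"), resize_cfg,
#                     mask_norm_cfg)
# display([image, mask], show_true_mask=False)
# ---------------------------------------------------------------------------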
|
TensorFlow2/Segmentation/nnUNet/runtime | runtime | run | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import horovod.tensorflow as hvd
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from runtime.checkpoint import CheckpointManager
from runtime.losses import DiceCELoss, WeightDecay
from runtime.metrics import Dice, MetricAggregator, make_class_logger_metrics
from runtime.utils import is_main_process, make_empty_dir, progress_bar
def update_best_metrics(old, new, start_time, iteration, watch_metric=None):
did_change = False
for metric, value in new.items():
if metric not in old or old[metric]["value"] < value:
old[metric] = {"value": value, "timestamp": time.time() - start_time, "iter": int(iteration)}
if watch_metric == metric:
did_change = True
return did_change
def get_scheduler(args, total_steps):
scheduler = {
"poly": tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=args.learning_rate,
end_learning_rate=args.end_learning_rate,
decay_steps=total_steps,
power=0.9,
),
"cosine": tf.keras.optimizers.schedules.CosineDecay(
initial_learning_rate=args.learning_rate, decay_steps=total_steps
),
"cosine_annealing": tf.keras.optimizers.schedules.CosineDecayRestarts(
initial_learning_rate=args.learning_rate,
first_decay_steps=args.cosine_annealing_first_cycle_steps,
alpha=0.1,
),
"none": args.learning_rate,
}[args.scheduler.lower()]
return scheduler
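# Illustrative example: the `args` object normally comes from this project's
# argument parser; the SimpleNamespace below only mimics the fields read here.
#
#   from types import SimpleNamespace
#   args = SimpleNamespace(scheduler="poly", learning_rate=3e-4,
#                          end_learning_rate=1e-5,
#                          cosine_annealing_first_cycle_steps=512)
#   lr_schedule = get_scheduler(args, total_steps=10_000)
#   print(float(lr_schedule(0)), float(lr_schedule(10_000)))  # 3e-4 ... 1e-5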
def get_optimizer(args, scheduler):
optimizer = {
"sgd": tf.keras.optimizers.SGD(learning_rate=scheduler, momentum=args.momentum),
"adam": tf.keras.optimizers.Adam(learning_rate=scheduler),
"radam": tfa.optimizers.RectifiedAdam(learning_rate=scheduler),
}[args.optimizer.lower()]
if args.lookahead:
optimizer = tfa.optimizers.Lookahead(optimizer)
if args.amp:
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer, dynamic=True)
return optimizer
def get_epoch_size(args, batch_size, dataset_size):
if args.steps_per_epoch:
return args.steps_per_epoch
div = args.gpus * (batch_size if args.dim == 3 else args.nvol)
return (dataset_size + div - 1) // div
def process_performance_stats(deltas, batch_size, mode):
deltas_ms = 1000 * np.array(deltas)
throughput_imgps = 1000.0 * batch_size / deltas_ms.mean()
stats = {f"throughput_{mode}": throughput_imgps, f"latency_{mode}_mean": deltas_ms.mean()}
for level in [90, 95, 99]:
stats.update({f"latency_{mode}_{level}": np.percentile(deltas_ms, level)})
return stats
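# Illustrative example: turning raw per-step latencies (in seconds) into the
# summary dict that benchmark() logs. The numbers are made up.
#
#   deltas = [0.050, 0.052, 0.049, 0.055]
#   stats = process_performance_stats(deltas, batch_size=2, mode="train")
#   # {'throughput_train': ..., 'latency_train_mean': ...,
#   #  'latency_train_90': ..., 'latency_train_95': ..., 'latency_train_99': ...}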
def benchmark(args, step_fn, data, steps, warmup_steps, logger, mode="train"):
    assert steps > warmup_steps, "Number of benchmarked steps has to be greater than the number of warmup steps"
deltas = []
wrapped_data = progress_bar(
enumerate(data),
quiet=args.quiet,
desc=f"Benchmark ({mode})",
unit="step",
postfix={"phase": "warmup"},
total=steps,
)
start = time.perf_counter()
for step, (images, labels) in wrapped_data:
output_map = step_fn(images, labels, warmup_batch=step == 0)
if step >= warmup_steps:
deltas.append(time.perf_counter() - start)
if step == warmup_steps and is_main_process() and not args.quiet:
wrapped_data.set_postfix(phase="benchmark")
start = time.perf_counter()
if step >= steps:
break
stats = process_performance_stats(deltas, args.gpus * args.batch_size, mode=mode)
logger.log_metrics(stats)
def train(args, model, dataset, logger):
train_data = dataset.train_dataset()
epochs = args.epochs
batch_size = args.batch_size if args.dim == 3 else args.nvol
steps_per_epoch = get_epoch_size(args, batch_size, dataset.train_size())
total_steps = epochs * steps_per_epoch
scheduler = get_scheduler(args, total_steps)
optimizer = get_optimizer(args, scheduler)
loss_fn = DiceCELoss(
y_one_hot=True,
reduce_batch=args.reduce_batch,
include_background=args.include_background,
)
wdecay = WeightDecay(factor=args.weight_decay)
tstep = tf.Variable(0)
@tf.function
def train_step_fn(features, labels, warmup_batch=False):
features, labels = model.adjust_batch(features, labels)
with tf.GradientTape() as tape:
output_map = model(features)
dice_loss = model.compute_loss(loss_fn, labels, output_map)
loss = dice_loss + wdecay(model)
if args.amp:
loss = optimizer.get_scaled_loss(loss)
tape = hvd.DistributedGradientTape(tape)
gradients = tape.gradient(loss, model.trainable_variables)
if args.amp:
gradients = optimizer.get_unscaled_gradients(gradients)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Note: broadcast should be done after the first gradient step to ensure optimizer initialization.
if warmup_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(optimizer.variables(), root_rank=0)
return dice_loss
dice_metrics = MetricAggregator(name="dice")
checkpoint = CheckpointManager(
args.ckpt_dir,
strategy=args.ckpt_strategy,
resume_training=args.resume_training,
variables={"model": model, "optimizer": optimizer, "step": tstep, **dice_metrics.checkpoint_metrics()},
)
if args.benchmark:
benchmark(args, train_step_fn, train_data, args.bench_steps, args.warmup_steps, logger)
else:
wrapped_data = progress_bar(
train_data,
quiet=args.quiet,
desc="Train",
postfix={"epoch": 1},
unit="step",
total=total_steps - int(tstep),
)
start_time = time.time()
total_train_loss, dice_score = 0.0, 0.0
for images, labels in wrapped_data:
if tstep >= total_steps:
break
tstep.assign_add(1)
loss = train_step_fn(images, labels, warmup_batch=tstep == 1)
total_train_loss += float(loss)
lr = scheduler(tstep) if callable(scheduler) else scheduler
metrics = {"loss": float(loss), "learning_rate": float(lr)}
if tstep % steps_per_epoch == 0:
epoch = int(tstep // steps_per_epoch)
if epoch > args.skip_eval:
dice = evaluate(args, model, dataset, logger)
dice_score = tf.reduce_mean(dice[1:])
did_improve = dice_metrics.update(dice_score)
metrics = dice_metrics.logger_metrics()
metrics.update(make_class_logger_metrics(dice))
if did_improve:
metrics["time_to_train"] = time.time() - start_time
logger.log_metrics(metrics=metrics, step=int(tstep))
checkpoint.update(float(dice_score))
logger.flush()
else:
checkpoint.update(None)
if is_main_process() and not args.quiet:
wrapped_data.set_postfix(epoch=epoch + 1)
elif tstep % steps_per_epoch == 0:
total_train_loss = 0.0
metrics = {
"train_loss": round(total_train_loss / steps_per_epoch, 5),
"val_loss": round(1 - float(dice_score), 5),
"dice": round(float(dice_metrics.metrics["max"].result()), 5),
}
logger.log_metrics(metrics=metrics)
logger.flush()
def evaluate(args, model, dataset, logger):
dice = Dice(n_class=model.n_class)
data_size = dataset.val_size()
wrapped_data = progress_bar(
enumerate(dataset.val_dataset()),
quiet=args.quiet,
desc="Validation",
unit="step",
total=data_size,
)
for i, (features, labels) in wrapped_data:
if args.dim == 2:
features, labels = features[0], labels[0]
output_map = model.inference(features)
dice.update_state(output_map, labels)
if i + 1 == data_size:
break
result = dice.result()
if args.exec_mode == "evaluate":
metrics = {
"eval_dice": float(tf.reduce_mean(result)),
"eval_dice_nobg": float(tf.reduce_mean(result[1:])),
}
logger.log_metrics(metrics)
return result
def predict(args, model, dataset, logger):
if args.benchmark:
@tf.function
def predict_bench_fn(features, labels, warmup_batch):
if args.dim == 2:
features = features[0]
output_map = model(features, training=False)
return output_map
benchmark(
args,
predict_bench_fn,
dataset.test_dataset(),
args.bench_steps,
args.warmup_steps,
logger,
mode="predict",
)
else:
if args.save_preds:
prec = "amp" if args.amp else "fp32"
dir_name = f"preds_task_{args.task}_dim_{args.dim}_fold_{args.fold}_{prec}"
if args.tta:
dir_name += "_tta"
save_dir = args.results / dir_name
make_empty_dir(save_dir)
data_size = dataset.test_size()
wrapped_data = progress_bar(
enumerate(dataset.test_dataset()),
quiet=args.quiet,
desc="Predict",
unit="step",
total=data_size,
)
for i, (images, meta) in wrapped_data:
features, _ = model.adjust_batch(images, None)
pred = model.inference(features, training=False)
if args.save_preds:
model.save_pred(pred, meta, idx=i, data_module=dataset, save_dir=save_dir)
if i + 1 == data_size:
break
def export_model(args, model):
checkpoint = tf.train.Checkpoint(model=model)
checkpoint.restore(tf.train.latest_checkpoint(args.ckpt_dir)).expect_partial()
input_shape = [1, *model.patch_size, model.n_class]
dummy_input = tf.constant(tf.zeros(input_shape, dtype=tf.float32))
_ = model(dummy_input, training=False)
prec = "amp" if args.amp else "fp32"
path = str(args.results / f"saved_model_task_{args.task}_dim_{args.dim}_{prec}")
tf.keras.models.save_model(model, str(path))
trt_prec = trt.TrtPrecisionMode.FP32 if prec == "fp32" else trt.TrtPrecisionMode.FP16
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=path,
conversion_params=trt.TrtConversionParams(precision_mode=trt_prec),
)
converter.convert()
trt_path = str(args.results / f"trt_saved_model_task_{args.task}_dim_{args.dim}_{prec}")
converter.save(trt_path)
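# Illustrative sketch of loading the exported TF-TRT SavedModel back for
# inference (the path, input shape and input name below are assumptions):
#
#   saved = tf.saved_model.load("/results/trt_saved_model_task_01_dim_3_amp")
#   infer = saved.signatures["serving_default"]
#   # The concrete function is called with keyword arguments named after the
#   # model inputs; inspect infer.structured_input_signature for the exact name.
#   dummy = tf.zeros([1, 128, 128, 128, 4], dtype=tf.float32)
#   outputs = infer(input_1=dummy)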
|
PyTorch/Translation/GNMT/seq2seq/models | models | encoder | # Copyright (c) 2017 Elad Hoffer
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence
from torch.nn.utils.rnn import pad_packed_sequence
import seq2seq.data.config as config
from seq2seq.utils import init_lstm_
class ResidualRecurrentEncoder(nn.Module):
"""
Encoder with Embedding, LSTM layers, residual connections and optional
dropout.
    The first LSTM layer is bidirectional and uses the variable sequence
    length API; the remaining (num_layers - 1) layers are unidirectional.
    Residual connections are enabled after the third LSTM layer, and dropout
    is applied to the inputs of the LSTM layers.
"""
def __init__(self, vocab_size, hidden_size=1024, num_layers=4, dropout=0.2,
batch_first=False, embedder=None, init_weight=0.1):
"""
Constructor for the ResidualRecurrentEncoder.
:param vocab_size: size of vocabulary
:param hidden_size: hidden size for LSTM layers
:param num_layers: number of LSTM layers, 1st layer is bidirectional
:param dropout: probability of dropout (on input to LSTM layers)
:param batch_first: if True the model uses (batch,seq,feature) tensors,
if false the model uses (seq, batch, feature)
:param embedder: instance of nn.Embedding, if None constructor will
create new embedding layer
:param init_weight: range for the uniform initializer
"""
super(ResidualRecurrentEncoder, self).__init__()
self.batch_first = batch_first
self.rnn_layers = nn.ModuleList()
# 1st LSTM layer, bidirectional
self.rnn_layers.append(
nn.LSTM(hidden_size, hidden_size, num_layers=1, bias=True,
batch_first=batch_first, bidirectional=True))
# 2nd LSTM layer, with 2x larger input_size
self.rnn_layers.append(
nn.LSTM((2 * hidden_size), hidden_size, num_layers=1, bias=True,
batch_first=batch_first))
# Remaining LSTM layers
for _ in range(num_layers - 2):
self.rnn_layers.append(
nn.LSTM(hidden_size, hidden_size, num_layers=1, bias=True,
batch_first=batch_first))
for lstm in self.rnn_layers:
init_lstm_(lstm, init_weight)
self.dropout = nn.Dropout(p=dropout)
if embedder is not None:
self.embedder = embedder
else:
self.embedder = nn.Embedding(vocab_size, hidden_size,
padding_idx=config.PAD)
nn.init.uniform_(self.embedder.weight.data, -init_weight,
init_weight)
def forward(self, inputs, lengths):
"""
Execute the encoder.
:param inputs: tensor with indices from the vocabulary
:param lengths: vector with sequence lengths (excluding padding)
returns: tensor with encoded sequences
"""
x = self.embedder(inputs)
# bidirectional layer
x = self.dropout(x)
x = pack_padded_sequence(x, lengths.cpu().numpy(),
batch_first=self.batch_first)
x, _ = self.rnn_layers[0](x)
x, _ = pad_packed_sequence(x, batch_first=self.batch_first)
# 1st unidirectional layer
x = self.dropout(x)
x, _ = self.rnn_layers[1](x)
# the rest of unidirectional layers,
# with residual connections starting from 3rd layer
for i in range(2, len(self.rnn_layers)):
residual = x
x = self.dropout(x)
x, _ = self.rnn_layers[i](x)
x = x + residual
return x
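# ---------------------------------------------------------------------------
# Minimal shape-check sketch (illustrative; vocabulary size, batch size and
# sequence lengths are arbitrary):
#
#   import torch
#   encoder = ResidualRecurrentEncoder(vocab_size=1000, hidden_size=64,
#                                      num_layers=4, dropout=0.0)
#   inputs = torch.randint(0, 1000, (7, 3))      # (seq, batch), batch_first=False
#   lengths = torch.tensor([7, 5, 4])            # sorted in decreasing order
#   out = encoder(inputs, lengths)
#   print(out.shape)                             # torch.Size([7, 3, 64])
# ---------------------------------------------------------------------------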
|
CUDA-Optimized/FastSpeech/fastspeech/trt | trt | fastspeech_trt_inferencer | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import ctypes
import glob
import os
import pathlib
import sys
from collections import OrderedDict
import numpy as np
import pycuda.driver as cuda
import tensorrt as trt
import torch
import torch.nn as nn
import torch.nn.functional as F
from tensorrt import Dims, ElementWiseOperation, MatrixOperation, Weights
import fastspeech.trt.common as common
from fastspeech.trt import TRT_BASE_PATH, TRT_LOGGER
from fastspeech.trt.trt_inferencer import TRTInferencer
from fastspeech.utils.logging import tprint
from fastspeech.utils.nvtx import Nvtx
from fastspeech.utils.pytorch import (remove_module_in_state_dict,
to_cpu_numpy, to_gpu_async)
class FastSpeechTRTInferencer(TRTInferencer):
def __init__(self, model_name, model, data_loader, ckpt_path=None, ckpt_file=None,
trt_max_ws_size=1, trt_file_path=None, trt_force_build=False, use_fp16=False,
trt_max_input_seq_len=256, trt_max_output_seq_len=1024, validate_accuracy=False):
self.trt_max_input_seq_len = trt_max_input_seq_len
self.trt_max_output_seq_len = trt_max_output_seq_len
self.validate_accuracy = validate_accuracy
self.load_plugin(os.path.join(TRT_BASE_PATH, 'plugins/repeat/RepeatPlugin.so'))
self.load_plugin(os.path.join(TRT_BASE_PATH, 'plugins/add_pos_enc/AddPosEncPlugin.so'))
super(FastSpeechTRTInferencer, self).__init__(model_name, model, data_loader, ckpt_path, ckpt_file, trt_max_ws_size, trt_file_path, trt_force_build, use_fp16)
def build_engine(self):
engine = None
if self.trt_file_path and os.path.isfile(self.trt_file_path) and not self.trt_force_build:
with open(self.trt_file_path, 'rb') as f:
engine_str = f.read()
with trt.Runtime(TRT_LOGGER) as runtime:
engine = runtime.deserialize_cuda_engine(engine_str)
if engine:
tprint('TRT Engine Loaded from {} successfully.'.format(self.trt_file_path))
return engine
else:
tprint('Loading TRT Engine from {} failed.'.format(self.trt_file_path))
tprint('Building a TRT Engine..')
engine = self.do_build_engine()
tprint('TRT Engine Built.')
if self.trt_file_path:
with open(self.trt_file_path, 'wb') as f:
f.write(engine.serialize())
tprint('TRT Engine Saved in {}.'.format(self.trt_file_path))
return engine
def create_plugins(self):
# create "adding positional encoding" plugin
self.plugins['AddPosEncPlugin'] = self.get_plugin_creator(
'AddPosEncPlugin').create_plugin('AddPosEncPlugin', trt.PluginFieldCollection())
# create "repeat" plugin
self.plugins['RepeatPlugin'] = self.get_plugin_creator('RepeatPlugin').create_plugin('RepeatPlugin', trt.PluginFieldCollection([
trt.PluginField('maxOutputLength', np.array(
[self.trt_max_output_seq_len], dtype=np.int32), trt.PluginFieldType.INT32)
]))
def do_build_engine(self):
weights = self.model.state_dict()
weights = self.preprocess_weights(weights)
self.create_plugins()
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network(flags) as network:
builder.max_workspace_size = common.GiB(self.trt_max_ws_size)
builder.fp16_mode = self.use_fp16
# builder.strict_type_constraints = True
network = self.populate_network(network, weights, self.batch_size, self.trt_max_input_seq_len, self.trt_max_output_seq_len)
return builder.build_cuda_engine(network)
def infer(self, acts=None):
inputs = next(self.data_loader_iter)
text_encoded = inputs["text_encoded"] # (b, t)
text_pos = inputs["text_pos"] # (b, t)
text_encoded = F.pad(text_encoded, pad=(0, self.trt_max_input_seq_len - text_encoded.size(1))) # (b, t)
text_pos = F.pad(text_pos, pad=(0, self.trt_max_input_seq_len - text_pos.size(1))) # (b, t)
text_mask = text_pos.ne(0) # padded is False
# TODO: process word emb in TRT if the API allows.
with torch.no_grad():
text_encoded = self.model.word_emb(text_encoded)
if self.use_fp16:
text_encoded = text_encoded.half()
# create input/output buffers
input_buffers = common.create_inputs_from_torch(self.engine, [text_encoded, text_mask])
output_buffers = common.create_outputs_from_torch(self.engine)
# execute
# self.context.profiler = trt.Profiler()
stream = cuda.Stream()
bindings = [int(data.data_ptr()) for data in (input_buffers + output_buffers)]
self.context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
# self.context.execute(batch_size=self.batch_size, bindings=bindings)
stream.synchronize()
outputs = dict()
outputs['mel'] = output_buffers[-2]
outputs['mel_mask'] = output_buffers[-1]
outputs['text'] = inputs["text_norm"]
# activations for verifying accuracy.
if acts is not None:
act_names = common.trt_output_names(self.engine)
n_acts = len(output_buffers) - 2 # exclude outputs(mel and mel_mask)
for i in range(n_acts):
acts[act_names[i]] = output_buffers[i]
return outputs
def add_activation_as_output(self, network, tensor, tensor_name):
tensor.name = tensor_name
network.mark_output(tensor=tensor)
def populate_network(self, network, weights, batch_size, trt_max_input_seq_len, trt_max_output_seq_len):
d_model = self.model.d_model
##
# Inputs
##
out_seq = network.add_input(
name="input_seq", dtype=trt.float32, shape=(batch_size, trt_max_input_seq_len, d_model)) # (b, t, d_model)
#
zeros = network.add_constant(weights=Weights(
np.zeros(shape=(batch_size, trt_max_input_seq_len, 1), dtype=np.float32)),
shape=(batch_size, trt_max_input_seq_len, 1)) # (b, t, 1)
out_zeros = zeros.get_output(0) # (b, t, 1)
seq = network.add_elementwise(input1=out_seq, input2=out_zeros, op=trt.ElementWiseOperation.SUM)
out_seq = seq.get_output(0) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq, "act.emb")
#
out_seq_mask = network.add_input( # paddings are False
name="input_mask", dtype=trt.bool, shape=(batch_size, trt_max_input_seq_len, 1)) # (b, t, 1)
##
# Phoneme-side FFT Blocks
##
# Positional Encoding
# The plugin adds positional encoding to the padding values also (for better performance), whereas Pytorch impl does not.
# It's fine because the padding values will be eventually masked out in coming layers, giving accurate output.
seq = network.add_plugin_v2([out_seq], self.get_plugin('AddPosEncPlugin'))
seq.name = "phoneme_side.add_pos_enc"
out_seq = seq.get_output(0) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq, "act.phoneme_side.add_pos_enc")
for layer_idx in range(self.model.phoneme_side_n_layer):
out_seq = self.populate_fft(name='phoneme_side.layer_stack.{}'.format(layer_idx),
network=network,
weights=weights,
seq_tensor=out_seq,
seq_mask_tensor=out_seq_mask,
batch_size=self.batch_size,
max_seq_len=trt_max_input_seq_len,
d_model=d_model,
n_heads=self.model.phoneme_side_head,
d_k=self.model.phoneme_side.d_k,
d_v=self.model.phoneme_side.d_v,
self_attn_temp=self.model.phoneme_side.d_k**0.5,
conv_filter_size=self.model.phoneme_side_conv1d_filter_size,
conv_kernel_size=self.model.fft_conv1d_kernel,
conv_padding=self.model.fft_conv1d_padding)
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq, "act.phoneme_side.seq")
out_seq, out_seq_mask, out_dur = self.populate_length_regulator(name="length_regulator",
network=network,
weights=weights,
seq_tensor=out_seq,
seq_mask_tensor=out_seq_mask,
batch_size=batch_size,
trt_max_input_seq_len=trt_max_input_seq_len,
trt_max_output_seq_len=trt_max_output_seq_len,
d_model=d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq, "act.length_regulator.seq")
self.add_activation_as_output(network, out_dur, "act.length_regulator.dur")
##
# Mel-side FFT Blocks
##
# Type int to bool: out_seq_mask. TODO: remove if bool output is allowed in the plugin.
ones = network.add_constant(weights=Weights(
np.ones(shape=(batch_size, trt_max_output_seq_len, 1), dtype=np.int32)),
shape=(batch_size, trt_max_output_seq_len, 1)) # (b, t, 1)
out_ones = ones.get_output(0) # (b, t, 1)
seq_mask = network.add_elementwise(input1=out_seq_mask,
input2=out_ones,
op=ElementWiseOperation.EQUAL) # (b, t, 1)
seq_mask.name = "mel_side.seq_mask"
out_seq_mask = seq_mask.get_output(0)
# Positional Encoding
seq = network.add_plugin_v2([out_seq], self.get_plugin('AddPosEncPlugin'))
seq.name = "mel_side.add_pos_enc"
out_seq = seq.get_output(0)
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq, "act.mel_side.add_pos_enc")
for layer_idx in range(self.model.mel_side_n_layer):
out_seq = self.populate_fft(name="mel_side.layer_stack.{}".format(layer_idx),
network=network,
weights=weights,
seq_tensor=out_seq,
seq_mask_tensor=out_seq_mask,
batch_size=self.batch_size,
max_seq_len=trt_max_output_seq_len,
d_model=d_model,
n_heads=self.model.mel_side_head,
d_k=self.model.mel_side.d_k,
d_v=self.model.mel_side.d_v,
self_attn_temp=self.model.mel_side.d_k**0.5,
conv_filter_size=self.model.mel_side_conv1d_filter_size,
conv_kernel_size=self.model.fft_conv1d_kernel,
conv_padding=self.model.fft_conv1d_padding)
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq, "act.mel_side.seq")
##
# Linear
##
# Pytorch: self.mel_linear = nn.Linear(mel_side_output_size, n_mels, bias=True)
w = weights["mel_linear.weight"] # (n_mels, d_model)
out_w = network.add_constant(shape=(1, self.model.n_mels, d_model), weights=trt.Weights(w)).get_output(0) # (1, n_mels, d_model)
linear_w = network.add_matrix_multiply(out_seq, MatrixOperation.NONE, out_w, MatrixOperation.TRANSPOSE) # (b, t, d_model) * (1->b, d_model, n_mels) => (b, t, n_mels)
linear_w.name = "linear.w"
out_seq = linear_w.get_output(0) # (b, t, n_mels)
b = weights["mel_linear.bias"] # (n_mels,)
out_b = network.add_constant(shape=(1, 1, self.model.n_mels), weights=trt.Weights(b)).get_output(0) # (1, 1, n_mels)
linear_b = network.add_elementwise(input1=out_seq, input2=out_b, op=trt.ElementWiseOperation.SUM)
linear_b.name = "linear.b"
out_seq = linear_b.get_output(0) # (b, t, n_mels)
##
# Outputs
##
if self.validate_accuracy:
self.add_activation_as_output(network, out_seq_mask, "out.seq_mask")
self.add_activation_as_output(network, out_seq, "out.seq")
seq = network.add_shuffle(input=out_seq) # (b, t, n_mels) to (b, n_mels, t)
seq.reshape_dims = Dims((batch_size, trt_max_output_seq_len, self.model.n_mels))
seq.second_transpose = trt.Permutation([0, 2, 1])
seq.name = "trans_seq"
out_seq = seq.get_output(0)
seq_mask = network.add_shuffle(input=out_seq_mask) # (b, t, 1) to (b, t)
seq_mask.reshape_dims = Dims((batch_size, trt_max_output_seq_len))
out_seq_mask = seq_mask.get_output(0) # (b, t)
network.mark_output(tensor=out_seq) # (b, n_mels, t)
network.mark_output(tensor=out_seq_mask) # (b, t)
return network
def populate_fft(self, name, network, weights, seq_tensor, seq_mask_tensor, batch_size,
max_seq_len, d_model, n_heads, d_k, d_v, self_attn_temp,
conv_filter_size, conv_kernel_size, conv_padding):
# Self attn
out = self.populate_slf_attn("{}.slf_attn".format(name), network, weights, seq_tensor, seq_mask_tensor, batch_size,
max_seq_len, d_model, n_heads, d_k, d_v) # (b, t, d_model)
# Masking
zeros = network.add_constant(weights=Weights(
np.zeros(shape=(batch_size, max_seq_len, 1), dtype=np.float32)),
shape=(batch_size, max_seq_len, 1)) # (b, t, 1)
out_zeros = zeros.get_output(0) # (b, t, 1)
seq = network.add_select(condition=seq_mask_tensor, then_input=out, else_input=out_zeros)
seq.name = "{}.mask1".format(name)
out = seq.get_output(0) # (b, t, d_model)
# Position-wise
out = self.populate_pos_wise("{}.pos_ffn".format(name), network, weights, out,
batch_size, max_seq_len, d_model,
conv_filter_size, conv_kernel_size, conv_padding) # (b, t, d_model)
# Masking
seq = network.add_select(condition=seq_mask_tensor, then_input=out, else_input=out_zeros)
seq.name = "{}.mask2".format(name)
out = seq.get_output(0) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}".format(name))
return out
def populate_slf_attn(self, name, network, weights, seq_tensor, seq_mask_tensor, batch_size,
max_seq_len, d_model, n_heads, d_k, d_v):
d_qkv = d_k + d_k + d_v
# Pytorch: x = self.linear(x)
w = weights["{}.linear.weight".format(name)] # (n_heads * d_qkv, d_model)
out_w = network.add_constant(shape=(1, d_model, n_heads * d_qkv), weights=trt.Weights(w)).get_output(0) # (1, n_heads * d_qkv, d_model)
linear_w = network.add_matrix_multiply(seq_tensor, MatrixOperation.NONE, out_w, MatrixOperation.TRANSPOSE) # (b, t, d_model) * (1->b, d_model, n_heads * d_qkv) => (b, t, n_heads * d_qkv)
linear_w.name = "{}.linear.w".format(name)
out = linear_w.get_output(0) # (b, t, n_heads * d_qkv)
b = weights["{}.linear.bias".format(name)] # (n_heads * d_qkv,)
out_b = network.add_constant(shape=(1, 1, n_heads * d_qkv), weights=trt.Weights(b)).get_output(0) # (1, 1, n_heads * d_qkv)
linear_b = network.add_elementwise(input1=out, input2=out_b, op=trt.ElementWiseOperation.SUM)
linear_b.name = "{}.linear.b".format(name)
out = linear_b.get_output(0) # (b, t, n_heads * d_qkv)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.linear".format(name))
trans1 = network.add_shuffle(input=out) # (b, t, n_heads * d_qkv) to (b, n_heads, t, d_qkv)
trans1.reshape_dims = Dims(
(batch_size, max_seq_len, n_heads, d_qkv))
trans1.second_transpose = trt.Permutation([0, 2, 1, 3])
trans1.name = "{}.trans1".format(name)
out = trans1.get_output(0) # (b, n_heads, t, d_qkv)
# if self.validate_accuracy:
# self.add_activation_as_output(network, out, "act.{}.reshape".format(name))
q = network.add_slice(input=out,
start=Dims((0, 0, 0, 0)),
shape=Dims(
(batch_size, n_heads, max_seq_len, d_k)),
stride=Dims((1, 1, 1, 1)))
q.name = "{}.slide_q".format(name)
k = network.add_slice(input=out,
start=Dims((0, 0, 0, d_k)),
shape=Dims(
(batch_size, n_heads, max_seq_len, d_k)),
stride=Dims((1, 1, 1, 1)))
k.name = "{}.slide_k".format(name)
v = network.add_slice(input=out,
start=Dims((0, 0, 0, 2 * d_k)),
shape=Dims(
(batch_size, n_heads, max_seq_len, d_k)),
stride=Dims((1, 1, 1, 1)))
v.name = "{}.slide_v".format(name)
out_q = q.get_output(0) # (b, n_heads, t, d_q)
out_k = k.get_output(0) # (b, n_heads, t, d_k)
out_v = v.get_output(0) # (b, n_heads, t, d_v)
# Pytorch: output, attn = self.attention(q, k, v, mask=mask)
out = self.populate_scaled_dot(
name="{}.scaled_dot".format(name), # (b, n_heads, t, d_k)
network=network,
q_tensor=out_q,
k_tensor=out_k,
v_tensor=out_v,
mask_tensor=seq_mask_tensor,
batch_size=batch_size,
max_seq_len=max_seq_len,
n_heads=n_heads,
temperature=d_k**0.5)
# Pytorch:
# output = output.view(self.n_head, bs, seq_len, self.d_v)
# output = output.permute(1, 2, 0, 3).contiguous().view(bs, seq_len, self.n_head * self.d_v)
        trans2 = network.add_shuffle(input=out) # (b, n_heads, t, d_k) to (b, t, n_heads * d_v)
trans2.first_transpose = trt.Permutation([0, 2, 1, 3])
trans2.reshape_dims = Dims((batch_size, max_seq_len, n_heads * d_v))
trans2.name = "{}.trans2".format(name)
out = trans2.get_output(0) # (b, t, n_heads * d_k)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.scaled_dot".format(name))
# Pytorch: output = self.fc(output)
w = weights["{}.fc.weight".format(name)] # (d_model, n_heads * d_v)
out_w = network.add_constant(shape=(1, d_model, n_heads * d_v), weights=trt.Weights(w)).get_output(0) # (1, d_model, n_heads * d_v)
fc_w = network.add_matrix_multiply(out, MatrixOperation.NONE, out_w, MatrixOperation.TRANSPOSE) # (b, t, n_heads * d_k) * (1->b, n_heads * d_k, d_model) => (b, t, d_model)
fc_w.name = "{}.fc.w".format(name)
out = fc_w.get_output(0) # (b, t, d_model)
b = weights["{}.fc.bias".format(name)] # (d_model,)
        out_b = network.add_constant(shape=(1, 1, d_model), weights=trt.Weights(b)).get_output(0) # (1, 1, d_model)
fc_b = network.add_elementwise(input1=out, input2=out_b, op=trt.ElementWiseOperation.SUM)
fc_b.name = "{}.fc.b".format(name)
out = fc_b.get_output(0) # (b, t, d_model)
# if self.validate_accuracy:
# self.add_activation_as_output(network, out, "act.{}.fc".format(name))
# Pytorch: output += residual
residual = network.add_elementwise(input1=seq_tensor, input2=out, op=ElementWiseOperation.SUM)
residual.name = "{}.residual".format(name)
out = residual.get_output(0) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.residual".format(name))
# Pytorch: output = self.layer_norm(output)
out = self.populate_layernorm(name="{}.layer_norm".format(name),
network=network,
weights=weights,
seq_tensor=out,
batch_size=self.batch_size,
max_seq_len=max_seq_len,
d_layer=d_model,
) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.ln".format(name))
return out
def populate_scaled_dot(self, name, network, q_tensor, k_tensor, v_tensor, mask_tensor, batch_size, max_seq_len, n_heads, temperature):
# if self.validate_accuracy:
# self.add_activation_as_output(network, q_tensor, "act.{}.q".format(name))
# self.add_activation_as_output(network, k_tensor, "act.{}.k".format(name))
# self.add_activation_as_output(network, v_tensor, "act.{}.v".format(name))
# Pytorch: attn = self.bmm1(q, k.transpose(1, 2))
attn = network.add_matrix_multiply(q_tensor, MatrixOperation.NONE, k_tensor, MatrixOperation.TRANSPOSE) # (b, n, t, d_k) * (b, n, d_k, t) = (b, n, t, t)
attn.name = "{}.bmm1".format(name)
out = attn.get_output(0)
# if self.validate_accuracy:
# self.add_activation_as_output(network, out, "act.{}.bmm1".format(name))
# Pytorch: attn = attn / self.temperature
temperature = network.add_constant(weights=Weights(np.full((batch_size, n_heads, max_seq_len, max_seq_len), temperature, dtype=np.float32)),
shape=Dims((batch_size, n_heads, max_seq_len, max_seq_len))) # (b, n, t, t)
output_temperature = temperature.get_output(0)
attn = network.add_elementwise(input1=out, input2=output_temperature, op=ElementWiseOperation.DIV) # (b, n, t, t)
attn.name = "{}.div".format(name)
out = attn.get_output(0)
# Pytorch: attn = attn.masked_fill(mask, -65504)
minus_inf = network.add_constant(weights=Weights(np.full((batch_size, n_heads, max_seq_len, max_seq_len), -65504, dtype=np.float32)),
shape=Dims((batch_size, n_heads, max_seq_len, max_seq_len))) # (b, n, t, t)
output_minus_inf = minus_inf.get_output(0)
mask = network.add_shuffle(input=mask_tensor)
mask.reshape_dims = Dims((batch_size, 1, 1, max_seq_len)) # (b, t, 1) -> (b, 1, 1, t)
mask.name = "{}.mask_reshape".format(name)
mask_tensor = mask.get_output(0)
attn = network.add_select(condition=mask_tensor, # (b, 1->n, 1, t)
then_input=out, # (b, n, t, t)
else_input=output_minus_inf) # (b, n, t, t)
attn.name = "{}.mask".format(name)
out = attn.get_output(0)
# if self.validate_accuracy:
# self.add_activation_as_output(network, out, "act.{}.masked_fill".format(name))
# Pytorch: attn = self.softmax(attn)
softmax = network.add_softmax(input=out)
softmax.axes = (1 << 3) # dim=3
softmax.name = "{}.softmax".format(name)
out = softmax.get_output(0)
# if self.validate_accuracy:
# self.add_activation_as_output(network, out, "act.{}.softmax".format(name))
# Pytorch: output = self.bmm2(attn, v)
attn = network.add_matrix_multiply(out, MatrixOperation.NONE, v_tensor, MatrixOperation.NONE) # (b, n, t, t) * (b, n, t, d_k) => (b, n, t, d_k)
attn.name = "{}.bmm2".format(name)
out = attn.get_output(0)
# if self.validate_accuracy:
# self.add_activation_as_output(network, out, "act.{}.bmm2".format(name))
return out
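    # Reference (illustrative): the subgraph built above corresponds to the
    # usual scaled dot-product attention; in PyTorch terms it computes roughly
    #
    #   attn = torch.matmul(q, k.transpose(-2, -1)) / temperature  # (b, n, t, t)
    #   attn = attn.masked_fill(~mask, -65504.0)                   # mask: (b, 1, 1, t), True = keep
    #   attn = torch.softmax(attn, dim=-1)
    #   out = torch.matmul(attn, v)                                # (b, n, t, d_k)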
def populate_pos_wise(self, name, network, weights, seq_tensor,
batch_size, max_seq_len, d_model,
conv_filter_size, conv_kernel_size, conv_padding):
# Pytorch: output = x.transpose(1, 2)
trans1 = network.add_shuffle(input=seq_tensor) # (b, t, d_model) to (b, d_model, t, 1)
trans1.first_transpose = trt.Permutation([0, 2, 1])
trans1.reshape_dims = Dims((batch_size, d_model, max_seq_len, 1))
trans1.name = "{}.trans1".format(name)
out = trans1.get_output(0) # (b, d_model, t, 1)
# Pytorch: output = self.w_1(output)
conv1_w = weights["{}.w_1.weight".format(name)] # (1, conv_filter_size, d_model, conv_kernel_size, 1)
        conv1_b = weights["{}.w_1.bias".format(name)] # (conv_filter_size,)
conv1 = network.add_convolution(input=out, num_output_maps=conv_filter_size, kernel_shape=trt.DimsHW(conv_kernel_size, 1),
kernel=Weights(conv1_w), bias=Weights(conv1_b))
conv1.padding = trt.DimsHW(1, 0)
conv1.name = "{}.conv1".format(name)
out = conv1.get_output(0) # (b, conv_filter_size, t, 1)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.conv1".format(name))
# Pytorch: output = F.relu(output)
relu = network.add_activation(input=out, type=trt.ActivationType.RELU)
relu.name = "{}.relu".format(name)
out = relu.get_output(0) # (b, conv_filter_size, t, 1)
# Pytorch: output = self.w_2(output)
conv2_w = weights["{}.w_2.weight".format(name)] # (1, d_model, conv_filter_size, conv_kernel_size, 1)
conv2_b = weights["{}.w_2.bias".format(name)] # (d_model, )
conv2 = network.add_convolution(input=out, num_output_maps=d_model, kernel_shape=trt.DimsHW(conv_kernel_size, 1),
kernel=Weights(conv2_w), bias=Weights(conv2_b))
conv2.padding = trt.DimsHW(1, 0)
conv2.name = "{}.conv2".format(name)
out = conv2.get_output(0) # (b, d_model, t, 1)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.conv2".format(name))
# Pytorch: output = output.transpose(1, 2)
trans2 = network.add_shuffle(input=out) # (b, d_model, t, 1) to (b, t, d_model)
trans2.first_transpose = trt.Permutation([0, 2, 1, 3])
trans2.reshape_dims = Dims((batch_size, max_seq_len, d_model))
trans2.name = "{}.trans2".format(name)
out = trans2.get_output(0) # (b, t, d_model)
# Pytorch: output += residual
residual = network.add_elementwise(input1=seq_tensor, input2=out, op=trt.ElementWiseOperation.SUM)
residual.name = "{}.residual".format(name)
out = residual.get_output(0) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.residual".format(name))
# Pytorch: output = self.layer_norm(output)
out = self.populate_layernorm(name="{}.layer_norm".format(name),
network=network,
weights=weights,
seq_tensor=out,
batch_size=self.batch_size,
max_seq_len=max_seq_len,
d_layer=d_model,
) # (b, t, d_model)
if self.validate_accuracy:
self.add_activation_as_output(network, out, "act.{}.ln".format(name))
return out
def populate_length_regulator(self, name, network, weights, seq_tensor, seq_mask_tensor, batch_size, trt_max_input_seq_len, trt_max_output_seq_len, d_model):
out_dur = self.populate_duration_predictor(name="{}.duration_predictor".format(name),
network=network,
weights=weights,
seq_tensor=seq_tensor,
seq_mask_tensor=seq_mask_tensor,
batch_size=batch_size,
max_seq_len=trt_max_input_seq_len,
d_model=d_model) # (b, t)
# Pytorch: output.append(torch.repeat_interleave(input[i], repeats, dim=0))
seq = network.add_plugin_v2([seq_tensor, out_dur], self.get_plugin('RepeatPlugin'))
seq.name = "{}.repeat_seq".format(name)
out_seq = seq.get_output(0) # (b, t, d), (b, t) => (b, t', d), dtype: float32
# Type bool to int: seq_mask_tensor. TODO: remove if bool input is allowed in the plugin.
zeros = network.add_constant(weights=Weights(
np.zeros(shape=(batch_size, trt_max_input_seq_len, 1), dtype=np.int32)),
shape=(batch_size, trt_max_input_seq_len, 1))
out_zeros = zeros.get_output(0) # (b, t, 1)
ones = network.add_constant(weights=Weights(
np.ones(shape=(batch_size, trt_max_input_seq_len, 1), dtype=np.int32)),
shape=(batch_size, trt_max_input_seq_len, 1))
out_ones = ones.get_output(0) # (b, t, 1)
seq_mask = network.add_select(condition=seq_mask_tensor, then_input=out_ones, else_input=out_zeros)
seq_mask.name = "{}.seq_mask".format(name)
out_seq_mask = seq_mask.get_output(0) # (b, t, 1)
seq_mask = network.add_plugin_v2([out_seq_mask, out_dur], self.get_plugin('RepeatPlugin'))
seq_mask.name = "{}.repeat_seq_mask".format(name)
out_seq_mask = seq_mask.get_output(0) # (b, t, 1), (b, t) => (b, t', 1), dtype: int32
return out_seq, out_seq_mask, out_dur
def populate_duration_predictor(self, name, network, weights, seq_tensor, seq_mask_tensor, batch_size, max_seq_len, d_model):
        duration_predictor_filter_size = self.model.duration_predictor_filter_size
        duration_predictor_kernel_size = self.model.duration_predictor_kernel_size
# Pytorch: input *= input_mask.to(input.dtype)
# can be skipped.
# Pytorch: out = self.conv1d_1(input.transpose(1,2)).transpose(1,2)
trans1 = network.add_shuffle(input=seq_tensor) # (b, t, d_model) to (b, d_model, t, 1)
trans1.first_transpose = trt.Permutation([0, 2, 1])
trans1.reshape_dims = Dims((batch_size, d_model, max_seq_len, 1))
trans1.name = "{}.trans1".format(name)
out = trans1.get_output(0) # (b, d_model, t, 1)
conv1_w = weights["{}.conv1d_1.weight".format(name)] # (1, d_model, duration_predictor_filter_size, duration_predictor_kernel_size, 1)
conv1_b = weights["{}.conv1d_1.bias".format(name)] # (duration_predictor_filter_size, )
conv1 = network.add_convolution(input=out, num_output_maps=duration_predictor_filter_size, kernel_shape=trt.DimsHW(duration_predictor_kernel_size, 1),
kernel=Weights(conv1_w), bias=Weights(conv1_b))
conv1.padding = trt.DimsHW(1, 0)
conv1.name = "{}.conv1".format(name)
out = conv1.get_output(0) # (b, duration_predictor_filter_size, t, 1)
trans2 = network.add_shuffle(input=out) # (b, duration_predictor_filter_size, t, 1) to (b, t, duration_predictor_filter_size)
trans2.first_transpose = trt.Permutation([0, 2, 1, 3])
trans2.reshape_dims = Dims((batch_size, max_seq_len, duration_predictor_filter_size))
trans2.name = "{}.trans2".format(name)
out = trans2.get_output(0) # (b, t, duration_predictor_filter_size)
# Pytorch: out = self.relu_1(out)
relu = network.add_activation(input=out, type=trt.ActivationType.RELU)
relu.name = "{}.relu1".format(name)
out_relu = relu.get_output(0) # (b, t, duration_predictor_filter_size)
# Pytorch: out = self.layer_norm_1(out)
out = self.populate_layernorm(name="{}.layer_norm_1".format(name),
network=network,
weights=weights,
seq_tensor=out_relu,
d_layer=duration_predictor_filter_size,
batch_size=batch_size,
max_seq_len=max_seq_len)
# Pytorch: out = self.conv1d_2(out.transpose(1,2)).transpose(1,2)
trans3 = network.add_shuffle(input=out) # (b, t, duration_predictor_filter_size) to (b, duration_predictor_filter_size, t, 1)
trans3.first_transpose = trt.Permutation([0, 2, 1])
trans3.reshape_dims = Dims((batch_size, duration_predictor_filter_size, max_seq_len, 1))
trans3.name = "{}.trans3".format(name)
out = trans3.get_output(0) # (b, duration_predictor_filter_size, t, 1)
conv2_w = weights["{}.conv1d_2.weight".format(name)] # (1, duration_predictor_filter_size, duration_predictor_filter_size, duration_predictor_kernel_size, 1)
conv2_b = weights["{}.conv1d_2.bias".format(name)] # (duration_predictor_filter_size, )
conv2 = network.add_convolution(input=out, num_output_maps=duration_predictor_filter_size, kernel_shape=trt.DimsHW(duration_predictor_kernel_size, 1),
kernel=Weights(conv2_w), bias=Weights(conv2_b))
conv2.padding = trt.DimsHW(1, 0)
conv2.name = "{}.conv2".format(name)
out = conv2.get_output(0)
trans4 = network.add_shuffle(input=out) # (b, duration_predictor_filter_size, t, 1) to (b, t, duration_predictor_filter_size)
trans4.first_transpose = trt.Permutation([0, 2, 1, 3])
trans4.reshape_dims = Dims((batch_size, max_seq_len, duration_predictor_filter_size))
trans4.name = "{}.trans4".format(name)
out = trans4.get_output(0) # (b, t, duration_predictor_filter_size)
# Pytorch: out = self.relu_2(out)
relu = network.add_activation(input=out, type=trt.ActivationType.RELU)
relu.name = "{}.relu2".format(name)
out_relu = relu.get_output(0) # (b, t, duration_predictor_filter_size)
# Pytorch: out = self.layer_norm_2(out)
out = self.populate_layernorm(name="{}.layer_norm_2".format(name),
network=network,
weights=weights,
seq_tensor=out_relu,
d_layer=duration_predictor_filter_size,
batch_size=batch_size,
max_seq_len=max_seq_len,
) # (b, t, duration_predictor_filter_size)
# Pytorch: out = self.linear_layer(out)
w = weights["{}.linear_layer.weight".format(name)] # (1, duration_predictor_filter_size)
out_w = network.add_constant(shape=(1, 1, duration_predictor_filter_size), weights=trt.Weights(w)).get_output(0) # (1, 1, duration_predictor_filter_size)
linear_w = network.add_matrix_multiply(out, MatrixOperation.NONE, out_w, MatrixOperation.TRANSPOSE) # (b, t, duration_predictor_filter_size) * (1->b, duration_predictor_filter_size, 1) => (b, t, 1)
linear_w.name = "{}.linear.w".format(name)
out = linear_w.get_output(0) # (b, t, 1)
b = weights["{}.linear_layer.bias".format(name)] # (1,)
out_b = network.add_constant(shape=(1, 1, 1), weights=trt.Weights(b)).get_output(0) # (1, 1, 1)
linear_b = network.add_elementwise(input1=out, input2=out_b, op=trt.ElementWiseOperation.SUM)
linear_b.name = "{}.linear.b".format(name)
out = linear_b.get_output(0) # (b, t, 1)
# Pytorch: out *= input_mask.to(out.dtype)
zeros = network.add_constant(weights=Weights(
np.zeros(shape=(batch_size, max_seq_len, 1), dtype=np.float32)),
shape=(batch_size, max_seq_len, 1))
out_zeros = zeros.get_output(0) # (b, t, 1)
dur = network.add_select(condition=seq_mask_tensor, then_input=out, else_input=out_zeros)
dur.name = "{}.mask".format(name)
out_dur = dur.get_output(0)
# Pytorch: duration = torch.clamp_min(torch.exp(duration) - 1, 0)
exp = network.add_unary(input=out_dur, op=trt.UnaryOperation.EXP)
exp.name = "{}.exp".format(name)
out_exp = exp.get_output(0)
ones = network.add_constant(weights=Weights(
np.ones(shape=(batch_size, max_seq_len, 1), dtype=np.float32)),
shape=(batch_size, max_seq_len, 1))
out_ones = ones.get_output(0) # (b, t, 1)
sub = network.add_elementwise(input1=out_exp, input2=out_ones, op=trt.ElementWiseOperation.SUB)
sub.name = "{}.sub_one".format(name)
out_sub = sub.get_output(0)
dur = network.add_elementwise(input1=out_sub, input2=out_zeros, op=trt.ElementWiseOperation.MAX)
dur.name = "{}.max".format(name)
out_dur = dur.get_output(0)
# Pytorch: repeats = torch.round(repeats).long()
half_ones = network.add_constant(weights=Weights(
np.full((batch_size, max_seq_len, 1), 0.5, dtype=np.float32)),
shape=(batch_size, max_seq_len, 1))
out_half_ones = half_ones.get_output(0) # (b, t, 1)
add = network.add_elementwise(input1=out_dur, input2=out_half_ones, op=trt.ElementWiseOperation.SUM)
add.name = "{}.round_add".format(name)
out_add = add.get_output(0) # (b, t, 1)
dur = network.add_elementwise(input1=out_add, input2=out_ones, op=trt.ElementWiseOperation.FLOOR_DIV)
dur.name = "{}.round_floor_div".format(name)
out_dur = dur.get_output(0) # (b, t, 1)
dur = network.add_shuffle(input=out_dur) # (b, t, 1) to (b, t)
dur.reshape_dims = Dims(shape=(batch_size, max_seq_len))
out_dur = dur.get_output(0) # (b, t)
return out_dur
def populate_layernorm(self, name, network, weights, seq_tensor, batch_size, max_seq_len, d_layer):
# m
mean = network.add_reduce(input=seq_tensor, op=trt.ReduceOperation.AVG, axes=(1 << 2), keep_dims=True)
mean.name = "{}.mean".format(name)
out_mean = mean.get_output(0) # (b, t, 1)
# m^2
square_mean = network.add_elementwise(input1=out_mean, input2=out_mean, op=ElementWiseOperation.PROD)
square_mean.name = "{}.square_mean".format(name)
out_square_mean = square_mean.get_output(0) # (b, t, 1)
# x^2
square = network.add_elementwise(input1=seq_tensor, input2=seq_tensor, op=ElementWiseOperation.PROD)
square.name = "{}.square".format(name)
out_square = square.get_output(0) # (b, t, h)
# e[x^2]
mean_square = network.add_reduce(input=out_square, op=trt.ReduceOperation.AVG, axes=(1 << 2), keep_dims=True)
mean_square.name = "{}.mean_square".format(name)
out_mean_square = mean_square.get_output(0) # (b, t, 1)
# e[x^2] - m^2
sub_square = network.add_elementwise(input1=out_mean_square, input2=out_square_mean, op=ElementWiseOperation.SUB)
sub_square.name = "{}.sub_square".format(name)
out_sub_square = sub_square.get_output(0) # (b, t, 1)
# + eps
eps = network.add_constant(weights=Weights(np.full((batch_size, max_seq_len, 1), 1e-5, dtype=np.float32)),
shape=Dims((batch_size, max_seq_len, 1))) # (b, t, 1)
out_eps = eps.get_output(0)
eps.name = "{}.eps".format(name)
std = network.add_elementwise(input1=out_sub_square, input2=out_eps, op=ElementWiseOperation.SUM)
std.name = "{}.std".format(name)
out_std = std.get_output(0) # (b, t, 1)
# std
sqrt = network.add_unary(input=out_std, op=trt.UnaryOperation.SQRT)
sqrt.name = "{}.sqrt".format(name)
out_sqrt = sqrt.get_output(0) # (b, t, 1)
# y = (x - mean) / std
sub = network.add_elementwise(input1=seq_tensor, input2=out_mean, op=ElementWiseOperation.SUB)
sub.name = "{}.sub".format(name)
out_sub_square = sub.get_output(0) # (b, t, h)
div = network.add_elementwise(input1=out_sub_square, input2=out_sqrt, op=ElementWiseOperation.DIV)
div.name = "{}.div".format(name)
out = div.get_output(0) # (b, t, h)
# Pytorch: y = self.weight * y + self.bias
w = weights["{}.weight".format(name)] # (h, )
out_w = network.add_constant(shape=(1, 1, d_layer), weights=trt.Weights(w)).get_output(0) # (1, 1, h)
scale_w = network.add_elementwise(input1=out, input2=out_w, op=ElementWiseOperation.PROD) # (b, t, h) * (1->b, 1->t, h) => (b, t, h)
scale_w.name = "{}.scale.w".format(name)
out = scale_w.get_output(0) # (b, t, h)
b = weights["{}.bias".format(name)] # (h, )
out_b = network.add_constant(shape=(1, 1, d_layer), weights=trt.Weights(b)).get_output(0) # (1, 1, h)
scale_b = network.add_elementwise(input1=out, input2=out_b, op=ElementWiseOperation.SUM) # (b, t, h) * (1->b, 1->t, h) => (b, t, h)
scale_b.name = "{}.scale.b".format(name)
out = scale_b.get_output(0) # (b, t, h)
return out
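    # Note (illustrative): the subgraph above reproduces nn.LayerNorm over the
    # last dimension using the identity var(x) = E[x^2] - E[x]^2, i.e.
    #
    #   y = (x - mean(x)) / sqrt(E[x^2] - mean(x)^2 + 1e-5) * weight + bias
    #
    # presumably because this TensorRT version has no dedicated layer-norm layer.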
def preprocess_weights(self, weights):
# torch.Tensor to numpy
weights = OrderedDict({k:v.numpy() for k,v in weights.items()})
return weights
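# ---------------------------------------------------------------------------
# Usage sketch (illustrative; building the FastSpeech model and data loader is
# elided, and the checkpoint/engine paths are assumptions):
#
# inferencer = FastSpeechTRTInferencer(
#     model_name="fastspeech",
#     model=model,                     # trained FastSpeech nn.Module
#     data_loader=data_loader,         # yields dicts with "text_encoded", "text_pos", "text_norm"
#     ckpt_path="/checkpoints",
#     trt_file_path="/checkpoints/fastspeech.engine",
#     use_fp16=True,
# )
# outputs = inferencer.infer()         # {"mel": ..., "mel_mask": ..., "text": ...}
# ---------------------------------------------------------------------------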
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/data/transforms | transforms | build | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
from . import transforms as T
def build_transforms(cfg, is_train=True):
if is_train:
min_size = cfg.INPUT.MIN_SIZE_TRAIN
max_size = cfg.INPUT.MAX_SIZE_TRAIN
flip_prob = 0.5 # cfg.INPUT.FLIP_PROB_TRAIN
else:
min_size = cfg.INPUT.MIN_SIZE_TEST
max_size = cfg.INPUT.MAX_SIZE_TEST
flip_prob = 0
to_bgr255 = cfg.INPUT.TO_BGR255
normalize_transform = T.Normalize(
mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD, to_bgr255=to_bgr255
)
transform = T.Compose(
[
T.Resize(min_size, max_size),
T.RandomHorizontalFlip(flip_prob),
T.ToTensor(),
normalize_transform,
]
)
return transform
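# ---------------------------------------------------------------------------
# Usage sketch (illustrative): cfg is the project's yacs config node and the
# PIL image / BoxList target below are placeholders.
#
#   from maskrcnn_benchmark.config import cfg
#   transform = build_transforms(cfg, is_train=True)
#   image_tensor, target = transform(pil_image, target)  # resize, flip, to-tensor, normalize
# ---------------------------------------------------------------------------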
|
PyTorch/SpeechSynthesis/Tacotron2 | Tacotron2 | preprocess_audio2mel | import argparse
import torch
from tacotron2.data_function import TextMelLoader
from tacotron2_common.utils import load_filepaths_and_text
def parse_args(parser):
"""
Parse commandline arguments.
"""
parser.add_argument('-d', '--dataset-path', type=str,
default='./', help='Path to dataset')
parser.add_argument('--wav-files', required=True,
type=str, help='Path to filelist with audio paths and text')
parser.add_argument('--mel-files', required=True,
type=str, help='Path to filelist with mel paths and text')
parser.add_argument('--text-cleaners', nargs='*',
default=['english_cleaners'], type=str,
help='Type of text cleaners for input text')
parser.add_argument('--max-wav-value', default=32768.0, type=float,
help='Maximum audiowave value')
parser.add_argument('--sampling-rate', default=22050, type=int,
help='Sampling rate')
parser.add_argument('--filter-length', default=1024, type=int,
help='Filter length')
parser.add_argument('--hop-length', default=256, type=int,
help='Hop (stride) length')
parser.add_argument('--win-length', default=1024, type=int,
help='Window length')
parser.add_argument('--mel-fmin', default=0.0, type=float,
help='Minimum mel frequency')
parser.add_argument('--mel-fmax', default=8000.0, type=float,
help='Maximum mel frequency')
parser.add_argument('--n-mel-channels', default=80, type=int,
help='Number of bins in mel-spectrograms')
return parser
def audio2mel(dataset_path, audiopaths_and_text, melpaths_and_text, args):
melpaths_and_text_list = load_filepaths_and_text(dataset_path, melpaths_and_text)
audiopaths_and_text_list = load_filepaths_and_text(dataset_path, audiopaths_and_text)
data_loader = TextMelLoader(dataset_path, audiopaths_and_text, args)
for i in range(len(melpaths_and_text_list)):
if i%100 == 0:
print("done", i, "/", len(melpaths_and_text_list))
mel = data_loader.get_mel(audiopaths_and_text_list[i][0])
torch.save(mel, melpaths_and_text_list[i][0])
def main():
    parser = argparse.ArgumentParser(description='PyTorch Tacotron 2 audio/mel preprocessing')
parser = parse_args(parser)
args = parser.parse_args()
args.load_mel_from_disk = False
audio2mel(args.dataset_path, args.wav_files, args.mel_files, args)
if __name__ == '__main__':
main()
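# Example invocation (illustrative; the filelist paths are placeholders in the
# "audio_path|text" / "mel_path|text" format expected above):
#
#   python preprocess_audio2mel.py --dataset-path ./LJSpeech-1.1 \
#       --wav-files filelists/ljs_audio_text_train_filelist.txt \
#       --mel-files filelists/ljs_mel_text_train_filelist.txt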
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/modeling/rpn | rpn | rpn | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
import torch
import torch.nn.functional as F
from torch import nn
from maskrcnn_benchmark.modeling import registry
from maskrcnn_benchmark.modeling.box_coder import BoxCoder
from .loss import make_rpn_loss_evaluator
from .anchor_generator import make_anchor_generator
from .inference import make_rpn_postprocessor
@registry.RPN_HEADS.register("SingleConvRPNHead")
class RPNHead(nn.Module):
"""
Adds a simple RPN Head with classification and regression heads
"""
def __init__(self, cfg, in_channels, num_anchors):
"""
Arguments:
cfg : config
in_channels (int): number of channels of the input feature
num_anchors (int): number of anchors to be predicted
"""
super(RPNHead, self).__init__()
self.conv = nn.Conv2d(
in_channels, in_channels, kernel_size=3, stride=1, padding=1
)
self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)
self.bbox_pred = nn.Conv2d(
in_channels, num_anchors * 4, kernel_size=1, stride=1
)
for l in [self.conv, self.cls_logits, self.bbox_pred]:
torch.nn.init.normal_(l.weight, std=0.01)
torch.nn.init.constant_(l.bias, 0)
def forward(self, x):
logits = []
bbox_reg = []
for feature in x:
t = F.relu(self.conv(feature))
logits.append(self.cls_logits(t))
bbox_reg.append(self.bbox_pred(t))
return logits, bbox_reg
class RPNModule(torch.nn.Module):
"""
    Module for RPN computation. Takes feature maps from the backbone and outputs
    RPN proposals and losses. Works for both FPN and non-FPN.
"""
def __init__(self, cfg):
super(RPNModule, self).__init__()
self.cfg = cfg.clone()
anchor_generator = make_anchor_generator(cfg)
in_channels = cfg.MODEL.BACKBONE.OUT_CHANNELS
rpn_head = registry.RPN_HEADS[cfg.MODEL.RPN.RPN_HEAD]
head = rpn_head(
cfg, in_channels, anchor_generator.num_anchors_per_location()[0]
)
rpn_box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))
box_selector_train = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=True)
box_selector_test = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=False)
loss_evaluator = make_rpn_loss_evaluator(cfg, rpn_box_coder)
self.anchor_generator = anchor_generator
self.head = head
self.box_selector_train = box_selector_train
self.box_selector_test = box_selector_test
self.loss_evaluator = loss_evaluator
def forward(self, images, features, targets=None):
"""
Arguments:
images (ImageList): images for which we want to compute the predictions
features (list[Tensor]): features computed from the images that are
used for computing the predictions. Each tensor in the list
                corresponds to a different feature level
            targets (list[BoxList]): ground-truth boxes present in the image (optional)
Returns:
boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per
image.
losses (dict[Tensor]): the losses for the model during training. During
testing, it is an empty dict.
"""
objectness, rpn_box_regression = self.head(features)
anchors = self.anchor_generator(images, features)
if self.training:
return self._forward_train(anchors, objectness, rpn_box_regression, targets)
else:
return self._forward_test(anchors, objectness, rpn_box_regression)
def _forward_train(self, anchors, objectness, rpn_box_regression, targets):
if self.cfg.MODEL.RPN_ONLY:
# When training an RPN-only model, the loss is determined by the
# predicted objectness and rpn_box_regression values and there is
# no need to transform the anchors into predicted boxes; this is an
# optimization that avoids the unnecessary transformation.
boxes = anchors
else:
# For end-to-end models, anchors must be transformed into boxes and
# sampled into a training batch.
with torch.no_grad():
boxes = self.box_selector_train(
anchors, objectness, rpn_box_regression, targets
)
loss_objectness, loss_rpn_box_reg = self.loss_evaluator(
anchors, objectness, rpn_box_regression, targets
)
losses = {
"loss_objectness": loss_objectness,
"loss_rpn_box_reg": loss_rpn_box_reg,
}
return boxes, losses
def _forward_test(self, anchors, objectness, rpn_box_regression):
boxes = self.box_selector_test(anchors, objectness, rpn_box_regression)
if self.cfg.MODEL.RPN_ONLY:
            # For end-to-end models, the RPN proposals are an intermediate state
            # and there is no need to sort them in decreasing score order. For
            # RPN-only models, the proposals are the final output and we return
            # them in high-to-low confidence order.
inds = [
box.get_field("objectness").sort(descending=True)[1] for box in boxes
]
boxes = [box[ind] for box, ind in zip(boxes, inds)]
return boxes, {}
def build_rpn(cfg):
"""
    Constructs and returns the region proposal network (RPN) module for the given config.
"""
return RPNModule(cfg)
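# Illustrative smoke-test helper (not part of the original module and not called
# anywhere): RPNHead ignores its `cfg` argument, so passing None is enough to run
# the head on random feature maps and inspect the per-level output shapes.
def _example_rpn_head_shapes():
    head = RPNHead(None, in_channels=256, num_anchors=3)
    features = [torch.rand(2, 256, 32, 32), torch.rand(2, 256, 16, 16)]
    logits, bbox_reg = head(features)
    # logits[i] has shape (2, 3, H_i, W_i); bbox_reg[i] has shape (2, 12, H_i, W_i).
    return [t.shape for t in logits], [t.shape for t in bbox_reg]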
|
TensorFlow/Detection/SSD/models/research/object_detection/builders | builders | region_similarity_calculator_builder_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for region_similarity_calculator_builder."""
import tensorflow as tf
from google.protobuf import text_format
from object_detection.builders import region_similarity_calculator_builder
from object_detection.core import region_similarity_calculator
from object_detection.protos import region_similarity_calculator_pb2 as sim_calc_pb2
class RegionSimilarityCalculatorBuilderTest(tf.test.TestCase):
def testBuildIoaSimilarityCalculator(self):
similarity_calc_text_proto = """
ioa_similarity {
}
"""
similarity_calc_proto = sim_calc_pb2.RegionSimilarityCalculator()
text_format.Merge(similarity_calc_text_proto, similarity_calc_proto)
similarity_calc = region_similarity_calculator_builder.build(
similarity_calc_proto)
self.assertTrue(isinstance(similarity_calc,
region_similarity_calculator.IoaSimilarity))
def testBuildIouSimilarityCalculator(self):
similarity_calc_text_proto = """
iou_similarity {
}
"""
similarity_calc_proto = sim_calc_pb2.RegionSimilarityCalculator()
text_format.Merge(similarity_calc_text_proto, similarity_calc_proto)
similarity_calc = region_similarity_calculator_builder.build(
similarity_calc_proto)
self.assertTrue(isinstance(similarity_calc,
region_similarity_calculator.IouSimilarity))
def testBuildNegSqDistSimilarityCalculator(self):
similarity_calc_text_proto = """
neg_sq_dist_similarity {
}
"""
similarity_calc_proto = sim_calc_pb2.RegionSimilarityCalculator()
text_format.Merge(similarity_calc_text_proto, similarity_calc_proto)
similarity_calc = region_similarity_calculator_builder.build(
similarity_calc_proto)
self.assertTrue(isinstance(similarity_calc,
region_similarity_calculator.
NegSqDistSimilarity))
if __name__ == '__main__':
tf.test.main()
|
TensorFlow/Segmentation/UNet_Industrial/notebooks | notebooks | download_and_preprocess_dagm2007_public | #!/bin/bash
##############################################################################
# Copyright (c) Jonathan Dekhtiar - [email protected]
# All Rights Reserved.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
##############################################################################
# Usage: ./download_and_preprocess_dagm2007.sh /path/to/dataset/directory/
if [[ ! "$BASH_VERSION" ]] ; then
echo "Please do not use sh to run this script ($0), just execute it directly" 1>&2
exit 1
fi
if [[ -z "$1" ]]
then
echo -e "Error: Argument is missing. No dataset directory received."
echo -e "Usage: '$0 /path/to/dataset/directory/'"
exit 1
fi
DATASET_DIR=$(realpath -s $1)
ZIP_FILES_DIR=${DATASET_DIR}/zip_files
RAW_IMAGES_DIR=${DATASET_DIR}/raw_images
PUBLIC_ZIP_FILES_DIR=${ZIP_FILES_DIR}/public
PUBLIC_RAW_IMAGES_DIR=${RAW_IMAGES_DIR}/public
if [[ ! -e ${PUBLIC_ZIP_FILES_DIR} ]]; then
echo "creating ${PUBLIC_ZIP_FILES_DIR} ..."
mkdir -p ${PUBLIC_ZIP_FILES_DIR}
fi
if [[ ! -e ${PUBLIC_RAW_IMAGES_DIR} ]]; then
echo "creating ${PUBLIC_RAW_IMAGES_DIR} ..."
mkdir -p ${PUBLIC_RAW_IMAGES_DIR}
fi
PRIVATE_ZIP_FILES_DIR=${ZIP_FILES_DIR}/private
PRIVATE_RAW_IMAGES_DIR=${RAW_IMAGES_DIR}/private
if [[ ! -e ${PRIVATE_ZIP_FILES_DIR} ]]; then
echo "creating ${PRIVATE_ZIP_FILES_DIR} ..."
mkdir -p ${PRIVATE_ZIP_FILES_DIR}
fi
if [[ ! -e ${PRIVATE_RAW_IMAGES_DIR} ]]; then
echo "creating ${PRIVATE_RAW_IMAGES_DIR} ..."
mkdir -p ${PRIVATE_RAW_IMAGES_DIR}
fi
echo -e "\n################################################"
echo -e "Processing Public Dataset"
echo -e "################################################\n"
sleep 2
BASE_PUBLIC_URL="https://resources.mpi-inf.mpg.de/conference/dagm/2007"
declare -a arr=(
"Class1.zip"
"Class1_def.zip"
"Class2.zip"
"Class2_def.zip"
"Class3.zip"
"Class3_def.zip"
"Class4.zip"
"Class4_def.zip"
"Class5.zip"
"Class5_def.zip"
"Class6.zip"
"Class6_def.zip"
)
for file in "${arr[@]}"
do
if [[ ! -e ${PUBLIC_ZIP_FILES_DIR}/${file} ]]; then
echo -e "Downloading File: $BASE_PUBLIC_URL/$file ..."
wget -N ${BASE_PUBLIC_URL}/${file} -O ${PUBLIC_ZIP_FILES_DIR}/${file}
fi
# Unzip without overwriting
unzip -n ${PUBLIC_ZIP_FILES_DIR}/${file} -d ${PUBLIC_RAW_IMAGES_DIR}
done
chmod -R 744 ${PUBLIC_ZIP_FILES_DIR}
chmod -R 744 ${PUBLIC_RAW_IMAGES_DIR}
|
PyTorch/SpeechRecognition/wav2vec2/utils | utils | generate_1h_10h_datasets | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from itertools import chain
from pathlib import Path
def load_lines(fpath):
with open(fpath) as f:
return [line for line in f]
parser = argparse.ArgumentParser()
parser.add_argument('ls_ft', type=Path,
help='Libri-light librispeech_finetuning dir')
parser.add_argument('ls_filelists', type=Path,
help='Directory with .tsv .wrd etc files for LibriSpeech full 960')
parser.add_argument('out', type=Path, help='Output directory')
args = parser.parse_args()
# Load LS
tsv = load_lines(args.ls_filelists / "train-full-960.tsv")
wrd = load_lines(args.ls_filelists / "train-full-960.wrd")
ltr = load_lines(args.ls_filelists / "train-full-960.ltr")
assert len(tsv) == len(wrd) + 1
assert len(ltr) == len(wrd)
files = {}
for path_frames, w, l in zip(tsv[1:], wrd, ltr):
path, _ = path_frames.split("\t")
key = Path(path).stem
files[key] = (path_frames, w, l)
print(f"Loaded {len(files)} entries from {args.ls_filelists}/train-full-960")
# Load LL-LS
files_1h = list((args.ls_ft / "1h").rglob("*.flac"))
files_9h = list((args.ls_ft / "9h").rglob("*.flac"))
print(f"Found {len(files_1h)} files in the 1h dataset")
print(f"Found {len(files_9h)} files in the 9h dataset")
for name, file_iter in [("train-1h", files_1h),
("train-10h", chain(files_1h, files_9h))]:
with open(args.out / f"{name}.tsv", "w") as ftsv, \
open(args.out / f"{name}.wrd", "w") as fwrd, \
open(args.out / f"{name}.ltr", "w") as fltr:
nframes = 0
ftsv.write(tsv[0])
for fpath in file_iter:
key = fpath.stem
t, w, l = files[key]
ftsv.write(t)
fwrd.write(w)
fltr.write(l)
nframes += int(t.split()[1])
print(f"Written {nframes} frames ({nframes / 16000 / 60 / 60:.2f} h at 16kHz)")
|
PyTorch/Classification/ConvNets/resnext101-32x4d/training/TF32 | TF32 | DGXA100_resnext101-32x4d_TF32_90E | python ./multiproc.py --nproc_per_node 8 ./launch.py --model resnext101-32x4d --precision TF32 --mode convergence --platform DGXA100 /imagenet --epochs 90 --mixup 0.0 --workspace ${1:-./} --raport-file raport.json
|
PyTorch/SpeechSynthesis/Tacotron2 | Tacotron2 | loss_functions | # *****************************************************************************
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# *****************************************************************************
import torch
import torch.nn as nn
from tacotron2.loss_function import Tacotron2Loss
from waveglow.loss_function import WaveGlowLoss
def get_loss_function(loss_function, sigma=1.0):
if loss_function == 'Tacotron2':
loss = Tacotron2Loss()
elif loss_function == 'WaveGlow':
loss = WaveGlowLoss(sigma=sigma)
else:
raise NotImplementedError(
"unknown loss function requested: {}".format(loss_function))
loss.cuda()
return loss
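# Illustrative helper (not part of the original training entry point): shows the
# two supported factory calls. Note that get_loss_function() moves the criterion
# to the GPU, so a CUDA-capable device is assumed to be available when this runs.
def _example_build_criteria():
    tacotron2_criterion = get_loss_function('Tacotron2')
    waveglow_criterion = get_loss_function('WaveGlow', sigma=1.0)
    return tacotron2_criterion, waveglow_criterion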
|
PyTorch/SpeechSynthesis/FastPitch/common/text | text | __init__ | from .cmudict import CMUDict
cmudict = CMUDict()
|
Kaldi/SpeechRecognition/scripts/docker | docker | build | #!/bin/bash
# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -eu
# Use development branch of Kaldi for latest feature support
docker build . -f Dockerfile \
--rm -t triton_kaldi_server
docker build . -f Dockerfile.client \
--rm -t triton_kaldi_client
|
PyTorch/SpeechSynthesis/FastPitch/platform | platform | DGXA100_FastPitch_TF32_8GPU | #!/bin/bash
set -a
: ${NUM_GPUS:=8}
: ${BATCH_SIZE:=32}
: ${GRAD_ACCUMULATION:=1}
: ${AMP:=false}
bash scripts/train.sh "$@"
|
TensorFlow2/Detection/Efficientdet/object_detection | object_detection | region_similarity_calculator | # Copyright 2020 Google Research. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Region Similarity Calculators for BoxLists.
Region Similarity Calculators compare a pairwise measure of similarity
between the boxes in two BoxLists.
"""
from abc import ABCMeta
from abc import abstractmethod
import tensorflow.compat.v1 as tf
def area(boxlist, scope=None):
"""Computes area of boxes.
Args:
boxlist: BoxList holding N boxes
scope: name scope.
Returns:
a tensor with shape [N] representing box areas.
"""
with tf.name_scope(scope, 'Area'):
y_min, x_min, y_max, x_max = tf.split(
value=boxlist.get(), num_or_size_splits=4, axis=1)
return tf.squeeze((y_max - y_min) * (x_max - x_min), [1])
def intersection(boxlist1, boxlist2, scope=None):
"""Compute pairwise intersection areas between boxes.
Args:
boxlist1: BoxList holding N boxes
boxlist2: BoxList holding M boxes
scope: name scope.
Returns:
a tensor with shape [N, M] representing pairwise intersections
"""
with tf.name_scope(scope, 'Intersection'):
y_min1, x_min1, y_max1, x_max1 = tf.split(
value=boxlist1.get(), num_or_size_splits=4, axis=1)
y_min2, x_min2, y_max2, x_max2 = tf.split(
value=boxlist2.get(), num_or_size_splits=4, axis=1)
all_pairs_min_ymax = tf.minimum(y_max1, tf.transpose(y_max2))
all_pairs_max_ymin = tf.maximum(y_min1, tf.transpose(y_min2))
intersect_heights = tf.maximum(0.0, all_pairs_min_ymax - all_pairs_max_ymin)
all_pairs_min_xmax = tf.minimum(x_max1, tf.transpose(x_max2))
all_pairs_max_xmin = tf.maximum(x_min1, tf.transpose(x_min2))
intersect_widths = tf.maximum(0.0, all_pairs_min_xmax - all_pairs_max_xmin)
return intersect_heights * intersect_widths
def iou(boxlist1, boxlist2, scope=None):
"""Computes pairwise intersection-over-union between box collections.
Args:
boxlist1: BoxList holding N boxes
boxlist2: BoxList holding M boxes
scope: name scope.
Returns:
a tensor with shape [N, M] representing pairwise iou scores.
"""
with tf.name_scope(scope, 'IOU'):
intersections = intersection(boxlist1, boxlist2)
areas1 = area(boxlist1)
areas2 = area(boxlist2)
unions = (
tf.expand_dims(areas1, 1) + tf.expand_dims(areas2, 0) - intersections)
return tf.where(
tf.equal(intersections, 0.0),
tf.zeros_like(intersections), tf.truediv(intersections, unions))
class RegionSimilarityCalculator(object):
"""Abstract base class for region similarity calculator."""
__metaclass__ = ABCMeta
def compare(self, boxlist1, boxlist2, scope=None):
"""Computes matrix of pairwise similarity between BoxLists.
This op (to be overridden) computes a measure of pairwise similarity between
the boxes in the given BoxLists. Higher values indicate more similarity.
Note that this method simply measures similarity and does not explicitly
perform a matching.
Args:
boxlist1: BoxList holding N boxes.
boxlist2: BoxList holding M boxes.
scope: Op scope name. Defaults to 'Compare' if None.
Returns:
a (float32) tensor of shape [N, M] with pairwise similarity score.
"""
with tf.name_scope(scope, 'Compare', [boxlist1, boxlist2]) as scope:
return self._compare(boxlist1, boxlist2)
@abstractmethod
def _compare(self, boxlist1, boxlist2):
pass
class IouSimilarity(RegionSimilarityCalculator):
"""Class to compute similarity based on Intersection over Union (IOU) metric.
This class computes pairwise similarity between two BoxLists based on IOU.
"""
def _compare(self, boxlist1, boxlist2):
"""Compute pairwise IOU similarity between the two BoxLists.
Args:
boxlist1: BoxList holding N boxes.
boxlist2: BoxList holding M boxes.
Returns:
A tensor with shape [N, M] representing pairwise iou scores.
"""
return iou(boxlist1, boxlist2)
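# Illustrative helper (not part of the original module): exercises IouSimilarity
# with a minimal stand-in object. The stub below is an assumption -- it only
# provides the `.get()` method these functions rely on; real callers pass BoxList
# instances holding boxes in [y_min, x_min, y_max, x_max] order.
def _example_iou_similarity():

    class _StubBoxList(object):

        def __init__(self, boxes):
            self._boxes = tf.constant(boxes, dtype=tf.float32)

        def get(self):
            return self._boxes

    boxes_a = _StubBoxList([[0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 0.5, 0.5]])
    boxes_b = _StubBoxList([[0.0, 0.0, 1.0, 1.0]])
    # Expected pairwise IoU matrix: [[1.0], [0.25]]
    return IouSimilarity().compare(boxes_a, boxes_b)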
|
TensorFlow/Detection/SSD/models | models | CONTRIBUTING | # Contributing guidelines
If you have created a model and would like to publish it here, please send us a
pull request. For those just getting started with pull requests, GitHub has a
[howto](https://help.github.com/articles/using-pull-requests/).
The code for any model in this repository is licensed under the Apache License
2.0.
In order to accept your code, we have to make sure that we can publish it:
You have to sign a Contributor License Agreement (CLA).
### Contributor License Agreements
Please fill out either the individual or corporate Contributor License Agreement (CLA).
* If you are an individual writing original source code and you're sure you own the intellectual property, then you'll need to sign an [individual CLA](http://code.google.com/legal/individual-cla-v1.0.html).
* If you work for a company that wants to allow you to contribute your work, then you'll need to sign a [corporate CLA](http://code.google.com/legal/corporate-cla-v1.0.html).
Follow either of the two links above to access the appropriate CLA and instructions for how to sign and return it. Once we receive it, we'll be able to accept your pull requests.
***NOTE***: Only original source code from you and other people that have signed the CLA can be accepted into the repository.
|
TensorFlow2/Recommendation/WideAndDeep/triton/runner | runner | start_NVIDIA-DGX-1-(1x-V100-32GB) | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
# Install Docker
. /etc/os-release && \
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
echo "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list && \
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey| apt-key add - && \
curl -s -L https://nvidia.github.io/nvidia-docker/$ID$VERSION_ID/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list && \
apt-get update && \
apt-get install -y docker-ce docker-ce-cli containerd.io nvidia-docker2
# Install packages
pip install -r triton/runner/requirements.txt
# Evaluate Runner
python3 -m "triton.runner.__main__" \
--config-path "triton/runner/config_NVIDIA-DGX-1-(1x-V100-32GB).yaml" \
    --device 0
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/deployment/deployment_toolkit/triton_performance_runner/perf_analyzer | perf_analyzer | perf_config | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any
from .exceptions import PerfAnalyzerException
class PerfAnalyzerConfig:
"""
A config class to set arguments to the perf_analyzer.
An argument set to None will use the perf_analyzer's default.
"""
perf_analyzer_args = [
"async",
"sync",
"measurement-interval",
"measurement-mode",
"measurement-request-count",
"concurrency-range",
"request-rate-range",
"request-distribution",
"request-intervals",
"binary-search",
"num-of-sequence",
"latency-threshold",
"max-threads",
"stability-percentage",
"max-trials",
"percentile",
"input-data",
"shared-memory",
"output-shared-memory-size",
"sequence-length",
"string-length",
"string-data",
]
perf_analyzer_multiple_args = [
"shape",
]
input_to_options = [
"model-name",
"model-version",
"batch-size",
"url",
"protocol",
"latency-report-file",
"streaming",
]
input_to_verbose = ["verbose", "extra-verbose"]
def __init__(self):
"""
Construct a PerfAnalyzerConfig
"""
self._args = {k: None for k in self.perf_analyzer_args}
self._multiple_args = {k: [] for k in self.perf_analyzer_multiple_args}
self._options = {
"-m": None,
"-x": None,
"-b": None,
"-u": None,
"-i": None,
"-f": None,
"-H": None,
"-c": None,
"-t": None,
}
self._verbose = {"-v": None, "-v -v": None}
self._input_to_options = {
"model-name": "-m",
"model-version": "-x",
"batch-size": "-b",
"url": "-u",
"protocol": "-i",
"latency-report-file": "-f",
"streaming": "-H",
"concurrency": "-c",
"threads": "-t",
}
self._input_to_verbose = {"verbose": "-v", "extra-verbose": "-v -v"}
@classmethod
def allowed_keys(cls):
"""
Returns
-------
list of str
The keys that are allowed to be
passed into perf_analyzer
"""
return (
list(cls.perf_analyzer_args)
+ list(cls.perf_analyzer_multiple_args)
+ list(cls.input_to_options)
+ list(cls.input_to_verbose)
)
def update_config(self, params=None):
"""
Allows setting values from a
params dict
Parameters
----------
params: dict
keys are allowed args to perf_analyzer
"""
if params:
for key in params:
self[key] = params[key]
def to_cli_string(self):
"""
Utility function to convert a config into a
string of arguments to the perf_analyzer with CLI.
Returns
-------
str
cli command string consisting of all arguments
to the perf_analyzer set in the config, without
the executable name.
"""
# single dashed options, then verbose flags, then main args
args = [f"{k} {v}" for k, v in self._options.items() if v]
args += [k for k, v in self._verbose.items() if v]
args += [f"--{k}={v}" for k, v in self._args.items() if v]
for k, v in self._multiple_args.items():
for item in v:
args.append(f"--{k}={item}")
return " ".join(args)
def __getitem__(self, key: str):
"""
Gets an arguments value in config
Parameters
----------
key : str
The name of the argument to the perf_analyzer
Returns
-------
The value that the argument is set to in this config
Raises
------
        PerfAnalyzerException
If argument not found in the config
"""
if key in self._args:
return self._args[key]
elif key in self._multiple_args:
return self._multiple_args[key]
elif key in self._input_to_options:
return self._options[self._input_to_options[key]]
elif key in self._input_to_verbose:
return self._verbose[self._input_to_verbose[key]]
else:
raise PerfAnalyzerException(f"'{key}' Key not found in config")
def __setitem__(self, key: str, value: Any):
"""
Sets an arguments value in config
after checking if defined/supported.
Parameters
----------
key : str
The name of the argument to the perf_analyzer
value : (any)
The value to which the argument is being set
Raises
------
        PerfAnalyzerException
If key is unsupported or undefined in the
config class
"""
if key in self._args:
self._args[key] = value
elif key in self._multiple_args:
self._multiple_args[key].append(value)
elif key in self._input_to_options:
self._options[self._input_to_options[key]] = value
elif key in self._input_to_verbose:
self._verbose[self._input_to_verbose[key]] = value
else:
raise PerfAnalyzerException(
f"The argument '{key}' to the perf_analyzer " "is not supported by the model analyzer."
)
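# Illustrative helper (not part of the original toolkit): shows the typical flow
# of filling a config and rendering the CLI string. The key names come from the
# allowed-keys lists above; the concrete values are example placeholders only.
def _example_cli_string():
    config = PerfAnalyzerConfig()
    config.update_config(
        {
            "model-name": "my_model",
            "batch-size": 8,
            "concurrency-range": "1:4:1",
            "shape": "input__0:1,512",
            "verbose": True,
        }
    )
    # e.g. "-m my_model -b 8 -v --concurrency-range=1:4:1 --shape=input__0:1,512"
    return config.to_cli_string()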
|
TensorFlow/Detection/SSD/models/research/object_detection/builders | builders | box_coder_builder_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for box_coder_builder."""
import tensorflow as tf
from google.protobuf import text_format
from object_detection.box_coders import faster_rcnn_box_coder
from object_detection.box_coders import keypoint_box_coder
from object_detection.box_coders import mean_stddev_box_coder
from object_detection.box_coders import square_box_coder
from object_detection.builders import box_coder_builder
from object_detection.protos import box_coder_pb2
class BoxCoderBuilderTest(tf.test.TestCase):
def test_build_faster_rcnn_box_coder_with_defaults(self):
box_coder_text_proto = """
faster_rcnn_box_coder {
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertIsInstance(box_coder_object,
faster_rcnn_box_coder.FasterRcnnBoxCoder)
self.assertEqual(box_coder_object._scale_factors, [10.0, 10.0, 5.0, 5.0])
def test_build_faster_rcnn_box_coder_with_non_default_parameters(self):
box_coder_text_proto = """
faster_rcnn_box_coder {
y_scale: 6.0
x_scale: 3.0
height_scale: 7.0
width_scale: 8.0
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertIsInstance(box_coder_object,
faster_rcnn_box_coder.FasterRcnnBoxCoder)
self.assertEqual(box_coder_object._scale_factors, [6.0, 3.0, 7.0, 8.0])
def test_build_keypoint_box_coder_with_defaults(self):
box_coder_text_proto = """
keypoint_box_coder {
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertIsInstance(box_coder_object, keypoint_box_coder.KeypointBoxCoder)
self.assertEqual(box_coder_object._scale_factors, [10.0, 10.0, 5.0, 5.0])
def test_build_keypoint_box_coder_with_non_default_parameters(self):
box_coder_text_proto = """
keypoint_box_coder {
num_keypoints: 6
y_scale: 6.0
x_scale: 3.0
height_scale: 7.0
width_scale: 8.0
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertIsInstance(box_coder_object, keypoint_box_coder.KeypointBoxCoder)
self.assertEqual(box_coder_object._num_keypoints, 6)
self.assertEqual(box_coder_object._scale_factors, [6.0, 3.0, 7.0, 8.0])
def test_build_mean_stddev_box_coder(self):
box_coder_text_proto = """
mean_stddev_box_coder {
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertTrue(
isinstance(box_coder_object,
mean_stddev_box_coder.MeanStddevBoxCoder))
def test_build_square_box_coder_with_defaults(self):
box_coder_text_proto = """
square_box_coder {
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertTrue(
isinstance(box_coder_object, square_box_coder.SquareBoxCoder))
self.assertEqual(box_coder_object._scale_factors, [10.0, 10.0, 5.0])
def test_build_square_box_coder_with_non_default_parameters(self):
box_coder_text_proto = """
square_box_coder {
y_scale: 6.0
x_scale: 3.0
length_scale: 7.0
}
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
box_coder_object = box_coder_builder.build(box_coder_proto)
self.assertTrue(
isinstance(box_coder_object, square_box_coder.SquareBoxCoder))
self.assertEqual(box_coder_object._scale_factors, [6.0, 3.0, 7.0])
def test_raise_error_on_empty_box_coder(self):
box_coder_text_proto = """
"""
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge(box_coder_text_proto, box_coder_proto)
with self.assertRaises(ValueError):
box_coder_builder.build(box_coder_proto)
if __name__ == '__main__':
tf.test.main()
|
TensorFlow/Segmentation/UNet_3D_Medical/scripts | scripts | unet3d_train_single | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script launches 3D-UNet FP32 training on fold 0 and runs for 16000 iterations.
# Usage:
# bash examples/unet3d_train_single.sh <number/of/gpus> <path/to/dataset> <path/to/results/directory> <batch/size>
horovodrun -np $1 python main.py --data_dir $2 --model_dir $3 --exec_mode train_and_evaluate --augment --max_steps 16000 --batch_size $4 --xla --fold 0
|
PyTorch/Segmentation/MaskRCNN/pytorch/tests | tests | test_metric_logger | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import unittest
from maskrcnn_benchmark.utils.metric_logger import MetricLogger
class TestMetricLogger(unittest.TestCase):
def test_update(self):
meter = MetricLogger()
for i in range(10):
meter.update(metric=float(i))
m = meter.meters["metric"]
self.assertEqual(m.count, 10)
self.assertEqual(m.total, 45)
self.assertEqual(m.median, 4)
self.assertEqual(m.avg, 4.5)
def test_no_attr(self):
meter = MetricLogger()
_ = meter.meters
_ = meter.delimiter
def broken():
_ = meter.not_existent
self.assertRaises(AttributeError, broken)
if __name__ == "__main__":
unittest.main()
|
CUDA-Optimized/FastSpeech | FastSpeech | generate | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import pathlib
import sys
import time
import fire
import librosa
import torch
from fastspeech.data_load import PadDataLoader
from fastspeech.dataset.text_dataset import TextDataset
from fastspeech.inferencer.fastspeech_inferencer import FastSpeechInferencer
from fastspeech.model.fastspeech import Fastspeech
from fastspeech import hparam as hp, DEFAULT_DEVICE
from fastspeech.utils.logging import tprint
from fastspeech.utils.time import TimeElapsed
from fastspeech.utils.pytorch import to_device_async, to_cpu_numpy
from fastspeech.infer import get_inferencer
from fastspeech.inferencer.waveglow_inferencer import WaveGlowInferencer
MAX_FILESIZE = 128
# TODO test with different speeds
def generate(hparam='infer.yaml',
text='test_sentences.txt',
results_path='results',
device=DEFAULT_DEVICE,
**kwargs):
"""The script for generating waveforms from texts with a vocoder.
By default, this script assumes to load parameters in the default config file, fastspeech/hparams/infer.yaml.
Besides the flags, you can also set parameters in the config file via the command-line. For examples,
--checkpoint_path=CHECKPOINT_PATH
Path to checkpoint directory. The latest checkpoint will be loaded.
--waveglow_path=WAVEGLOW_PATH
Path to the WaveGlow checkpoint file.
--waveglow_engine_path=WAVEGLOW_ENGINE_PATH
Path to the WaveGlow engine file. It can be only used with --use_trt=True.
--batch_size=BATCH_SIZE
Batch size to use. Defaults to 1.
Refer to fastspeech/hparams/infer.yaml to see more parameters.
Args:
hparam (str, optional): Path to default config file. Defaults to "infer.yaml".
text (str, optional): a sample text or a text file path to generate its waveform. Defaults to 'test_sentences.txt'.
results_path (str, optional): Path to output waveforms directory. Defaults to 'results'.
        device (str, optional): Device to use. Defaults to "cuda" if available, or "cpu".
"""
hp.set_hparam(hparam, kwargs)
if os.path.isfile(text):
f = open(text, 'r', encoding="utf-8")
texts = f.read().splitlines()
else: # single string
texts = [text]
dataset = TextDataset(texts)
data_loader = PadDataLoader(dataset,
batch_size=hp.batch_size,
num_workers=hp.n_workers,
shuffle=False,
drop_last=False)
# text to mel
model = Fastspeech(
max_seq_len=hp.max_seq_len,
d_model=hp.d_model,
phoneme_side_n_layer=hp.phoneme_side_n_layer,
phoneme_side_head=hp.phoneme_side_head,
phoneme_side_conv1d_filter_size=hp.phoneme_side_conv1d_filter_size,
phoneme_side_output_size=hp.phoneme_side_output_size,
mel_side_n_layer=hp.mel_side_n_layer,
mel_side_head=hp.mel_side_head,
mel_side_conv1d_filter_size=hp.mel_side_conv1d_filter_size,
mel_side_output_size=hp.mel_side_output_size,
duration_predictor_filter_size=hp.duration_predictor_filter_size,
duration_predictor_kernel_size=hp.duration_predictor_kernel_size,
fft_conv1d_kernel=hp.fft_conv1d_kernel,
fft_conv1d_padding=hp.fft_conv1d_padding,
dropout=hp.dropout,
n_mels=hp.num_mels,
fused_layernorm=hp.fused_layernorm
)
fs_inferencer = get_inferencer(model, data_loader, device)
# set up WaveGlow
if hp.use_trt:
from fastspeech.trt.waveglow_trt_inferencer import WaveGlowTRTInferencer
wb_inferencer = WaveGlowTRTInferencer(
ckpt_file=hp.waveglow_path, engine_file=hp.waveglow_engine_path, use_fp16=hp.use_fp16)
else:
wb_inferencer = WaveGlowInferencer(
ckpt_file=hp.waveglow_path, device=device, use_fp16=hp.use_fp16)
tprint("Generating {} sentences.. ".format(len(dataset)))
with fs_inferencer, wb_inferencer:
try:
for i in range(len(data_loader)):
tprint("------------- BATCH # {} -------------".format(i))
                with TimeElapsed(name="Inference Time: E2E", format=":.6f"):
                    ## Text-to-Mel ##
                    with TimeElapsed(name="Inference Time: FastSpeech", device=device, cuda_sync=True, format=":.6f"), torch.no_grad():
outputs = fs_inferencer.infer()
texts = outputs["text"]
mels = outputs["mel"] # (b, n_mels, t)
mel_masks = outputs['mel_mask'] # (b, t)
# assert(mels.is_cuda)
# remove paddings
mel_lens = mel_masks.sum(axis=1)
max_len = mel_lens.max()
mels = mels[..., :max_len]
mel_masks = mel_masks[..., :max_len]
## Vocoder ##
                    with TimeElapsed(name="Inference Time: WaveGlow", device=device, cuda_sync=True, format=":.6f"), torch.no_grad():
wavs = wb_inferencer.infer(mels)
wavs = to_cpu_numpy(wavs)
## Write wavs ##
pathlib.Path(results_path).mkdir(parents=True, exist_ok=True)
for i, (text, wav) in enumerate(zip(texts, wavs)):
tprint("TEXT #{}: \"{}\"".format(i, text))
# remove paddings in case of batch size > 1
wav_len = mel_lens[i] * hp.hop_len
wav = wav[:wav_len]
path = os.path.join(results_path, text[:MAX_FILESIZE] + ".wav")
librosa.output.write_wav(path, wav, hp.sr)
except StopIteration:
tprint("Generation has been done.")
except KeyboardInterrupt:
tprint("Generation has been canceled.")
if __name__ == '__main__':
fire.Fire(generate)
|
PyTorch/Recommendation/DLRM/preproc | preproc | verify_criteo_downloaded | # Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#! /bin/bash
set -e
set -x
download_dir=${1:-'/data/dlrm/criteo'}
cd ${download_dir}
for i in $(seq 0 23); do
filename=day_${i}
if [ -f $filename ]; then
echo "$filename exists, OK"
else
echo "$filename does not exist. Please follow the instructions at: http://labs.criteo.com/2013/12/download-terabyte-click-logs/ to download it"
exit 1
fi
done
cd -
echo "Criteo data verified"
|
PyTorch/Classification/ConvNets/efficientnet/inference/AMP | AMP | DGXA100_efficientnet-b4_AMP |
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 1 --workspace ${1:-./} --raport-file raport_1.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 2 --workspace ${1:-./} --raport-file raport_2.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 4 --workspace ${1:-./} --raport-file raport_4.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 8 --workspace ${1:-./} --raport-file raport_8.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 16 --workspace ${1:-./} --raport-file raport_16.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 32 --workspace ${1:-./} --raport-file raport_32.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 64 --workspace ${1:-./} --raport-file raport_64.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 128 --workspace ${1:-./} --raport-file raport_128.json
python ./multiproc.py --nproc_per_node 8 ./launch.py --model efficientnet-b4 --precision AMP --mode benchmark_inference --platform DGXA100 /imagenet -b 256 --workspace ${1:-./} --raport-file raport_256.json
|
PyTorch/LanguageModeling/BART/utils | utils | data_collator | # Copyright (c) 2022 NVIDIA CORPORATION. All rights reserved.
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
import math
import warnings
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, NewType, Optional, Tuple, Union
import torch
from torch.nn.utils.rnn import pad_sequence
from bart.tokenization.tokenization_utils_base import BatchEncoding, PaddingStrategy, PreTrainedTokenizerBase
from bart.modeling.modeling_bart import shift_tokens_right
InputDataClass = NewType("InputDataClass", Any)
"""
A DataCollator is a function that takes a list of samples from a Dataset and collate them into a batch, as a dictionary
of Tensors.
"""
DataCollator = NewType("DataCollator", Callable[[List[InputDataClass]], Dict[str, torch.Tensor]])
def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:
"""
Very simple data collator that simply collates batches of dict-like objects and performs special handling for
potential keys named:
- ``label``: handles a single value (int or float) per object
- ``label_ids``: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs
    to the model. See the glue and ner examples for how it's useful.
"""
# In this function we'll make the assumption that all `features` in the batch
# have the same attributes.
# So we will look at the first element as a proxy for what attributes exist
# on the whole batch.
if not isinstance(features[0], (dict, BatchEncoding)):
features = [vars(f) for f in features]
first = features[0]
batch = {}
# Special handling for labels.
# Ensure that tensor is created with the correct type
# (it should be automatically the case, but let's make sure of it.)
if "label" in first and first["label"] is not None:
label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
dtype = torch.long if isinstance(label, int) else torch.float
batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
elif "label_ids" in first and first["label_ids"] is not None:
if isinstance(first["label_ids"], torch.Tensor):
batch["labels"] = torch.stack([f["label_ids"] for f in features])
else:
dtype = torch.long if type(first["label_ids"][0]) is int else torch.float
batch["labels"] = torch.tensor([f["label_ids"] for f in features], dtype=dtype)
# Handling of all other possible keys.
# Again, we will use the first element to figure out which key/values are not None for this model.
for k, v in first.items():
if k not in ("label", "label_ids") and v is not None and not isinstance(v, str):
if isinstance(v, torch.Tensor):
batch[k] = torch.stack([f[k] for f in features])
else:
batch[k] = torch.tensor([f[k] for f in features])
return batch
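# Illustrative helper (not part of the original module and never imported by the
# training code): shows the tensors default_data_collator builds from two toy
# feature dicts; the ids and labels below are made-up placeholder values.
def _example_default_collation():
    features = [
        {"input_ids": [101, 2023, 102], "label": 1},
        {"input_ids": [101, 2003, 102], "label": 0},
    ]
    batch = default_data_collator(features)
    # batch["labels"] -> tensor([1, 0]) with dtype torch.long
    # batch["input_ids"] -> tensor of shape (2, 3)
    return batch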
@dataclass
class DataCollatorWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
The tokenizer used for encoding the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
max_length (:obj:`int`, `optional`):
Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (:obj:`int`, `optional`):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
7.5 (Volta).
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
batch = self.tokenizer.pad(
features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="pt",
)
if "label" in batch:
batch["labels"] = batch["label"]
del batch["label"]
if "label_ids" in batch:
batch["labels"] = batch["label_ids"]
del batch["label_ids"]
return batch
@dataclass
class DataCollatorForTokenClassification:
"""
Data collator that will dynamically pad the inputs received, as well as the labels.
Args:
tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
The tokenizer used for encoding the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
max_length (:obj:`int`, `optional`):
Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (:obj:`int`, `optional`):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
7.5 (Volta).
label_pad_token_id (:obj:`int`, `optional`, defaults to -100):
The id to use when padding the labels (-100 will be automatically ignore by PyTorch loss functions).
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
label_pad_token_id: int = -100
def __call__(self, features):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature[label_name] for feature in features] if label_name in features[0].keys() else None
batch = self.tokenizer.pad(
features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
# Conversion to tensors will fail if we have labels as they are not of the same length yet.
return_tensors="pt" if labels is None else None,
)
if labels is None:
return batch
sequence_length = torch.tensor(batch["input_ids"]).shape[1]
padding_side = self.tokenizer.padding_side
if padding_side == "right":
batch["labels"] = [label + [self.label_pad_token_id] * (sequence_length - len(label)) for label in labels]
else:
batch["labels"] = [[self.label_pad_token_id] * (sequence_length - len(label)) + label for label in labels]
batch = {k: torch.tensor(v, dtype=torch.int64) for k, v in batch.items()}
return batch
def _collate_batch(examples, tokenizer, masks=None, max_length=None):
"""Collate `examples` into a batch, using the information in `tokenizer` for padding if necessary."""
# Tensorize if necessary.
if isinstance(examples[0], (list, tuple)):
examples = [torch.tensor(e, dtype=torch.long) for e in examples]
# Check if padding is necessary.
length_of_first = examples[0].size(0)
are_tensors_same_length = (
all(x.size(0) == length_of_first for x in examples) and
(max_length is None or max_length == length_of_first))
if are_tensors_same_length:
if masks is None:
return torch.stack(examples, dim=0)
else:
return torch.stack(examples, dim=0), torch.stack(masks, dim=0)
# If yes, check if we have a `pad_token`.
if tokenizer._pad_token is None:
raise ValueError(
"You are attempting to pad samples but the tokenizer you are using"
f" ({tokenizer.__class__.__name__}) does not have a pad token."
)
# Creating the full tensor and filling it with our data.
max_length = max_length if max_length is not None else max(x.size(0) for x in examples)
result = examples[0].new_full([len(examples), max_length], tokenizer.pad_token_id)
for i, example in enumerate(examples):
if tokenizer.padding_side == "right":
result[i, : example.shape[0]] = example
else:
result[i, -example.shape[0] :] = example
if masks is not None:
result_mask = masks[0].new_full([len(masks), max_length], 0)
for i, mask in enumerate(masks):
if tokenizer.padding_side == "right":
result_mask[i, : mask.shape[0]] = mask
else:
result_mask[i, -mask.shape[0] :] = mask
return result, result_mask
return result
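# Illustrative helper (not part of the original module): pads two toy sequences
# with a minimal stand-in tokenizer. The stub is an assumption -- it exposes only
# the attributes _collate_batch actually reads; real callers pass a
# PreTrainedTokenizerBase.
def _example_collate_batch():

    class _StubTokenizer:
        _pad_token = "[PAD]"
        pad_token_id = 0
        padding_side = "right"

    examples = [[5, 6, 7], [8, 9]]
    # -> tensor([[5, 6, 7], [8, 9, 0]])
    return _collate_batch(examples, _StubTokenizer())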
def tolist(x: Union[List[Any], torch.Tensor]):
return x.tolist() if isinstance(x, torch.Tensor) else x
@dataclass
class DataCollatorForLanguageModeling:
"""
Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they
are not all of the same length.
Args:
tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
The tokenizer used for encoding the data.
mlm (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to use masked language modeling. If set to :obj:`False`, the labels are the same as the
inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for
non-masked tokens and the value to predict for the masked token.
mlm_probability (:obj:`float`, `optional`, defaults to 0.15):
The probability with which to (randomly) mask tokens in the input, when :obj:`mlm` is set to :obj:`True`.
.. note::
For best performance, this data collator should be used with a dataset having items that are dictionaries or
BatchEncoding, with the :obj:`"special_tokens_mask"` key, as returned by a
:class:`~transformers.PreTrainedTokenizer` or a :class:`~transformers.PreTrainedTokenizerFast` with the
argument :obj:`return_special_tokens_mask=True`.
"""
tokenizer: PreTrainedTokenizerBase
mlm: bool = True
mlm_probability: float = 0.15
def __post_init__(self):
if self.mlm and self.tokenizer.mask_token is None:
raise ValueError(
"This tokenizer does not have a mask token which is necessary for masked language modeling. "
"You should pass `mlm=False` to train on causal language modeling instead."
)
def __call__(
self, examples: List[Union[List[int], torch.Tensor, Dict[str, torch.Tensor]]]
) -> Dict[str, torch.Tensor]:
# Handle dict or lists with proper padding and conversion to tensor.
if isinstance(examples[0], (dict, BatchEncoding)):
batch = self.tokenizer.pad(examples, return_tensors="pt")
else:
batch = {"input_ids": _collate_batch(examples, self.tokenizer)}
# If special token mask has been preprocessed, pop it from the dict.
special_tokens_mask = batch.pop("special_tokens_mask", None)
if self.mlm:
batch["input_ids"], batch["labels"] = self.mask_tokens(
batch["input_ids"], special_tokens_mask=special_tokens_mask
)
else:
labels = batch["input_ids"].clone()
if self.tokenizer.pad_token_id is not None:
labels[labels == self.tokenizer.pad_token_id] = -100
batch["labels"] = labels
return batch
def mask_tokens(
self, inputs: torch.Tensor, special_tokens_mask: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
"""
labels = inputs.clone()
# We sample a few tokens in each sequence for MLM training (with probability `self.mlm_probability`)
probability_matrix = torch.full(labels.shape, self.mlm_probability)
if special_tokens_mask is None:
special_tokens_mask = [
self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
special_tokens_mask = torch.tensor(special_tokens_mask, dtype=torch.bool)
else:
special_tokens_mask = special_tokens_mask.bool()
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
labels[~masked_indices] = -100 # We only compute loss on masked tokens
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
return inputs, labels
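# Illustrative helper (not part of the original module): demonstrates the
# 80/10/10 masking policy documented above with a minimal stand-in tokenizer.
# The stub is an assumption -- it implements only the attributes and methods
# mask_tokens() actually touches; real callers pass a PreTrainedTokenizerBase.
def _example_mlm_masking():

    class _StubTokenizer:
        mask_token = "[MASK]"
        pad_token_id = 0

        def get_special_tokens_mask(self, token_ids, already_has_special_tokens=True):
            # Treat id 0 as a special token that must never be masked.
            return [1 if token_id == 0 else 0 for token_id in token_ids]

        def convert_tokens_to_ids(self, token):
            return 103  # hypothetical id for [MASK]

        def __len__(self):
            return 30522  # hypothetical vocabulary size

    collator = DataCollatorForLanguageModeling(tokenizer=_StubTokenizer())
    inputs = torch.tensor([[0, 5, 6, 7, 8, 9, 0]])
    masked_inputs, labels = collator.mask_tokens(inputs.clone())
    # labels hold the original ids at masked positions and -100 everywhere else.
    return masked_inputs, labels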
@dataclass
class DataCollatorForWholeWordMask(DataCollatorForLanguageModeling):
"""
Data collator used for language modeling.
- collates batches of tensors, honoring their tokenizer's pad_token
- preprocesses batches for masked language modeling
"""
def __call__(
self, examples: List[Union[List[int], torch.Tensor, Dict[str, torch.Tensor]]]
) -> Dict[str, torch.Tensor]:
if isinstance(examples[0], (dict, BatchEncoding)):
input_ids = [e["input_ids"] for e in examples]
else:
input_ids = examples
examples = [{"input_ids": e} for e in examples]
batch_input = _collate_batch(input_ids, self.tokenizer)
mask_labels = []
for e in examples:
ref_tokens = []
for id in tolist(e["input_ids"]):
token = self.tokenizer._convert_id_to_token(id)
ref_tokens.append(token)
            # For Chinese tokens, we need extra info to mark sub-words, e.g. [喜,欢] -> [喜,##欢]
if "chinese_ref" in e:
ref_pos = tolist(e["chinese_ref"])
len_seq = e["input_ids"].size(0)
for i in range(len_seq):
if i in ref_pos:
ref_tokens[i] = "##" + ref_tokens[i]
mask_labels.append(self._whole_word_mask(ref_tokens))
batch_mask = _collate_batch(mask_labels, self.tokenizer)
inputs, labels = self.mask_tokens(batch_input, batch_mask)
return {"input_ids": inputs, "labels": labels}
def _whole_word_mask(self, input_tokens: List[str], max_predictions=512):
"""
Get 0/1 labels for masked tokens with whole word mask proxy
"""
cand_indexes = []
for (i, token) in enumerate(input_tokens):
if token == "[CLS]" or token == "[SEP]":
continue
if len(cand_indexes) >= 1 and token.startswith("##"):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
random.shuffle(cand_indexes)
num_to_predict = min(max_predictions, max(1, int(round(len(input_tokens) * self.mlm_probability))))
masked_lms = []
covered_indexes = set()
for index_set in cand_indexes:
if len(masked_lms) >= num_to_predict:
break
# If adding a whole-word mask would exceed the maximum number of
# predictions, then just skip this candidate.
if len(masked_lms) + len(index_set) > num_to_predict:
continue
is_any_index_covered = False
for index in index_set:
if index in covered_indexes:
is_any_index_covered = True
break
if is_any_index_covered:
continue
for index in index_set:
covered_indexes.add(index)
masked_lms.append(index)
assert len(covered_indexes) == len(masked_lms)
mask_labels = [1 if i in covered_indexes else 0 for i in range(len(input_tokens))]
return mask_labels
def mask_tokens(self, inputs: torch.Tensor, mask_labels: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
        Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
        Setting 'mask_labels' means we use whole word masking (wwm); indices are masked directly according to the
        provided reference.
"""
if self.tokenizer.mask_token is None:
raise ValueError(
"This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer."
)
labels = inputs.clone()
        # We sample a few tokens in each sequence for masked-LM training (with probability args.mlm_probability, which defaults to 0.15 as in BERT/RoBERTa)
probability_matrix = mask_labels
special_tokens_mask = [
self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
if self.tokenizer._pad_token is not None:
padding_mask = labels.eq(self.tokenizer.pad_token_id)
probability_matrix.masked_fill_(padding_mask, value=0.0)
masked_indices = probability_matrix.bool()
labels[~masked_indices] = -100 # We only compute loss on masked tokens
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
return inputs, labels
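# ---------------------------------------------------------------------------
# Illustrative sketch (not used by the collator above): how "##" continuation
# tokens are grouped into whole-word candidates before masking. The tokens are
# made up for the example.
# ---------------------------------------------------------------------------
def _example_whole_word_candidates():
    tokens = ["[CLS]", "new", "##found", "##land", "is", "cold", "[SEP]"]
    cand_indexes = []
    for i, token in enumerate(tokens):
        if token in ("[CLS]", "[SEP]"):
            continue
        if cand_indexes and token.startswith("##"):
            cand_indexes[-1].append(i)  # continuation of the previous word
        else:
            cand_indexes.append([i])  # start of a new word
    return cand_indexes  # [[1, 2, 3], [4], [5]]: masking index 1 also masks 2 and 3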
@dataclass
class DataCollatorForSOP(DataCollatorForLanguageModeling):
"""
Data collator used for sentence order prediction task.
- collates batches of tensors, honoring their tokenizer's pad_token
- preprocesses batches for both masked language modeling and sentence order prediction
"""
def __init__(self, *args, **kwargs):
warnings.warn(
"DataCollatorForSOP is deprecated and will be removed in a future version, you can now use "
"DataCollatorForLanguageModeling instead.",
FutureWarning,
)
def __call__(self, examples: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
input_ids = [example["input_ids"] for example in examples]
input_ids = _collate_batch(input_ids, self.tokenizer)
input_ids, labels, attention_mask = self.mask_tokens(input_ids)
token_type_ids = [example["token_type_ids"] for example in examples]
        # the size of segment_ids (token_type_ids) varies due to randomness; pad zeros at the end, as in the original implementation
token_type_ids = pad_sequence(token_type_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id)
sop_label_list = [example["sentence_order_label"] for example in examples]
sentence_order_label = torch.stack(sop_label_list)
return {
"input_ids": input_ids,
"labels": labels,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
"sentence_order_label": sentence_order_label,
}
def mask_tokens(self, inputs: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Prepare masked tokens inputs/labels/attention_mask for masked language modeling: 80% MASK, 10% random, 10%
original. N-gram not applied yet.
"""
if self.tokenizer.mask_token is None:
raise ValueError(
"This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer."
)
labels = inputs.clone()
        # We sample a few tokens in each sequence for masked-LM training (with probability args.mlm_probability, which defaults to 0.15 as in BERT/RoBERTa)
probability_matrix = torch.full(labels.shape, self.mlm_probability)
special_tokens_mask = [
self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
if self.tokenizer._pad_token is not None:
padding_mask = labels.eq(self.tokenizer.pad_token_id)
probability_matrix.masked_fill_(padding_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
        # masked positions have probability `1`; however, in the ALBERT attention mask `0` means masked, so invert the values
attention_mask = (~masked_indices).float()
if self.tokenizer._pad_token is not None:
attention_padding_mask = labels.eq(self.tokenizer.pad_token_id)
attention_mask.masked_fill_(attention_padding_mask, value=1.0)
labels[~masked_indices] = -100 # We only compute loss on masked tokens, -100 is default for CE compute
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
return inputs, labels, attention_mask
@dataclass
class DataCollatorForPermutationLanguageModeling:
"""
Data collator used for permutation language modeling.
- collates batches of tensors, honoring their tokenizer's pad_token
- preprocesses batches for permutation language modeling with procedures specific to XLNet
"""
tokenizer: PreTrainedTokenizerBase
plm_probability: float = 1 / 6
max_span_length: int = 5 # maximum length of a span of masked tokens
def __call__(
self, examples: List[Union[List[int], torch.Tensor, Dict[str, torch.Tensor]]]
) -> Dict[str, torch.Tensor]:
if isinstance(examples[0], (dict, BatchEncoding)):
examples = [e["input_ids"] for e in examples]
batch = _collate_batch(examples, self.tokenizer)
inputs, perm_mask, target_mapping, labels = self.mask_tokens(batch)
return {"input_ids": inputs, "perm_mask": perm_mask, "target_mapping": target_mapping, "labels": labels}
def mask_tokens(self, inputs: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
"""
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
0. Start from the beginning of the sequence by setting ``cur_len = 0`` (number of tokens processed so far).
1. Sample a ``span_length`` from the interval ``[1, max_span_length]`` (length of span of tokens to be
masked)
2. Reserve a context of length ``context_length = span_length / plm_probability`` to surround span to be
masked
3. Sample a starting point ``start_index`` from the interval ``[cur_len, cur_len + context_length -
span_length]`` and mask tokens ``start_index:start_index + span_length``
4. Set ``cur_len = cur_len + context_length``. If ``cur_len < max_len`` (i.e. there are tokens remaining in
the sequence to be processed), repeat from Step 1.
"""
if self.tokenizer.mask_token is None:
raise ValueError(
"This tokenizer does not have a mask token which is necessary for permutation language modeling. Please add a mask token if you want to use this tokenizer."
)
if inputs.size(1) % 2 != 0:
raise ValueError(
"This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details."
)
labels = inputs.clone()
# Creating the mask and target_mapping tensors
masked_indices = torch.full(labels.shape, 0, dtype=torch.bool)
target_mapping = torch.zeros((labels.size(0), labels.size(1), labels.size(1)), dtype=torch.float32)
for i in range(labels.size(0)):
# Start from the beginning of the sequence by setting `cur_len = 0` (number of tokens processed so far).
cur_len = 0
max_len = labels.size(1)
while cur_len < max_len:
# Sample a `span_length` from the interval `[1, max_span_length]` (length of span of tokens to be masked)
span_length = torch.randint(1, self.max_span_length + 1, (1,)).item()
# Reserve a context of length `context_length = span_length / plm_probability` to surround the span to be masked
context_length = int(span_length / self.plm_probability)
# Sample a starting point `start_index` from the interval `[cur_len, cur_len + context_length - span_length]` and mask tokens `start_index:start_index + span_length`
start_index = cur_len + torch.randint(context_length - span_length + 1, (1,)).item()
masked_indices[i, start_index : start_index + span_length] = 1
# Set `cur_len = cur_len + context_length`
cur_len += context_length
# Since we're replacing non-masked tokens with -100 in the labels tensor instead of skipping them altogether,
            # the i-th prediction corresponds to the i-th token.
target_mapping[i] = torch.eye(labels.size(1))
special_tokens_mask = torch.tensor(
[self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()],
dtype=torch.bool,
)
masked_indices.masked_fill_(special_tokens_mask, value=0.0)
if self.tokenizer._pad_token is not None:
padding_mask = labels.eq(self.tokenizer.pad_token_id)
masked_indices.masked_fill_(padding_mask, value=0.0)
# Mask indicating non-functional tokens, where functional tokens are [SEP], [CLS], padding, etc.
non_func_mask = ~(padding_mask | special_tokens_mask)
inputs[masked_indices] = self.tokenizer.mask_token_id
labels[~masked_indices] = -100 # We only compute loss on masked tokens
perm_mask = torch.zeros((labels.size(0), labels.size(1), labels.size(1)), dtype=torch.float32)
for i in range(labels.size(0)):
# Generate permutation indices i.e. sample a random factorisation order for the sequence. This will
# determine which tokens a given token can attend to (encoded in `perm_mask`).
# Note: Length of token sequence being permuted has to be less than or equal to reused sequence length
# (see documentation for `mems`), otherwise information may leak through due to reuse. In this implementation,
# we assume that reused length is half of sequence length and permutation length is equal to reused length.
# This requires that the sequence length be even.
# Create a linear factorisation order
perm_index = torch.arange(labels.size(1))
# Split this into two halves, assuming that half the sequence is reused each time
perm_index = perm_index.reshape((-1, labels.size(1) // 2)).transpose(0, 1)
# Permute the two halves such that they do not cross over
perm_index = perm_index[torch.randperm(labels.size(1) // 2)]
# Flatten this out into the desired permuted factorisation order
perm_index = torch.flatten(perm_index.transpose(0, 1))
# Set the permutation indices of non-masked (non-functional) tokens to the
# smallest index (-1) so that:
# (1) They can be seen by all other positions
# (2) They cannot see masked positions, so there won't be information leak
perm_index.masked_fill_(~masked_indices[i] & non_func_mask[i], -1)
# The logic for whether the i-th token can attend on the j-th token based on the factorisation order:
# 0 (can attend): If perm_index[i] > perm_index[j] or j is neither masked nor a functional token
# 1 (cannot attend): If perm_index[i] <= perm_index[j] and j is either masked or a functional token
perm_mask[i] = (
perm_index.reshape((labels.size(1), 1)) <= perm_index.reshape((1, labels.size(1)))
) & masked_indices[i]
return inputs.long(), perm_mask, target_mapping, labels.long()
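# ---------------------------------------------------------------------------
# Illustrative sketch (not used by the collator above): the span-sampling loop
# from `mask_tokens`, isolated for a single sequence. The sequence length and
# hyperparameters are arbitrary example values.
# ---------------------------------------------------------------------------
def _example_plm_span_sampling(seq_len=64, plm_probability=1 / 6, max_span_length=5):
    """Return a 1D bool tensor marking positions selected for masking."""
    masked = torch.zeros(seq_len, dtype=torch.bool)
    cur_len = 0
    while cur_len < seq_len:
        span_length = torch.randint(1, max_span_length + 1, (1,)).item()
        # A span of length L sits inside a context of length L / p, so roughly a
        # fraction p of all tokens ends up masked.
        context_length = int(span_length / plm_probability)
        start_index = cur_len + torch.randint(context_length - span_length + 1, (1,)).item()
        masked[start_index : start_index + span_length] = True
        cur_len += context_length
    return masked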
@dataclass
class DataCollatorForBART(DataCollatorForLanguageModeling):
"""
    Data collator used for BART-style denoising language modeling.
    - collates batches of tensors, honoring their tokenizer's pad_token
    - preprocesses batches for masked language modeling
    - includes sentence permutation and whole word masking
"""
permute_sentence_ratio: float = 1.0
    decoder_start_token_id: Optional[int] = None
def __call__(
self, examples: List[Union[List[int], torch.Tensor, Dict[str, torch.Tensor]]]
) -> Dict[str, torch.Tensor]:
        assert self.decoder_start_token_id is not None, \
            "This collator requires decoder_start_token_id to be defined!"
input_attention_mask = None
batch = {}
if isinstance(examples[0], (dict, BatchEncoding)):
input_ids = [e["input_ids"] for e in examples]
input_attention_mask = [e["attention_mask"] for e in examples]
else:
input_ids = examples
examples = [{"input_ids": e} for e in examples]
if input_attention_mask is None:
batch_input = _collate_batch(input_ids, self.tokenizer)
else:
batch_input, input_attention_mask = _collate_batch(input_ids, self.tokenizer, input_attention_mask)
batch["attention_mask"] = input_attention_mask
max_length = batch_input.shape[1]
batch["labels"] = batch_input.clone()
batch["decoder_input_ids"] = shift_tokens_right(batch_input, self.tokenizer.pad_token_id, self.decoder_start_token_id)
if self.permute_sentence_ratio > 0.0:
batch_input = torch.stack([
self._permute_sentences(
input_id,
self.tokenizer._convert_token_to_id("."),
self.permute_sentence_ratio) for input_id in batch_input])
mask_labels = []
for i, input_id in enumerate(input_ids):
ref_tokens = []
for id in tolist(input_id):
token = self.tokenizer._convert_id_to_token(id)
ref_tokens.append(token)
            # @TODO need to permute examples[i]["chinese_ref"] according to the sentence permutation above
            # # For Chinese tokens, we need extra info to mark sub-words, e.g. [喜,欢] -> [喜,##欢]
# if "chinese_ref" in examples[i]:
# ref_pos = tolist(examples[i]["chinese_ref"])
# len_seq = input_id.size(0)
# for i in range(len_seq):
# if i in ref_pos:
# ref_tokens[i] = "##" + ref_tokens[i]
mask_labels.append(self._whole_word_mask(ref_tokens))
batch_mask = _collate_batch(mask_labels, self.tokenizer)
batch_input, input_attention_mask = self.mask_tokens_span(batch_input, batch_mask, input_attention_mask)
# Collate to max_length to match decoder inputs and labels
if input_attention_mask is None:
batch["input_ids"] = _collate_batch(batch_input, self.tokenizer, max_length=max_length)
else:
batch["input_ids"], batch["attention_mask"] = _collate_batch(
batch_input,
self.tokenizer,
input_attention_mask,
max_length)
return batch
def _whole_word_mask(self, input_tokens: List[str], max_predictions=512):
"""
Get 0/1 labels for masked tokens with whole word mask proxy
"""
cand_indexes = []
for (i, token) in enumerate(input_tokens):
if token == "[CLS]" or token == "[SEP]":
continue
if len(cand_indexes) >= 1 and (not token.startswith("Ġ") or token.startswith("##")): #@TODO hf error in start with token?
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
random.shuffle(cand_indexes)
num_to_predict = min(max_predictions, max(1, int(round(len(input_tokens) * self.mlm_probability))))
masked_lms = []
covered_indexes = set()
for index_set in cand_indexes:
if len(masked_lms) >= num_to_predict:
break
# If adding a whole-word mask would exceed the maximum number of
# predictions, then just skip this candidate.
if len(masked_lms) + len(index_set) > num_to_predict:
continue
is_any_index_covered = False
for index in index_set:
if index in covered_indexes:
is_any_index_covered = True
break
if is_any_index_covered:
continue
for index in index_set:
covered_indexes.add(index)
masked_lms.append(index)
assert len(covered_indexes) == len(masked_lms)
mask_labels = [1 if i in covered_indexes else 0 for i in range(len(input_tokens))]
return mask_labels
def mask_tokens_span(self, inputs: torch.Tensor, mask_labels: torch.Tensor, attention_mask) -> Tuple[torch.Tensor, torch.Tensor]:
"""
        Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
        Setting 'mask_labels' means we use whole word masking (wwm); indices are masked directly according to the
        provided reference.
"""
if self.tokenizer.mask_token is None:
raise ValueError(
"This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer."
)
        # We sample a few tokens in each sequence for masked-LM training (with probability args.mlm_probability, which defaults to 0.15 as in BERT/RoBERTa)
probability_matrix = mask_labels
special_tokens_mask = [
self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in inputs.tolist()
]
probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
if self.tokenizer._pad_token is not None:
padding_mask = inputs.eq(self.tokenizer.pad_token_id)
probability_matrix.masked_fill_(padding_mask, value=0.0)
masked_indices = probability_matrix.bool()
#@Todo we are now computing loss on all labels
# labels[~masked_indices] = self.tokenizer.pad_token_id # We only compute loss on masked tokens
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(inputs.shape, 0.8)).bool() & masked_indices
mask_token_id = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
inputs[indices_replaced] = mask_token_id
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(inputs.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), inputs.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
# return inputs, labels
        # Remove consecutive duplicate mask tokens: a single mask token represents the whole span.
        inputs_left_shift = torch.cat((inputs[:, 1:], torch.zeros((inputs.shape[0], 1), dtype=inputs.dtype)), dim=-1)
        mask_left_shift = torch.not_equal(inputs - inputs_left_shift, 0)
        mask = torch.cat((torch.full((inputs.shape[0], 1), True, dtype=torch.bool), mask_left_shift[:, :-1]), dim=-1) | torch.not_equal(inputs, mask_token_id)
inputs = [torch.masked_select(inputs[i,:], mask[i,:]) for i in range(inputs.shape[0])]
if attention_mask is not None:
attention_mask = [torch.masked_select(attention_mask[i, :], mask[i,:]) for i in range(attention_mask.shape[0])]
return inputs, attention_mask
def _permute_sentences(self, source, full_stop_index, p=1.0):
# Pretend it ends with a full stop so last span is a sentence
span_end = self.tokenizer.convert_tokens_to_ids(self.tokenizer.eos_token)
source[source == span_end] = full_stop_index
full_stops = source == full_stop_index
# Tokens that are full stops, where the previous token is not
sentence_ends = (full_stops[1:] * ~full_stops[:-1]).nonzero(as_tuple=False) + 2
result = source.clone()
num_sentences = sentence_ends.size(0)
num_to_permute = math.ceil((num_sentences * 2 * p) / 2.0)
substitutions = torch.randperm(num_sentences)[:num_to_permute]
ordering = torch.arange(0, num_sentences)
ordering[substitutions] = substitutions[torch.randperm(num_to_permute)]
# Ignore <bos> at start
index = 1
for i in ordering:
sentence = source[(sentence_ends[i - 1] if i > 0 else 1) : sentence_ends[i]]
result[index : index + sentence.size(0)] = sentence
index += sentence.size(0)
last_fullstop = (source == full_stop_index).nonzero(as_tuple=False)[-1]
        # Convert the last full stop back to the span-end (eos) token
source[last_fullstop] = span_end
result[last_fullstop] = span_end
return result |
PyTorch/SpeechRecognition/Jasper/utils | utils | download_librispeech | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python
import os
import argparse
import pandas as pd
from download_utils import download_file, md5_checksum, extract
parser = argparse.ArgumentParser(description='Download, verify and extract dataset files')
parser.add_argument('csv', type=str,
help='CSV file with urls and checksums to download.')
parser.add_argument('dest', type=str,
                    help='Download destination folder.')
parser.add_argument('-e', type=str, default=None,
                    help='Extraction destination folder. Defaults to the download folder if not provided.')
parser.add_argument('--skip_download', action='store_true',
help='Skip downloading the files')
parser.add_argument('--skip_checksum', action='store_true',
help='Skip checksum')
parser.add_argument('--skip_extract', action='store_true',
help='Skip extracting files')
args = parser.parse_args()
args.e = args.e or args.dest
df = pd.read_csv(args.csv, delimiter=',')
if not args.skip_download:
for url in df.url:
fname = url.split('/')[-1]
print("Downloading %s:" % fname)
download_file(url=url, dest_folder=args.dest, fname=fname)
else:
print("Skipping file download")
if not args.skip_checksum:
for index, row in df.iterrows():
url = row['url']
md5 = row['md5']
fname = url.split('/')[-1]
fpath = os.path.join(args.dest, fname)
print("Verifing %s: " % fname, end='')
ret = md5_checksum(fpath=fpath, target_hash=md5)
print("Passed" if ret else "Failed")
else:
print("Skipping checksum")
if not args.skip_extract:
for url in df.url:
fname = url.split('/')[-1]
fpath = os.path.join(args.dest, fname)
print("Decompressing %s:" % fpath)
extract(fpath=fpath, dest_folder=args.e)
else:
print("Skipping file extraction")
|
PyTorch/Translation/Transformer/fairseq/optim | optim | fairseq_optimizer | # Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the LICENSE file in
# the root directory of this source tree. An additional grant of patent rights
# can be found in the PATENTS file in the same directory.
#
#-------------------------------------------------------------------------
#
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch.optim
class FairseqOptimizer(object):
def __init__(self, args, params):
super().__init__()
self.args = args
self.params = params
@staticmethod
def add_args(parser):
"""Add optimizer-specific arguments to the parser."""
pass
@property
def optimizer(self):
"""Return a torch.optim.optimizer.Optimizer instance."""
if not hasattr(self, '_optimizer'):
raise NotImplementedError
if not isinstance(self._optimizer, torch.optim.Optimizer):
raise ValueError('_optimizer must be an instance of torch.optim.Optimizer')
return self._optimizer
@property
def optimizer_config(self):
"""
Return a kwarg dictionary that will be used to override optimizer
args stored in checkpoints. This allows us to load a checkpoint and
resume training using a different set of optimizer args, e.g., with a
different learning rate.
"""
raise NotImplementedError
def get_lr(self):
"""Return the current learning rate."""
return self.optimizer.param_groups[0]['lr']
def set_lr(self, lr):
"""Set the learning rate."""
for param_group in self.optimizer.param_groups:
param_group['lr'] = lr
def state_dict(self):
"""Return the optimizer's state dict."""
return self.optimizer.state_dict()
def load_state_dict(self, state_dict):
"""Load an optimizer state dict.
In general we should prefer the configuration of the existing optimizer
instance (e.g., learning rate) over that found in the state_dict. This
allows us to resume training from a checkpoint using a new set of
optimizer args.
"""
self.optimizer.load_state_dict(state_dict)
# override learning rate, momentum, etc. with latest values
for group in self.optimizer.param_groups:
group.update(self.optimizer_config)
def step(self, closure=None):
"""Performs a single optimization step."""
return self.optimizer.step(closure)
def zero_grad(self):
"""Clears the gradients of all optimized parameters."""
for group in self.optimizer.param_groups:
for p in group['params']:
p.grad = None
return self.optimizer.zero_grad()
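# ---------------------------------------------------------------------------
# Illustrative sketch (not part of fairseq): a minimal subclass showing how the
# base class is meant to be specialized. The `args.lr` and `args.adam_betas`
# attributes are assumptions made for the example.
# ---------------------------------------------------------------------------
class _ExampleAdamOptimizer(FairseqOptimizer):
    def __init__(self, args, params):
        super().__init__(args, params)
        self._optimizer = torch.optim.Adam(params, lr=args.lr, betas=args.adam_betas)
    @property
    def optimizer_config(self):
        # These values override whatever optimizer state was stored in a checkpoint.
        return {'lr': self.args.lr, 'betas': self.args.adam_betas}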
|
PyTorch/Classification/GPUNet/triton/08ms-D/runner | runner | start_NVIDIA-DGX-1-(1x-V100-32GB) | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
# Evaluate Runner
python3 -m "triton.08ms-D.runner.__main__" \
--config-path "triton/08ms-D/runner/config_NVIDIA-DGX-1-(1x-V100-32GB).yaml" \
--device 0 |
PyTorch/LanguageModeling/BERT/triton/dist6l/scripts | scripts | setup_parameters | #!/usr/bin/env bash
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
echo "Setting up deployment parameters"
export FORMAT="onnx"
export PRECISION="fp16"
export EXPORT_FORMAT="onnx"
export EXPORT_PRECISION="fp16"
export ACCELERATOR="trt"
export ACCELERATOR_PRECISION="fp16"
export CAPTURE_CUDA_GRAPH="0"
export BATCH_SIZE="16"
export MAX_BATCH_SIZE="16"
export MAX_SEQ_LENGTH="384"
export CHECKPOINT_VARIANT="dist-6l-qa"
export CHECKPOINT_DIR=${CHECKPOINTS_DIR}/${CHECKPOINT_VARIANT}
export TRITON_MAX_QUEUE_DELAY="1"
export TRITON_GPU_ENGINE_COUNT="1"
export TRITON_PREFERRED_BATCH_SIZES="1"
if [[ "${FORMAT}" == "ts-trace" || "${FORMAT}" == "ts-script" ]]; then
export CONFIG_FORMAT="torchscript"
else
export CONFIG_FORMAT="${FORMAT}"
fi
if [[ "${EXPORT_FORMAT}" == "trt" ]]; then
export FLAG="--fixed-batch-dim"
else
export FLAG=""
fi
if [[ "${FORMAT}" == "ts-trace" || "${FORMAT}" == "ts-script" ]]; then
export CONFIG_FORMAT="torchscript"
else
export CONFIG_FORMAT="${FORMAT}"
fi
if [[ "${FORMAT}" == "trt" ]]; then
export MBS="0"
else
export MBS="${MAX_BATCH_SIZE}"
fi
if [[ "${EXPORT_FORMAT}" == "ts-trace" || "${EXPORT_FORMAT}" == "ts-script" ]]; then
export FORMAT_SUFFIX="pt"
else
export FORMAT_SUFFIX="${EXPORT_FORMAT}"
fi |
PyTorch/Recommendation/DLRM/dlrm/data | data | feature_spec | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import yaml
import os
from typing import Dict
from typing import List
import numpy as np
from dlrm.data.defaults import CATEGORICAL_CHANNEL, NUMERICAL_CHANNEL, LABEL_CHANNEL, \
TRAIN_MAPPING, TEST_MAPPING, \
TYPE_SELECTOR, FEATURES_SELECTOR, FILES_SELECTOR, CARDINALITY_SELECTOR, DTYPE_SELECTOR, \
SPLIT_BINARY, \
get_categorical_feature_type
""" For performance reasons, numerical features are required to appear in the same order
in both source_spec and channel_spec.
For more detailed requirements, see the check_feature_spec method"""
class FeatureSpec:
def __init__(self, feature_spec=None, source_spec=None, channel_spec=None, metadata=None, base_directory=None):
self.feature_spec: Dict = feature_spec if feature_spec is not None else {}
self.source_spec: Dict = source_spec if source_spec is not None else {}
self.channel_spec: Dict = channel_spec if channel_spec is not None else {}
self.metadata: Dict = metadata if metadata is not None else {}
self.base_directory: str = base_directory
@classmethod
def from_yaml(cls, path):
with open(path, 'r') as feature_spec_file:
base_directory = os.path.dirname(path)
feature_spec = yaml.safe_load(feature_spec_file)
return cls.from_dict(feature_spec, base_directory=base_directory)
@classmethod
def from_dict(cls, source_dict, base_directory):
return cls(base_directory=base_directory, **source_dict)
def to_dict(self) -> Dict:
attributes_to_dump = ['feature_spec', 'source_spec', 'channel_spec', 'metadata']
return {attr: self.__dict__[attr] for attr in attributes_to_dump}
def to_string(self):
return yaml.dump(self.to_dict())
def to_yaml(self, output_path=None):
if not output_path:
output_path = self.base_directory + '/feature_spec.yaml'
with open(output_path, 'w') as output_file:
print(yaml.dump(self.to_dict()), file=output_file)
def get_number_of_numerical_features(self) -> int:
numerical_features = self.channel_spec[NUMERICAL_CHANNEL]
return len(numerical_features)
def cat_positions_to_names(self, positions: List[int]):
# Ordering needs to correspond to the one in get_categorical_sizes()
feature_names = self.get_categorical_feature_names()
return [feature_names[i] for i in positions]
def get_categorical_feature_names(self):
""" Provides the categorical feature names. The returned order should me maintained."""
return self.channel_spec[CATEGORICAL_CHANNEL]
def get_categorical_sizes(self) -> List[int]:
"""For a given feature spec, this function is expected to return the sizes in the order corresponding to the
order in the channel_spec section """
categorical_features = self.get_categorical_feature_names()
cardinalities = [self.feature_spec[feature_name][CARDINALITY_SELECTOR] for feature_name in
categorical_features]
return cardinalities
def check_feature_spec(self):
# TODO check if cardinality fits in dtype, check if base directory is set
# TODO split into two checking general and model specific requirements
# check that mappings are the ones expected
mapping_name_list = list(self.source_spec.keys())
assert sorted(mapping_name_list) == sorted([TEST_MAPPING, TRAIN_MAPPING])
# check that channels are the ones expected
channel_name_list = list(self.channel_spec.keys())
assert sorted(channel_name_list) == sorted([CATEGORICAL_CHANNEL, NUMERICAL_CHANNEL, LABEL_CHANNEL])
categorical_features_list = self.channel_spec[CATEGORICAL_CHANNEL]
numerical_features_list = self.channel_spec[NUMERICAL_CHANNEL]
label_features_list = self.channel_spec[LABEL_CHANNEL]
set_of_categorical_features = set(categorical_features_list)
set_of_numerical_features = set(numerical_features_list)
# check that exactly one label feature is selected
assert len(label_features_list) == 1
label_feature_name = label_features_list[0]
# check that lists in channel spec contain unique names
assert sorted(list(set_of_categorical_features)) == sorted(categorical_features_list)
assert sorted(list(set_of_numerical_features)) == sorted(numerical_features_list)
# check that all features used in channel spec are exactly ones defined in feature_spec
feature_spec_features = list(self.feature_spec.keys())
channel_spec_features = list(set.union(set_of_categorical_features,
set_of_numerical_features,
{label_feature_name}))
assert sorted(feature_spec_features) == sorted(channel_spec_features)
# check that correct dtypes are provided for all features
for feature_dict in self.feature_spec.values():
assert DTYPE_SELECTOR in feature_dict
try:
np.dtype(feature_dict[DTYPE_SELECTOR])
except TypeError:
assert False, "Type not understood by numpy"
# check that categorical features have cardinality provided
for feature_name, feature_dict in self.feature_spec.items():
if feature_name in set_of_categorical_features:
assert CARDINALITY_SELECTOR in feature_dict
assert isinstance(feature_dict[CARDINALITY_SELECTOR], int)
for mapping_name in [TRAIN_MAPPING, TEST_MAPPING]:
mapping = self.source_spec[mapping_name]
mapping_features = set()
for chunk in mapping:
# check that chunk has the correct type
assert chunk[TYPE_SELECTOR] == SPLIT_BINARY
contained_features = chunk[FEATURES_SELECTOR]
containing_files = chunk[FILES_SELECTOR]
# check that features are unique in mapping
for feature in contained_features:
assert feature not in mapping_features
mapping_features.add(feature)
                # check that each chunk has at least one feature
assert len(contained_features) >= 1
                # check that each chunk has exactly one file
assert len(containing_files) == 1
first_feature = contained_features[0]
if first_feature in set_of_categorical_features:
# check that each categorical feature is in a different file
assert len(contained_features) == 1
elif first_feature in set_of_numerical_features:
# check that numerical features are all in one chunk
assert sorted(contained_features) == sorted(numerical_features_list)
# check that ordering is exactly same as in channel spec - required for performance
assert contained_features == numerical_features_list
# check numerical dtype
for feature in contained_features:
assert np.dtype(self.feature_spec[feature][DTYPE_SELECTOR]) == np.float16
elif first_feature == label_feature_name:
# check that label feature is in a separate file
assert len(contained_features) == 1
# check label dtype
assert np.dtype(self.feature_spec[first_feature][DTYPE_SELECTOR]) == bool
else:
assert False, "Feature of unknown type"
# check that all features appeared in mapping
assert sorted(mapping_features) == sorted(feature_spec_features)
@staticmethod
def get_default_feature_spec(number_of_numerical_features, categorical_feature_cardinalities):
numerical_feature_fstring = "num_{}"
categorical_feature_fstring = "cat_{}.bin"
label_feature_name = "label"
numerical_file_name = "numerical.bin"
categorical_file_fstring = "{}" # TODO remove .bin from feature name, add to file name
label_file_name = "label.bin"
number_of_categorical_features = len(categorical_feature_cardinalities)
numerical_feature_names = [numerical_feature_fstring.format(i) for i in range(number_of_numerical_features)]
categorical_feature_names = [categorical_feature_fstring.format(i) for i in
range(number_of_categorical_features)]
cat_feature_types = [get_categorical_feature_type(int(cat_size)) for cat_size in
categorical_feature_cardinalities]
feature_dict = {f_name: {DTYPE_SELECTOR: str(np.dtype(f_type)), CARDINALITY_SELECTOR: f_size}
for f_name, f_type, f_size in
zip(categorical_feature_names, cat_feature_types, categorical_feature_cardinalities)}
for f_name in numerical_feature_names:
feature_dict[f_name] = {DTYPE_SELECTOR: str(np.dtype(np.float16))}
feature_dict[label_feature_name] = {DTYPE_SELECTOR: str(np.dtype(bool))}
channel_spec = {CATEGORICAL_CHANNEL: categorical_feature_names,
NUMERICAL_CHANNEL: numerical_feature_names,
LABEL_CHANNEL: [label_feature_name]}
source_spec = {}
for filename in (TRAIN_MAPPING, TEST_MAPPING):
source_spec[filename] = []
dst_folder = filename
numerical_file_path = os.path.join(dst_folder, numerical_file_name)
source_spec[filename].append({TYPE_SELECTOR: SPLIT_BINARY,
FEATURES_SELECTOR: numerical_feature_names,
FILES_SELECTOR: [numerical_file_path]})
label_file_path = os.path.join(dst_folder, label_file_name)
source_spec[filename].append({TYPE_SELECTOR: SPLIT_BINARY,
FEATURES_SELECTOR: [label_feature_name],
FILES_SELECTOR: [label_file_path]})
for feature_name in categorical_feature_names:
categorical_file_name = categorical_file_fstring.format(feature_name)
categorical_file_path = os.path.join(dst_folder, categorical_file_name)
source_spec[filename].append({TYPE_SELECTOR: SPLIT_BINARY,
FEATURES_SELECTOR: [feature_name],
FILES_SELECTOR: [categorical_file_path]})
return FeatureSpec(feature_spec=feature_dict, source_spec=source_spec, channel_spec=channel_spec, metadata={})
def get_mapping_paths(self, mapping_name: str):
label_feature_name = self.channel_spec[LABEL_CHANNEL][0]
set_of_categorical_features = set(self.channel_spec[CATEGORICAL_CHANNEL])
set_of_numerical_features = set(self.channel_spec[NUMERICAL_CHANNEL])
label_path = None
numerical_path = None
categorical_paths = dict()
for chunk in self.source_spec[mapping_name]:
local_path = os.path.join(self.base_directory, chunk[FILES_SELECTOR][0])
if chunk[FEATURES_SELECTOR][0] in set_of_numerical_features:
numerical_path = local_path
elif chunk[FEATURES_SELECTOR][0] in set_of_categorical_features:
local_feature = chunk[FEATURES_SELECTOR][0]
categorical_paths[local_feature] = local_path
elif chunk[FEATURES_SELECTOR][0] == label_feature_name:
label_path = local_path
return label_path, numerical_path, categorical_paths
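# ---------------------------------------------------------------------------
# Illustrative sketch (not used above): building and validating a default
# feature spec. The feature counts and cardinalities are made-up placeholders.
# ---------------------------------------------------------------------------
def _example_default_feature_spec():
    spec = FeatureSpec.get_default_feature_spec(
        number_of_numerical_features=13,
        categorical_feature_cardinalities=[7912889, 33823, 17139],
    )
    spec.base_directory = "/data/dlrm/binary_dataset"  # assumed output location
    spec.check_feature_spec()  # asserts channel/source/dtype consistency
    return spec.get_categorical_sizes()  # [7912889, 33823, 17139]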
|
PyTorch/Recommendation/DLRM/dlrm/cuda_ext | cuda_ext | fused_gather_embedding | # Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fused Buckle Embedding
"""
from absl import logging
import torch
from torch.autograd import Function
from dlrm.cuda_ext import fused_embedding
class BuckleEmbeddingFusedGatherFunction(Function):
"""Customized embedding gather """
@staticmethod
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
def forward(ctx, embedding, indices, offsets, amp_train):
output = fused_embedding.gather_gpu_fused_fwd(embedding, indices, offsets, amp_train)
ctx.save_for_backward(embedding, indices, offsets)
return output
@staticmethod
@torch.cuda.amp.custom_bwd
def backward(ctx, grad_output):
embedding, indices, offsets = ctx.saved_tensors
logging.log_first_n(logging.WARNING, "Highly specialized embedding for embedding_dim 128", 1)
grad_weights = fused_embedding.gather_gpu_fused_bwd(embedding, indices, offsets, grad_output)
return grad_weights, None, None, None
buckle_embedding_fused_gather = BuckleEmbeddingFusedGatherFunction.apply
|
TensorFlow2/Recommendation/DLRM_and_DCNv2 | DLRM_and_DCNv2 | dlrm | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# author: Tomasz Grel ([email protected])
from absl import app, flags
def define_dlrm_specific_flags():
flags.DEFINE_integer("batch_size", default=64 * 1024, help="Batch size used for training")
flags.DEFINE_integer("valid_batch_size", default=64 * 1024, help="Batch size used for validation")
flags.DEFINE_list("top_mlp_dims", [1024, 1024, 512, 256, 1], "Linear layer sizes for the top MLP")
flags.DEFINE_list("bottom_mlp_dims", [512, 256, 128], "Linear layer sizes for the bottom MLP")
flags.DEFINE_string("embedding_dim", default='128', help='Number of columns in the embedding tables')
flags.DEFINE_enum("optimizer", default="sgd", enum_values=['sgd', 'adam'],
help='The optimization algorithm to be used.')
flags.DEFINE_enum("interaction", default="dot_custom_cuda", enum_values=["dot_custom_cuda", "dot_tensorflow", "cross"],
help="Feature interaction implementation to use")
flags.DEFINE_float("learning_rate", default=24, help="Learning rate")
flags.DEFINE_float("beta1", default=0.9, help="Beta1 for the Adam optimizer")
flags.DEFINE_float("beta2", default=0.999, help="Bea2 for the Adam optimizer")
flags.DEFINE_integer("warmup_steps", default=8000,
help='Number of steps over which to linearly increase the LR at the beginning')
flags.DEFINE_integer("decay_start_step", default=48000, help='Optimization step at which to start the poly LR decay')
flags.DEFINE_integer("decay_steps", default=24000, help='Number of steps over which to decay from base LR to 0')
flags.DEFINE_integer("num_cross_layers", default=3, help='Number of cross layers for DCNv2')
flags.DEFINE_integer("cross_layer_projection_dim", default=512, help='Projection dimension used in the cross layers')
define_dlrm_specific_flags()
import main
def _main(argv):
main.main()
if __name__ == '__main__':
app.run(_main)
|
PyTorch/SpeechRecognition/wav2vec2/scripts | scripts | download_data | #!/usr/bin/env bash
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
: ${DATASET_DIR:=/datasets/}
: ${SUBSETS:="train-clean-100 train-clean-360 train-other-500 dev-clean dev-other test-clean test-other"}
python3 utils/download_librispeech.py $DATASET_DIR --subsets $SUBSETS
|
PyTorch/Segmentation/nnUNet/triton/deployment_toolkit | deployment_toolkit | args | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import inspect
import logging
from typing import Any, Callable, Dict, Optional, Union
from .core import GET_ARGPARSER_FN_NAME, load_from_file
LOGGER = logging.getLogger(__name__)
def str2bool(v):
if isinstance(v, bool):
return v
if v.lower() in ("yes", "true", "t", "y", "1"):
return True
elif v.lower() in ("no", "false", "f", "n", "0"):
return False
else:
raise argparse.ArgumentTypeError("Boolean value expected.")
def filter_fn_args(args: Union[dict, argparse.Namespace], fn: Callable) -> dict:
signature = inspect.signature(fn)
parameters_names = list(signature.parameters)
if isinstance(args, argparse.Namespace):
args = vars(args)
args = {k: v for k, v in args.items() if k in parameters_names}
return args
def add_args_for_fn_signature(parser, fn) -> argparse.ArgumentParser:
parser.conflict_handler = "resolve"
signature = inspect.signature(fn)
for parameter in signature.parameters.values():
if parameter.name in ["self", "args", "kwargs"]:
continue
argument_kwargs = {}
if parameter.annotation != inspect.Parameter.empty:
if parameter.annotation == bool:
argument_kwargs["type"] = str2bool
argument_kwargs["choices"] = [0, 1]
elif isinstance(parameter.annotation, type(Optional[Any])):
types = [type_ for type_ in parameter.annotation.__args__ if not isinstance(None, type_)]
if len(types) != 1:
raise RuntimeError(
f"Could not prepare argument parser for {parameter.name}: {parameter.annotation} in {fn}"
)
argument_kwargs["type"] = types[0]
else:
argument_kwargs["type"] = parameter.annotation
if parameter.default != inspect.Parameter.empty:
if parameter.annotation == bool:
argument_kwargs["default"] = str2bool(parameter.default)
else:
argument_kwargs["default"] = parameter.default
else:
argument_kwargs["required"] = True
name = parameter.name.replace("_", "-")
LOGGER.debug(f"Adding argument {name} with {argument_kwargs}")
parser.add_argument(f"--{name}", **argument_kwargs)
return parser
class ArgParserGenerator:
def __init__(self, cls_or_fn, module_path: Optional[str] = None):
self._cls_or_fn = cls_or_fn
self._handle = cls_or_fn if inspect.isfunction(cls_or_fn) else getattr(cls_or_fn, "__init__")
input_is_python_file = module_path and module_path.endswith(".py")
self._input_path = module_path if input_is_python_file else None
self._required_fn_name_for_signature_parsing = getattr(
cls_or_fn, "required_fn_name_for_signature_parsing", None
)
def update_argparser(self, parser):
name = self._handle.__name__
group_parser = parser.add_argument_group(name)
add_args_for_fn_signature(group_parser, fn=self._handle)
self._update_argparser(group_parser)
def get_args(self, args: argparse.Namespace):
filtered_args = filter_fn_args(args, fn=self._handle)
tmp_parser = argparse.ArgumentParser(allow_abbrev=False)
self._update_argparser(tmp_parser)
custom_names = [
p.dest.replace("-", "_") for p in tmp_parser._actions if not isinstance(p, argparse._HelpAction)
]
custom_params = {n: getattr(args, n) for n in custom_names}
filtered_args = {**filtered_args, **custom_params}
return filtered_args
def from_args(self, args: Union[argparse.Namespace, Dict]):
args = self.get_args(args)
LOGGER.info(f"Initializing {self._cls_or_fn.__name__}({args})")
return self._cls_or_fn(**args)
def _update_argparser(self, parser):
label = "argparser_update"
if self._input_path:
update_argparser_handle = load_from_file(self._input_path, label=label, target=GET_ARGPARSER_FN_NAME)
if update_argparser_handle:
update_argparser_handle(parser)
elif self._required_fn_name_for_signature_parsing:
fn_handle = load_from_file(
self._input_path, label=label, target=self._required_fn_name_for_signature_parsing
)
if fn_handle:
add_args_for_fn_signature(parser, fn_handle)
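# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the toolkit): deriving CLI arguments from an
# annotated function signature. `build_runner` is a hypothetical function.
# ---------------------------------------------------------------------------
def _example_argparser_generator():
    def build_runner(batch_size: int = 8, use_amp: bool = False):
        return {"batch_size": batch_size, "use_amp": use_amp}
    parser = argparse.ArgumentParser()
    generator = ArgParserGenerator(build_runner)
    generator.update_argparser(parser)  # adds --batch-size and --use-amp
    args = parser.parse_args(["--batch-size", "16", "--use-amp", "1"])
    return generator.from_args(args)  # calls build_runner(batch_size=16, use_amp=True)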
|
TensorFlow2/Segmentation/UNet_Medical/examples | examples | unet_INFER_TF-AMP | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script launches U-Net run in FP16 on 1 GPU for inference batch_size 1. Usage:
# bash unet_INFER_TF-AMP.sh <path to dataset> <path to results directory> <fold>
horovodrun -np 1 python main.py --data_dir $1 --model_dir $2 --batch_size 1 --exec_mode predict --xla --amp --fold $3
|
PyTorch/SpeechRecognition/QuartzNet/platform | platform | DGXA100_QuartzNet_AMP_8GPU | #!/bin/bash
set -a
: ${NUM_GPUS:=8}
: ${GPU_BATCH_SIZE:=72}
: ${GRAD_ACCUMULATION:=2}
: ${AMP:=true}
bash scripts/train.sh "$@"
|
PyTorch/Classification/GPUNet/triton/runner/maintainer/docker | docker | __init__ | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
TensorFlow2/Detection/Efficientdet/model | model | coco_metric | # Copyright 2020 Google Research. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""COCO-style evaluation metrics.
Implements the interface of COCO API and metric_fn in tf.TPUEstimator.
COCO API: github.com/cocodataset/cocoapi/
"""
import json
import os
from absl import logging
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import tensorflow as tf
import horovod.tensorflow.keras as hvd
from model import label_util
class EvaluationMetric():
"""COCO evaluation metric class.
This class cannot inherit from tf.keras.metrics.Metric due to numpy.
"""
def __init__(self, filename=None, testdev_dir=None, label_map=None):
"""Constructs COCO evaluation class.
The class provides the interface to metrics_fn in TPUEstimator. The
_update_op() takes detections from each image and push them to
self.detections. The _evaluate() loads a JSON file in COCO annotation format
as the groundtruth and runs COCO evaluation.
Args:
filename: Ground truth JSON file name. If filename is None, use
groundtruth data passed from the dataloader for evaluation. filename is
ignored if testdev_dir is not None.
testdev_dir: folder name for testdev data. If None, run eval without
groundtruth, and filename will be ignored.
label_map: a dict from id to class name. Used for per-class AP.
"""
self.label_map = label_map
self.filename = filename
self.testdev_dir = testdev_dir
self.metric_names = ['AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'ARmax1',
'ARmax10', 'ARmax100', 'ARs', 'ARm', 'ARl']
self.reset_states()
def reset_states(self):
"""Reset COCO API object."""
self.detections = []
self.dataset = {
'images': [],
'annotations': [],
'categories': []
}
self.image_id = 1
self.annotation_id = 1
self.category_ids = []
self.metric_values = None
def evaluate(self):
"""Evaluates with detections from all images with COCO API.
Returns:
coco_metric: float numpy array with shape [12] representing the
coco-style evaluation metrics.
"""
if self.filename:
coco_gt = COCO(self.filename)
else:
coco_gt = COCO()
coco_gt.dataset = self.dataset
coco_gt.createIndex()
if self.testdev_dir:
# Run on test-dev dataset.
box_result_list = []
for det in self.detections:
box_result_list.append({
'image_id': int(det[0]),
'category_id': int(det[6]),
'bbox': np.around(
det[1:5].astype(np.float64), decimals=2).tolist(),
'score': float(np.around(det[5], decimals=3)),
})
json.encoder.FLOAT_REPR = lambda o: format(o, '.3f')
      # Must be in the format of 'detections_test-dev2017_xxx_results'.
fname = 'detections_test-dev2017_test_results'
output_path = os.path.join(self.testdev_dir, fname + '.json')
logging.info('Writing output json file to: %s', output_path)
with tf.io.gfile.GFile(output_path, 'w') as fid:
json.dump(box_result_list, fid)
return np.array([-1.], dtype=np.float32)
else:
# Run on validation dataset.
detections = np.array(self.detections)
image_ids = list(set(detections[:, 0]))
coco_dt = coco_gt.loadRes(detections)
coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.params.imgIds = image_ids
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
coco_metrics = coco_eval.stats
if self.label_map:
# Get per_class AP, see pycocotools/cocoeval.py:334
# TxRxKxAxM: iouThrs x recThrs x catIds x areaRng x maxDets
        # Use areaRng_id=0 ('all') and maxDets_id=-1 (200) by default
precision = coco_eval.eval['precision'][:, :, :, 0, -1]
# Ideally, label_map should match the eval set, but it is possible that
        # some classes have no data in the eval set.
ap_perclass = [0] * max(precision.shape[-1], len(self.label_map))
for c in range(precision.shape[-1]): # iterate over all classes
precision_c = precision[:, :, c]
# Only consider values if > -1.
precision_c = precision_c[precision_c > -1]
ap_c = np.mean(precision_c) if precision_c.size else -1.
ap_perclass[c] = ap_c
coco_metrics = np.concatenate((coco_metrics, ap_perclass))
# Return the concat normal and per-class AP.
return np.array(coco_metrics, dtype=np.float32)
def result(self):
"""Return the metric values (and compute it if needed)."""
if self.metric_values is None:
self.metric_values = self.evaluate()
return self.metric_values
def update_state(self, groundtruth_data, detections):
"""Update detection results and groundtruth data.
Append detection results to self.detections to aggregate results from
all validation set. The groundtruth_data is parsed and added into a
dictionary with the same format as COCO dataset, which can be used for
evaluation.
Args:
groundtruth_data: Groundtruth annotations in a tensor with each row
representing [y1, x1, y2, x2, is_crowd, area, class].
detections: Detection results in a tensor with each row representing
[image_id, x, y, width, height, score, class].
"""
for i, det in enumerate(detections):
# Filter out detections with predicted class label = -1.
indices = np.where(det[:, -1] > -1)[0]
det = det[indices]
if det.shape[0] == 0:
continue
# Append groundtruth annotations to create COCO dataset object.
# Add images.
image_id = det[0, 0]
if image_id == -1:
image_id = self.image_id
det[:, 0] = image_id
self.detections.extend(det)
if not self.filename and not self.testdev_dir:
        # process groundtruth data only if filename is empty and there is no test_dev.
self.dataset['images'].append({
'id': int(image_id),
})
# Add annotations.
indices = np.where(groundtruth_data[i, :, -1] > -1)[0]
for data in groundtruth_data[i, indices]:
box = data[0:4]
is_crowd = data[4]
area = (box[3] - box[1]) * (box[2] - box[0])
category_id = data[6]
if category_id < 0:
break
self.dataset['annotations'].append({
'id': int(self.annotation_id),
'image_id': int(image_id),
'category_id': int(category_id),
'bbox': [box[1], box[0], box[3] - box[1], box[2] - box[0]],
'area': area,
'iscrowd': int(is_crowd)
})
self.annotation_id += 1
self.category_ids.append(category_id)
self.image_id += 1
if not self.filename:
self.category_ids = list(set(self.category_ids))
self.dataset['categories'] = [
{'id': int(category_id)} for category_id in self.category_ids
]
def gather(self):
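    """Gather detections from all Horovod workers onto every rank."""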
self.detections = hvd.allgather(self.detections)
def estimator_metric_fn(self, detections, groundtruth_data):
"""Constructs the metric function for tf.TPUEstimator.
For each metric, we return the evaluation op and an update op; the update op
is shared across all metrics and simply appends the set of detections to the
`self.detections` list. The metric op is invoked after all examples have
    been seen and computes the aggregate COCO metrics. See the API details at:
    https://www.tensorflow.org/api_docs/python/tf/contrib/learn/MetricSpec
Args:
detections: Detection results in a tensor with each row representing
[image_id, x, y, width, height, score, class]
groundtruth_data: Groundtruth annotations in a tensor with each row
representing [y1, x1, y2, x2, is_crowd, area, class].
Returns:
metrics_dict: A dictionary mapping from evaluation name to a tuple of
operations (`metric_op`, `update_op`). `update_op` appends the
detections for the metric to the `self.detections` list.
"""
with tf.name_scope('coco_metric'):
if self.testdev_dir:
update_op = tf.numpy_function(self.update_state,
[groundtruth_data, detections], [])
metrics = tf.numpy_function(self.result, [], tf.float32)
metrics_dict = {'AP': (metrics, update_op)}
return metrics_dict
else:
update_op = tf.numpy_function(self.update_state,
[groundtruth_data, detections], [])
metrics = tf.numpy_function(self.result, [], tf.float32)
metrics_dict = {}
for i, name in enumerate(self.metric_names):
metrics_dict[name] = (metrics[i], update_op)
if self.label_map:
# process per-class AP.
label_map = label_util.get_label_map(self.label_map)
for i, cid in enumerate(sorted(label_map.keys())):
name = 'AP_/%s' % label_map[cid]
metrics_dict[name] = (metrics[i + len(self.metric_names)],
update_op)
return metrics_dict
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/preproc | preproc | verify_criteo_downloaded | # Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#! /bin/bash
set -e
set -x
download_dir=${1:-'/data/dlrm/criteo'}
cd ${download_dir}
for i in $(seq 0 23); do
filename=day_${i}
if [ -f $filename ]; then
echo "$filename exists, OK"
else
echo "$filename does not exist. Please follow the instructions at: http://labs.criteo.com/2013/12/download-terabyte-click-logs/ to download it"
exit 1
fi
done
cd -
echo "Criteo data verified"
|
TensorFlow2/Segmentation/nnUNet | nnUNet | README | # nnU-Net For TensorFlow 2
This repository provides a script and recipe to train the nnU-Net model to achieve state-of-the-art accuracy. The content of this repository is tested and maintained by NVIDIA.
## Table Of Contents
- [Model overview](#model-overview)
* [Model architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
* [Mixed precision training](#mixed-precision-training)
* [Enabling mixed precision](#enabling-mixed-precision)
* [TF32](#tf32)
* [Glossary](#glossary)
- [Setup](#setup)
* [Requirements](#requirements)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
* [Command-line options](#command-line-options)
* [Getting the data](#getting-the-data)
* [Dataset guidelines](#dataset-guidelines)
* [Multi-dataset](#multi-dataset)
* [Training process](#training-process)
* [Inference process](#inference-process)
- [Performance](#performance)
* [Benchmarking](#benchmarking)
* [Training performance benchmark](#training-performance-benchmark)
* [Inference performance benchmark](#inference-performance-benchmark)
* [Results](#results)
* [Training accuracy results](#training-accuracy-results)
* [Training accuracy: NVIDIA DGX A100 (8x A100 80G)](#training-accuracy-nvidia-dgx-a100-8x-a100-80g)
* [Training accuracy: NVIDIA DGX-1 (8x V100 32G)](#training-accuracy-nvidia-dgx-1-8x-v100-32G)
* [Training performance results](#training-performance-results)
* [Training performance: NVIDIA DGX A100 (8x A100 80G)](#training-performance-nvidia-dgx-a100-8x-a100-80g)
* [Training performance: NVIDIA DGX-1 (8x V100 32G)](#training-performance-nvidia-dgx-1-8x-v100-32G)
* [Inference performance results](#inference-performance-results)
* [Inference performance: NVIDIA DGX A100 (1x A100 80G)](#inference-performance-nvidia-dgx-a100-1x-a100-80g)
* [Inference performance: NVIDIA DGX-1 (1x V100 32G)](#inference-performance-nvidia-dgx-1-1x-v100-32G)
- [Known issues](#known-issues)
- [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
The nnU-Net ("no-new-Net") refers to a robust and self-adapting framework for U-Net based medical image segmentation. This repository contains a nnU-Net implementation as described in the paper: [nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation](https://arxiv.org/abs/1809.10486).
The differences between this nnU-net and [the original model](https://github.com/MIC-DKFZ/nnUNet) are:
- Dynamic selection of patch size is not supported, and it has to be set in `data_preprocessing/configs.py` file.
- Cascaded U-Net is not supported.
- The following data augmentations are not used: rotation, simulation of low resolution, gamma augmentation.
This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Therefore, researchers can get results 2x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
### Model architecture
The nnU-Net allows training two types of networks: 2D U-Net and 3D U-Net to perform semantic segmentation of 2D or 3D images, with high accuracy and performance.
The following figure shows the architecture of the 3D U-Net model and its different components. U-Net is composed of a contractive and an expanding path, that aims at building a bottleneck in its centermost part through a combination of convolution, instance norm, and leaky ReLU operations. After this bottleneck, the image is reconstructed through a combination of convolutions and upsampling. Skip connections are added with the goal of helping the backward flow of gradients to improve the training.
<img src="images/unet3d.png" width="900"/>
*Figure 1: The 3D U-Net architecture*
### Default configuration
All convolution blocks in U-Net in both encoder and decoder are using two convolution layers followed by instance normalization and a leaky ReLU nonlinearity. For downsampling, we are using stride convolution whereas transposed convolution is used for upsampling.
All models were trained with the Adam optimizer. For loss function we use the average of [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy) and [dice coefficient](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient).
Used data augmentation: crop with oversampling the foreground class, mirroring, zoom, Gaussian noise, Gaussian blur, brightness.
### Feature support matrix
The following features are supported by this model:
| Feature | nnUNet
|-----------------------|--------------------------
|[DALI](https://docs.nvidia.com/deeplearning/dali/release-notes/index.html) | Yes
|Automatic mixed precision (AMP) | Yes
|Horovod Multi-GPU (NCCL) | Yes
|[XLA](https://www.tensorflow.org/xla) | Yes
#### Features
**DALI**
NVIDIA Data Loading Library (DALI) is a collection of optimized building blocks, and an execution engine, to speed up the pre-processing of the input data for deep learning applications. DALI provides both the performance and the flexibility for accelerating different data pipelines as a single library. This single library can then be integrated into different deep learning training and inference applications. For details, refer to example sources in this repository or refer to the [DALI documentation](https://docs.nvidia.com/deeplearning/dali/index.html).
**Automatic Mixed Precision (AMP)**
Computation graphs can be modified by TensorFlow during runtime to support mixed precision training, which allows using FP16 training with FP32 master weights. A detailed explanation of mixed precision can be found in the next section.
**Multi-GPU training with Horovod**
Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, refer to the [Horovod: Official repository](https://github.com/horovod/horovod).
Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, refer to the example scripts in this repository or to the [Horovod usage guide](https://github.com/horovod/horovod/#usage).
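As a hedged illustration of this mechanism (not the repository's actual training loop; `model`, `optimizer`, `loss_fn`, and the input tensors are placeholders), a Horovod-enabled custom training step in TensorFlow 2 typically looks like this:
```
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()
# Pin each worker process to its own GPU.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

@tf.function
def train_step(model, optimizer, loss_fn, images, labels, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(images, training=True))
    # Average gradients across all workers with NCCL allreduce.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Start all workers from identical weights and optimizer state.
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(optimizer.variables(), root_rank=0)
    return loss
```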
**XLA**
XLA (Accelerated Linear Algebra) is a compiler that can speed up TensorFlow networks through model-specific optimizations, for example, fusing many GPU operations together.
Operations fused into a single GPU kernel do not have to use extra memory to store intermediate values by keeping them in GPU registers, thus reducing memory operations and improving performance. For details refer to the [TensorFlow documentation](https://www.tensorflow.org/xla).
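As a minimal illustration of XLA fusion in TensorFlow 2 (the function below is a toy placeholder, not part of this repository):
```
import tensorflow as tf

@tf.function(jit_compile=True)  # compile this function with XLA
def fused_scale_act(x, scale):
    # Element-wise ops like these can be fused into a single GPU kernel,
    # avoiding materialization of intermediate tensors in global memory.
    return tf.nn.leaky_relu(x * scale + 1.0, alpha=0.01)

y = fused_scale_act(tf.random.normal([2, 128, 128, 32]), tf.constant(0.5))
```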
### Mixed precision training
Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in the half-precision format while storing minimal information in single-precision to keep as much information as possible in critical parts of the network. Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores) in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x speedup on the most intense model architectures. Using mixed precision training requires two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full [mixed precision methodology](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#tensorflow) in your existing TensorFlow model code. AMP enables mixed precision training on NVIDIA Volta, NVIDIA Turing, and NVIDIA Ampere GPU architectures automatically. The TensorFlow framework code makes all necessary model changes internally.
In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling.
For information about:
* How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) documentation.
* Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
#### Enabling mixed precision
Mixed precision is enabled in TensorFlow by using the Automatic Mixed Precision (TF-AMP) extension which casts variables to half-precision upon retrieval, while storing variables in single-precision format. Furthermore, to preserve small gradient magnitudes in backpropagation, a [loss scaling](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#lossscaling) step must be included when applying gradients. In TensorFlow, loss scaling can be applied statically by using simple multiplication of loss by a constant value or automatically, by TF-AMP. Automatic mixed precision makes all the adjustments internally in TensorFlow, providing two benefits over manual operations. First, programmers need not modify network model code, reducing development and maintenance efforts. Second, using AMP maintains forward and backward compatibility with all the APIs for defining and running TensorFlow models.
Example nnU-Net scripts for training, inference, and benchmarking from the `scripts/` directory enable mixed precision if the `--amp` command line flag is used.
Internally, mixed precision is enabled by setting `keras.mixed_precision` policy to `mixed_float16`. Additionally, our custom training loop uses a `LossScaleOptimizer` wrapper for the optimizer. For more information see the [Mixed precision guide](#mixed-precision-training).
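The snippet below is a minimal sketch of that mechanism with a placeholder Adam optimizer; it illustrates the global policy and the loss-scaling wrapper mentioned above rather than reproducing the repository's exact setup:
```
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Compute in FP16 while keeping variables in FP32.
mixed_precision.set_global_policy('mixed_float16')

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-4)
# Dynamic loss scaling protects small gradients from underflowing in FP16.
optimizer = mixed_precision.LossScaleOptimizer(optimizer)

# In a custom training loop, the loss is scaled and the gradients unscaled:
#   scaled_loss = optimizer.get_scaled_loss(loss)
#   scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
#   grads = optimizer.get_unscaled_gradients(scaled_grads)
```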
#### TF32
TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math, also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on NVIDIA Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require a high dynamic range for weights or activations.
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
### Glossary
**Deep supervision**
Deep supervision is a technique that adds auxiliary loss outputs to the U-Net decoder layers. For nnU-Net, we add auxiliary losses to the three last decoder levels. The final loss is a weighted average of the obtained loss values. Deep supervision can be enabled by adding the `--deep-supervision` flag.
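The following is a minimal sketch of that weighted average for a 2D network with up to three auxiliary heads; the weights, the resizing of the auxiliary outputs, and the function names are illustrative assumptions rather than the repository's exact implementation (see `runtime/losses.py`):
```
import tensorflow as tf

def deep_supervision_loss(loss_fn, y_true, main_pred, aux_preds):
    # One loss term for the full-resolution head plus one per auxiliary head
    # attached to an intermediate decoder level.
    losses = [loss_fn(y_true, main_pred)]
    for aux in aux_preds:
        aux = tf.image.resize(aux, tf.shape(y_true)[1:3])
        losses.append(loss_fn(y_true, aux))
    # Weighted average with the largest weight on the final output
    # (illustrative weights, up to three auxiliary heads).
    weights = tf.constant([1.0, 0.5, 0.25, 0.125][:len(losses)])
    weights = weights / tf.reduce_sum(weights)
    return tf.reduce_sum(weights * tf.stack(losses))
```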
**Test time augmentation**
Test time augmentation is an inference technique that averages the prediction for the original image with predictions for its augmented versions. As a result, predictions are more accurate, but at the cost of a slower inference process. For nnU-Net, we use all possible flip combinations for image augmentation. Test time augmentation can be enabled by adding the `--tta` flag to the training or inference script invocation.
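A minimal sketch of flip-based test time augmentation for a 3D volume is shown below; `predict_fn` is a placeholder for the model's predictor, not an actual function from this repository:
```
import itertools
import tensorflow as tf

def tta_predict(predict_fn, volume):
    # `volume` has shape [batch, D, H, W, C]; axes 1-3 are spatial.
    spatial_axes = (1, 2, 3)
    preds = []
    for k in range(len(spatial_axes) + 1):
        for axes in itertools.combinations(spatial_axes, k):
            flipped = tf.reverse(volume, axis=list(axes)) if axes else volume
            pred = predict_fn(flipped)
            # Undo the flip on the prediction before averaging.
            preds.append(tf.reverse(pred, axis=list(axes)) if axes else pred)
    return tf.add_n(preds) / float(len(preds))
```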
**Sliding window inference**
During inference, this method replaces an arbitrary resolution input image with a batch of overlapping windows, that cover the whole input. After passing this batch through the network a prediction with the original resolution is reassembled. Predicted values inside overlapped regions are obtained from a weighted average.
Overlap ratio and weights for the average (i.e. blending mode) can be adjusted with the `--overlap` and `--blend-mode` options.
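As a rough illustration of how the overlap ratio translates into window placement along one axis (the actual implementation lives in `models/sliding_window.py` and additionally handles the Gaussian/constant blending weights):
```
def window_starts(image_size, window_size, overlap=0.5):
    stride = max(1, int(window_size * (1 - overlap)))
    starts = list(range(0, max(image_size - window_size, 0) + 1, stride))
    # Ensure the last window reaches the end of the volume.
    if starts[-1] + window_size < image_size:
        starts.append(image_size - window_size)
    return starts

print(window_starts(image_size=300, window_size=128, overlap=0.5))
# [0, 64, 128, 172]
```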
## Setup
The following section lists the requirements that you need to meet in order to start training the nnU-Net model.
### Requirements
This repository contains a Dockerfile that extends the TensorFlow 2 NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- TensorFlow2 22.11-py3+ NGC container
- Supported GPUs:
- [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
- [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/)
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, see the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation:
- [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
- [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#accessing_registry)
- Running [TensorFlow](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/running.html#running)
For those unable to use the TensorFlow 2 NGC container, to set up the required environment or create your own container, see the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html).
## Quick Start Guide
To train your model using mixed or TF32 precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the nnUNet model on the [Medical Segmentation Decathlon](http://medicaldecathlon.com/) dataset. For the specifics on training and inference, see the [Advanced](#advanced) section.
1. Clone the repository.
Executing this command will create your local repository with all the code to run nnU-Net.
```
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/TensorFlow2/Segmentation/nnUNet
```
2. Build the nnU-Net TensorFlow2 NGC container.
This command will use the Dockerfile to create a Docker image named `nnunet`, downloading all the required components.
```
docker build -t nnunet .
```
The NGC container contains all the components optimized for usage on NVIDIA hardware.
3. Start an interactive session in the NGC container to run preprocessing/training/inference.
The following command will launch the container and mount the `./data` directory as a volume to the `/data` directory inside the container, and `./results` directory to the `/results` directory in the container.
```
mkdir data results
docker run -it --privileged --runtime=nvidia --shm-size=8g --ulimit memlock=-1 --ulimit stack=67108864 --rm -v ${PWD}/data:/data -v ${PWD}/results:/results nnunet:latest /bin/bash
```
4. Prepare the BraTS (MSD 01 task) dataset.
To download and preprocess the data run:
```
python download.py --task 01
python preprocess.py --task 01 --dim 3
python preprocess.py --task 01 --dim 2
```
Then `ls /data` should print:
```
01_3d_tf2 01_2d_tf2 Task01_BrainTumour
```
For the specifics on data preprocessing, see the [Getting the data](#getting-the-data) section.
5. Start training.
Training can be started with:
```
python scripts/train.py --gpus <gpus> --fold <fold> --dim <dim> [--amp]
```
To see descriptions of the train script arguments run `python scripts/train.py --help`. You can customize the training process. For details, see the [Training process](#training-process) section.
6. Start benchmarking.
The training and inference performance can be evaluated by using benchmarking scripts, such as:
```
python scripts/benchmark.py --mode {train,predict} --gpus <ngpus> --dim {2,3} --batch-size <bsize> [--amp]
```
To see descriptions of the benchmark script arguments run `python scripts/benchmark.py --help`.
7. Start inference/predictions.
Inference can be started with:
```
python scripts/inference.py --data <path/to/data> --dim <dim> --fold <fold> --ckpt-dir <path/to/checkpoint> [--amp] [--tta] [--save-preds]
```
Note: You have to prepare either a validation or test dataset to run this script, by running `python preprocess.py --task 01 --dim {2,3} --exec_mode {val,test}`. After preprocessing, a `val` or `test` directory with data ready for inference is created inside the given task directory (e.g. `/data/01_3d` for task 01 and dim 3). Possible workflow:
```
python preprocess.py --task 01 --dim 3 --exec_mode val
python scripts/inference.py --data /data/01_3d/val --dim 3 --fold 0 --ckpt-dir <path/to/checkpoint> --amp --tta --save-preds
```
Then if you have labels for predicted images you can evaluate them with `evaluate.py` script. For example:
```
python evaluate.py --preds /results/preds_task_01_dim_3_fold_0_amp_tta --lbls /data/Task01_BrainTumour/labelsTr
```
To see descriptions of the inference script arguments run `python scripts/inference.py --help`. You can customize the inference process. For details, see the [Inference process](#inference-process) section.
Now that you have your model trained and evaluated, you can choose to compare your training results with our [Training accuracy results](#training-accuracy-results). You can also compare your performance against the [Training performance benchmark](#training-performance-results) or the [Inference performance benchmark](#inference-performance-results). Following the steps in these sections will ensure that you achieve the same accuracy and performance results as stated in the [Results](#results) section.
## Advanced
The following sections provide greater details of the dataset, running training and inference, and the training results.
### Scripts and sample code
In the root directory, the most important files are:
* `main.py`: Entry point to the application. Runs training, evaluation, inference or benchmarking.
* `preprocess.py`: Entry point to data preprocessing.
* `download.py`: Downloads given dataset from [Medical Segmentation Decathlon](http://medicaldecathlon.com/).
* `Dockerfile`: Container with the basic set of dependencies to run nnU-Net.
* `requirements.txt:` Set of extra requirements for running nnU-Net.
* `evaluate.py`: Compare predictions with ground truth and get the final score.
The `data_preprocessing` folder contains information about the data preprocessing used by nnU-Net. Its contents are:
* `configs.py`: Defines dataset configuration like patch size or spacing.
* `preprocessor.py`: Implements data preprocessing pipeline.
The `data_loading` folder contains information about the data-loading pipeline used by nnU-Net. Its contents are:
* `data_module.py`: Defines a data module managing datasets and splits (similar to PyTorch Lightning `DataModule`)
* `dali_loader.py`: Implements DALI data loading pipelines.
* `utils.py`: Defines auxiliary functions used for data loading.
The `models` folder contains information about the building blocks of nnU-Net and the way they are assembled. Its contents are:
* `layers.py`: Implements convolution blocks used by the U-Net template.
* `nn_unet.py`: Implements training/validation/test logic and dynamic creation of U-Net architecture used by nnU-Net.
* `sliding_window.py`: Implements sliding window inference used by evaluation and prediction loops.
* `unet.py`: Implements the U-Net template.
The `runtime` folder contains information about training, inference, and evaluation logic. Its contents are:
* `args.py`: Defines command line arguments.
* `checkpoint.py`: Implements checkpoint saving.
* `logging.py`: Defines logging utilities along with wandb.io integration.
* `losses.py`: Implements loss functions.
* `metrics.py`: Implements dice metric and metric management.
* `run.py`: Implements training loop and inference and evaluation logic.
* `utils.py`: Defines auxiliary functions used during runtime.
Other folders included in the root directory are:
* `images/`: Contains a model diagram.
* `scripts/`: Provides scripts for training, benchmarking, and inference of nnU-Net.
### Command-line options
To see the full list of available options and their descriptions, use the `-h` or `--help` command-line option, for example:
`python main.py --help`
The following example output is printed when running the model:
```
usage: main.py [-h] [--exec-mode {train,evaluate,predict,export,nav}] [--gpus GPUS] [--data DATA] [--task TASK] [--dim {2,3}]
[--seed SEED] [--benchmark] [--tta [BOOLEAN]] [--save-preds [BOOLEAN]] [--sw-benchmark [BOOLEAN]]
[--results RESULTS] [--logname LOGNAME] [--quiet] [--use-dllogger [BOOLEAN]] [--amp [BOOLEAN]] [--xla [BOOLEAN]]
[--read-roi [BOOLEAN]] [--batch-size BATCH_SIZE] [--learning-rate LEARNING_RATE] [--momentum MOMENTUM]
[--scheduler {none,poly,cosine,cosine_annealing}] [--end-learning-rate END_LEARNING_RATE]
[--cosine-annealing-first-cycle-steps COSINE_ANNEALING_FIRST_CYCLE_STEPS]
[--cosine-annealing-peak-decay COSINE_ANNEALING_PEAK_DECAY] [--optimizer {sgd,adam,radam}]
[--deep-supervision [BOOLEAN]] [--lookahead [BOOLEAN]] [--weight-decay WEIGHT_DECAY]
[--loss-batch-reduction [BOOLEAN]] [--loss-include-background [BOOLEAN]] [--negative-slope NEGATIVE_SLOPE]
[--norm {instance,batch,group,none}] [--ckpt-strategy {last_and_best,last_only,none}] [--ckpt-dir CKPT_DIR]
[--saved-model-dir SAVED_MODEL_DIR] [--resume-training] [--load_sm [BOOLEAN]] [--validate [BOOLEAN]] [--nvol NVOL]
[--oversampling OVERSAMPLING] [--num-workers NUM_WORKERS] [--sw-batch-size SW_BATCH_SIZE] [--overlap OVERLAP]
[--blend {gaussian,constant}] [--nfolds NFOLDS] [--fold FOLD] [--epochs EPOCHS] [--skip-eval SKIP_EVAL]
[--steps-per-epoch STEPS_PER_EPOCH] [--bench-steps BENCH_STEPS] [--warmup-steps WARMUP_STEPS]
optional arguments:
-h, --help show this help message and exit
--exec-mode {train,evaluate,predict,export,nav}, --exec_mode {train,evaluate,predict,export,nav}
Execution mode to run the model (default: train)
--gpus GPUS
--data DATA Path to data directory (default: /data)
--task TASK Task number, MSD uses numbers 01-10 (default: 01)
--dim {2,3} UNet dimension (default: 3)
--seed SEED Random seed (default: None)
--benchmark Run model benchmarking (default: False)
--tta [BOOLEAN] Enable test time augmentation (default: False)
--save-preds [BOOLEAN], --save_preds [BOOLEAN]
Save predictions (default: False)
--sw-benchmark [BOOLEAN], --sw_benchmark [BOOLEAN]
--results RESULTS Path to results directory (default: /results)
--logname LOGNAME DLLogger output filename (default: dllogger.json)
--quiet Minimalize stdout/stderr output (default: False)
--use-dllogger [BOOLEAN], --use_dllogger [BOOLEAN]
Use DLLogger logging (default: True)
--amp [BOOLEAN] Enable automatic mixed precision (default: False)
--xla [BOOLEAN] Enable XLA compiling (default: False)
--read-roi [BOOLEAN], --read_roi [BOOLEAN]
Use DALI direct ROI loading feature (default: False)
--batch-size BATCH_SIZE, --batch_size BATCH_SIZE
Batch size (default: 2)
--learning-rate LEARNING_RATE, --learning_rate LEARNING_RATE
Learning rate (default: 0.0003)
--momentum MOMENTUM Momentum factor (SGD only) (default: 0.99)
--scheduler {none,poly,cosine,cosine_annealing}
Learning rate scheduler (default: none)
--end-learning-rate END_LEARNING_RATE
End learning rate for poly scheduler (default: 5e-05)
--cosine-annealing-first-cycle-steps COSINE_ANNEALING_FIRST_CYCLE_STEPS
Length of a cosine decay cycle in steps, only with 'cosine_annealing' scheduler (default: 512)
--cosine-annealing-peak-decay COSINE_ANNEALING_PEAK_DECAY
Multiplier reducing initial learning rate (default: 0.95)
--optimizer {sgd,adam,radam}
Optimizer (default: adam)
--deep-supervision [BOOLEAN], --deep_supervision [BOOLEAN]
Use deep supervision. (default: False)
--lookahead [BOOLEAN]
Use Lookahead with the optimizer (default: False)
--weight-decay WEIGHT_DECAY, --weight_decay WEIGHT_DECAY
Weight decay (L2 penalty) (default: 0.0001)
--loss-batch-reduction [BOOLEAN]
Reduce batch dimension first during loss calculation (default: True)
--loss-include-background [BOOLEAN]
Include background class to loss calculation (default: False)
--negative-slope NEGATIVE_SLOPE
Negative slope for LeakyReLU (default: 0.01)
--norm {instance,batch,group,none}
Type of normalization layers (default: instance)
--ckpt-strategy {last_and_best,last_only,none}
Strategy how to save checkpoints (default: last_and_best)
--ckpt-dir CKPT_DIR Path to checkpoint directory (default: /results/ckpt)
--saved-model-dir SAVED_MODEL_DIR
Path to saved model directory (for evaluation and prediction) (default: None)
--resume-training, --resume_training
Resume training from the last checkpoint (default: False)
--load_sm [BOOLEAN] Load exported savedmodel (default: False)
--validate [BOOLEAN] Validate exported savedmodel (default: False)
--nvol NVOL Number of volumes which come into single batch size for 2D model (default: 2)
--oversampling OVERSAMPLING
Probability of crop to have some region with positive label (default: 0.33)
--num-workers NUM_WORKERS
Number of subprocesses to use for data loading (default: 8)
--sw-batch-size SW_BATCH_SIZE
Sliding window inference batch size (default: 2)
--overlap OVERLAP Amount of overlap between scans during sliding window inference (default: 0.5)
--blend {gaussian,constant}, --blend-mode {gaussian,constant}
How to blend output of overlapping windows (default: gaussian)
--nfolds NFOLDS Number of cross-validation folds (default: 5)
--fold FOLD Fold number (default: 0)
--epochs EPOCHS Number of epochs (default: 1000)
--skip-eval SKIP_EVAL
Skip evaluation for the first N epochs. (default: 0)
--steps-per-epoch STEPS_PER_EPOCH
Steps per epoch. By default ceil(training_dataset_size / batch_size / gpus) (default: None)
--bench-steps BENCH_STEPS
Number of benchmarked steps in total (default: 100)
--warmup-steps WARMUP_STEPS
Number of warmup steps before collecting benchmarking statistics (default: 25)
```
### Getting the data
The nnU-Net model was trained on the [Medical Segmentation Decathlon](http://medicaldecathlon.com/) datasets. All datasets are in Neuroimaging Informatics Technology Initiative (NIfTI) format.
#### Dataset guidelines
To train nnU-Net you will need to preprocess your dataset as the first step with `preprocess.py` script. Run `python scripts/preprocess.py --help` to see descriptions of the preprocess script arguments.
For example to preprocess data for 3D U-Net run: `python preprocess.py --task 01 --dim 3`.
In `data_preprocessing/configs.py` for each [Medical Segmentation Decathlon](http://medicaldecathlon.com/) task, there are defined: patch sizes, precomputed spacings and statistics for CT datasets.
The preprocessing pipeline consists of the following steps:
1. Cropping to the region of non-zero values.
2. Resampling to the median voxel spacing of their respective dataset (exception for anisotropic datasets where the lowest resolution axis is selected to be the 10th percentile of the spacings).
3. Padding volumes so that dimensions are at least as large as the patch size.
4. Normalizing (a minimal sketch of this step follows the list):
* For CT modalities the voxel values are clipped to 0.5 and 99.5 percentiles of the foreground voxels and then data is normalized with mean and standard deviation collected from foreground voxels.
* For MRI modalities z-score normalization is applied.
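Below is a minimal sketch of the two normalization branches; for brevity it computes per-volume statistics, whereas the actual pipeline in `data_preprocessing/preprocessor.py` uses the precomputed dataset-wide statistics for CT mentioned above:
```
import numpy as np

def normalize(volume, modality, foreground_mask):
    fg = volume[foreground_mask]
    if modality == 'CT':
        # Clip to the 0.5 / 99.5 foreground percentiles, then standardize
        # with the foreground mean and standard deviation.
        low, high = np.percentile(fg, [0.5, 99.5])
        volume = np.clip(volume, low, high)
        return (volume - fg.mean()) / (fg.std() + 1e-8)
    # MRI: plain z-score normalization.
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```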
#### Multi-dataset
It is possible to run nnUNet on a custom dataset. If your dataset corresponds to [Medical Segmentation Decathlon](http://medicaldecathlon.com/) (i.e. data should be in `NIfTi` format and there should be `dataset.json` file where you need to provide fields: modality, labels, and at least one of training, test) you need to perform the following:
1. Mount your dataset to `/data` directory.
2. In `data_preprocessing/config.py`:
- Add to the `task_dir` dictionary your dataset directory name. For example, for the Brain Tumour dataset, it corresponds to `"01": "Task01_BrainTumour"`.
    - Add the patch size that you want to use for training to the `patch_size` dictionary. For example, for the Brain Tumour dataset it corresponds to `"01_3d": [128, 128, 128]` for 3D U-Net and `"01_2d": [192, 160]` for 2D U-Net. There are two types of suffixes, `_3d` and `_2d`, corresponding to 3D U-Net and 2D U-Net.
3. Preprocess your data with `preprocess.py` scripts. For example, to preprocess the Brain Tumour dataset for 2D U-Net you should run `python preprocess.py --task 01 --dim 2`.
### Training process
The model trains for at most `--epochs` epochs. After each epoch, evaluation on the validation set is performed and validation metrics are monitored for checkpoint updating (see the `--ckpt-strategy` flag). Default training settings are:
* The Adam optimizer with a learning rate of 0.0003 and weight decay of 0.0001.
* Training batch size is set to 2.
This default parametrization is applied when running scripts from the `scripts/` directory and when running `main.py` without explicitly overriding these parameters. By default, using scripts from the `scripts` directory will not use AMP. To enable AMP, pass the `--amp` flag. AMP can be enabled for every mode of execution. However, a custom invocation of the `main.py` script will turn on AMP by default. To turn it off use `main.py --amp false`.
The default configuration minimizes a function `L = (1 - dice_coefficient) + cross_entropy` during training and reports achieved convergence as the [dice coefficient](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient) per class. Training with a combination of dice and cross-entropy has been shown to achieve better convergence than training with dice alone.
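A minimal sketch of this objective for channels-last 3D tensors is shown below; it assumes softmax logits and omits the background-exclusion and deep-supervision handling of the repository's `runtime/losses.py`:
```
import tensorflow as tf

def dice_ce_loss(y_true_onehot, y_pred_logits, eps=1e-6):
    # Shapes: [batch, D, H, W, num_classes], channels-last.
    probs = tf.nn.softmax(y_pred_logits, axis=-1)
    spatial_axes = [1, 2, 3]
    intersection = tf.reduce_sum(y_true_onehot * probs, axis=spatial_axes)
    denom = tf.reduce_sum(y_true_onehot + probs, axis=spatial_axes)
    dice = tf.reduce_mean((2.0 * intersection + eps) / (denom + eps))
    ce = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        labels=y_true_onehot, logits=y_pred_logits))
    return (1.0 - dice) + ce
```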
The training can be run without using the predefined scripts. The name of the training script is `main.py`. For example:
```
python main.py --exec-mode train --task 01 --fold 0
```
Training artifacts will be saved to `/results` in the container. Some important artifacts are:
* `/results/dllogger.json`: Collected dice scores and loss values evaluated after each epoch during training on a validation set.
* `/results/ckpt/`: Saved checkpoints. By default, two checkpoints are kept - the most recent one (updated after each epoch) and the one with the highest validation dice (saved in the `/results/ckpt/best/` subdirectory). You can change this behavior by modifying the `--ckpt-strategy` parameter.
To load the pretrained model, provide `--ckpt-dir <path/to/checkpoint/directory>` and use `--resume-training` if you intend to continue training.
To use multi-GPU training with the `main.py` script prepend the command with `horovodrun -np <ngpus>`, for example with 8 GPUs use:
```
horovodrun -np 8 python main.py --exec-mode train --task 01 --fold 0
```
### Inference process
Inference can be launched by passing the `--exec-mode predict` flag. For example:
```
python main.py --exec-mode predict --xla --task 01 --fold 0 --gpus 1 --amp --tta --save-preds --ckpt-dir <path/to/checkpoint/dir>
```
The script will then:
* Load the checkpoint from the directory specified by the `<path/to/checkpoint/dir>` directory
* Run inference on the preprocessed validation dataset corresponding to fold 0
* If `--save-preds` is provided then resulting masks in the NumPy format will be saved in the `/results` directory
## Performance
### Benchmarking
The following section shows how to run benchmarks to measure the model performance in training and inference modes.
#### Training performance benchmark
To benchmark training, run the `scripts/benchmark.py` script with `--mode train`:
```
python scripts/benchmark.py --xla --mode train --gpus <ngpus> --dim {2,3} --batch-size <bsize> [--amp]
```
For example, to benchmark 3D U-Net training using mixed-precision on 8 GPUs with batch size of 2, run:
```
python scripts/benchmark.py --xla --mode train --gpus 8 --dim 3 --batch-size 2 --amp
```
Each of these scripts will by default run a warm-up for 100 iterations and then start benchmarking for another 100 steps.
You can adjust these settings with `--warmup-steps` and `--bench-steps` parameters.
At the end of the script, a line reporting the training throughput and latency will be printed.
#### Inference performance benchmark
To benchmark inference, run the `scripts/benchmark.py` script with `--mode predict`:
```
python scripts/benchmark.py --xla --mode predict --gpus <ngpus> --dim {2,3} --batch-size <bsize> [--amp]
```
For example, to benchmark inference using mixed-precision for 3D U-Net on 1 GPU, with a batch size of 4, run:
```
python scripts/benchmark.py --xla --mode predict --gpus 1 --dim 3 --batch-size 4 --amp
```
Each of these scripts will by default run a warm-up for 100 iterations and then start benchmarking for another 100 steps.
You can adjust these settings with `--warmup-steps` and `--bench-steps` parameters.
At the end of the script, a line reporting the inference throughput and latency will be printed.
*Note that this benchmark reports performance numbers for iterations over samples with fixed patch sizes.
The real inference process uses [sliding window](#glossary) for input images with arbitrary resolution and performance may vary for images with different resolutions.*
### Results
The following sections provide details on how to achieve the same performance and accuracy in training and inference.
#### Training accuracy results
##### Training accuracy: NVIDIA DGX A100 (8xA100 80G)
Our results were obtained by running the `python scripts/train.py --xla --gpus {1,8} --fold {0,1,2,3,4} --dim {2,3} --learning_rate lr [--amp] --seed n` training scripts and averaging results in the TensorFlow 22.11 NGC container on NVIDIA DGX with (8x A100 80G) GPUs.
| Dimension | GPUs | Batch size / GPU | Dice - mixed precision | Dice - TF32 | Time to train - mixed precision | Time to train - TF32 | Time to train speedup (TF32 to mixed precision)
|:-:|:-:|:--:|:-----:|:-----:|:--------:|:---------:|:----:|
| 2 | 1 | 64 | 0.7312 | 0.7302 | 29 min | 40 min | 1.38 |
| 2 | 8 | 64 | 0.7322 | 0.7310 | 8 min | 10 min | 1.22 |
| 3 | 1 | 2 | 0.7435 | 0.7441 | 85 min | 153 min | 1.79 |
| 3 | 8 | 2 | 0.7440 | 0.7438 | 19 min | 33 min | 1.69 |
Reported dice score is the average over 5 folds from the best run for grid search over learning rates {1e-4, 2e-4, ..., 9e-4} and seed {1, 3, 5}.
##### Training accuracy: NVIDIA DGX-1 (8xV100 32G)
Our results were obtained by running the `python scripts/train.py --xla --gpus {1,8} --fold {0,1,2,3,4} --dim {2,3} [--amp] --seed n` training scripts and averaging results in the TensorFlow 22.11 NGC container on NVIDIA DGX-1 with (8x V100 32G) GPUs.
| Dimension | GPUs | Batch size / GPU | Dice - mixed precision | Dice - FP32 | Time to train - mixed precision | Time to train - FP32 | Time to train speedup (FP32 to mixed precision)
|:-:|:-:|:--:|:-----:|:-----:|:---------:|:---------:|:----:|
| 2 | 1 | 64 | 0.7315 | 0.7311 | 52 min | 102 min | 1.96 |
| 2 | 8 | 64 | 0.7312 | 0.7316 | 12 min | 17 min | 1.41 |
| 3 | 1 | 2 | 0.7435 | 0.7441 | 181 min | 580 min | 3.20 |
| 3 | 8 | 2 | 0.7434 | 0.7440 | 35 min | 131 min | 3.74 |
Reported dice score is the average over 5 folds from the best run for grid search over learning rates {1e-4, 2e-4, ..., 9e-4} and seed {1, 3, 5}.
#### Training performance results
##### Training performance: NVIDIA DGX A100 (8xA100 80G)
Our results were obtained by running the `python scripts/benchmark.py --xla --mode train --gpus {1,8} --dim {2,3} --batch-size <bsize> [--amp]` training script in the NGC container on NVIDIA DGX A100 (8x A100 80G) GPUs. Performance numbers (in volumes per second) were averaged over an entire training epoch.
Note: We recommend using the `--bind` flag for multi-GPU settings to increase the throughput. To launch multi-GPU with `--bind` you also have to add `--horovod`, e.g., `python scripts/benchmark.py --xla --mode train --gpus 8 --dim 3 --amp --batch-size 2 --bind --horovod` for an interactive session, or use the regular command when launching with SLURM's `sbatch`.
| Dimension | GPUs | Batch size / GPU | Throughput - mixed precision [img/s] | Throughput - TF32 [img/s] | Throughput speedup (TF32 - mixed precision) | Weak scaling - mixed precision | Weak scaling - TF32 |
|:-:|:-:|:--:|:------:|:------:|:-----:|:-----:|:-----:|
| 2 | 1 | 32 | 1347.19 | 748.56 | 1.80 | - | - |
| 2 | 1 | 64 | 1662.8 | 804.23 | 2.07 | - | - |
| 2 | 1 | 128 | 1844.7 | 881.87 | 2.09 | - | - |
| 2 | 8 | 32 | 9056.45 | 5420.51 | 1.67 | 6.72 | 6.91 |
| 2 | 8 | 64 | 11687.11 | 6250.52 | 1.87 | 7.03 | 7.49 |
| 2 | 8 | 128 | 13679.76 | 6841.78 | 2.00 | 7.42 | 7.66 |
| 3 | 1 | 1 | 27.02 | 11.63 | 2.32 | - | - |
| 3 | 1 | 2 | 29.3 | 11.81 | 2.48 | - | - |
| 3 | 1 | 4 | 31.87 | 12.17 | 2.62 | - | - |
| 3 | 8 | 1 | 186.84 | 91.11 | 2.05 | 7.24 | 7.83 |
| 3 | 8 | 2 | 219.34 | 92.91 | 2.36 | 7.77 | 7.87 |
| 3 | 8 | 4 | 244.01 | 96.52 | 2.53 | 7.76 | 7.93 |
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
##### Training performance: NVIDIA DGX-1 (8xV100 32G)
Our results were obtained by running the `python scripts/benchmark.py --xla --mode train --gpus {1,8} --dim {2,3} --batch-size <bsize> [--amp]` training script in the TensorFlow 22.11 NGC container on NVIDIA DGX-1 with (8x V100 32G) GPUs. Performance numbers (in volumes per second) were averaged over an entire training epoch.
Note: We recommend using the `--bind` flag for multi-GPU settings to increase the throughput. To launch multi-GPU with `--bind` you also have to add `--horovod`, e.g., `python scripts/benchmark.py --xla --mode train --gpus 8 --dim 3 --amp --batch-size 2 --bind --horovod` for an interactive session, or use the regular command when launching with SLURM's `sbatch`.
| Dimension | GPUs | Batch size / GPU | Throughput - mixed precision [img/s] | Throughput - FP32 [img/s] | Throughput speedup (FP32 - mixed precision) | Weak scaling - mixed precision | Weak scaling - FP32 |
|:-:|:-:|:---:|:---------:|:-----------:|:--------:|:---------:|:-------------:|
| 2 | 1 | 32 | 697.36 | 312.51 | 2.23 | - | - |
| 2 | 1 | 64 | 819.15 | 337.42 | 2.43 | - | - |
| 2 | 1 | 128 | 894.94 | 352.32 | 2.54 | - | - |
| 2 | 8 | 32 | 4355.65 | 2260.37 | 1.93 | 6.25 | 7.23 |
| 2 | 8 | 64 | 5696.41 | 2585.65 | 2.20 | 6.95 | 7.66 |
| 2 | 8 | 128 | 6714.96 | 2779.25 | 2.42 | 7.50 | 7.89 |
| 3 | 1 | 1 | 12.15 | 2.08 | 5.84 | - | - |
| 3 | 1 | 2 | 13.13 | 2.5 | 5.25 | - | - |
| 3 | 8 | 1 | 82.62 | 16.59 | 4.98 | 6.80 | 7.98 |
| 3 | 8 | 2 | 97.68 | 19.91 | 4.91 | 7.44 | 7.96 |
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
#### Inference performance results
##### Inference performance: NVIDIA DGX A100 (1xA100 80G)
Our results were obtained by running the `python scripts/benchmark.py --xla --mode predict --dim {2,3} --batch-size <bsize> [--amp]` inferencing benchmarking script in the TensorFlow 22.11 NGC container on NVIDIA DGX A100 (1x A100 80G) GPU.
FP16
| Dimension | Batch size |Resolution| Throughput Avg [img/s] | Latency Avg [ms] | Latency 90% [ms] | Latency 95% [ms] | Latency 99% [ms] |
|:----------:|:---------:|:-------------:|:----------------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| 2 | 32 | 192x160 | 1728.03 | 18.52 | 22.55 | 23.18 | 24.82 |
| 2 | 64 | 192x160 | 4160.91 | 15.38 | 17.49 | 18.53 | 19.88 |
| 2 | 128 | 192x160 | 4672.52 | 27.39 | 27.68 | 27.79 | 27.87 |
| 3 | 1 | 128x128x128 | 78.2 | 12.79 | 14.29 | 14.87 | 15.25 |
| 3 | 2 | 128x128x128 | 63.76 | 31.37 | 36.07 | 40.02 | 42.44 |
| 3 | 4 | 128x128x128 | 83.17 | 48.1 | 50.96 | 52.08 | 52.56 |
TF32
| Dimension | Batch size |Resolution| Throughput Avg [img/s] | Latency Avg [ms] | Latency 90% [ms] | Latency 95% [ms] | Latency 99% [ms] |
|:----------:|:---------:|:-------------:|:----------------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| 2 | 32 | 192x160 | 2067.63 | 15.48 | 17.97 | 19.12 | 19.77 |
| 2 | 64 | 192x160 | 2447 | 26.15 | 26.43 | 26.48 | 26.62 |
| 2 | 128 | 192x160 | 2514.75 | 50.9 | 51.15 | 51.23 | 51.28 |
| 3 | 1 | 128x128x128 | 38.85 | 25.74 | 26.04 | 26.19 | 27.41 |
| 3 | 2 | 128x128x128 | 40.1 | 49.87 | 50.31 | 50.44 | 50.57 |
| 3 | 4 | 128x128x128 | 41.69 | 95.95 | 97.09 | 97.41 | 98.03 |
Throughput is reported in images per second. Latency is reported in milliseconds per batch.
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
##### Inference performance: NVIDIA DGX-1 (1xV100 32G)
Our results were obtained by running the `python scripts/benchmark.py --mode predict --dim {2,3} --batch-size <bsize> [--amp]` inferencing benchmarking script in the TensorFlow 22.11 NGC container on NVIDIA DGX-1 with (1x V100 32G) GPU.
FP16
| Dimension | Batch size |Resolution| Throughput Avg [img/s] | Latency Avg [ms] | Latency 90% [ms] | Latency 95% [ms] | Latency 99% [ms] |
|:----------:|:---------:|:-------------:|:----------------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| 2 | 32 | 192x160 | 1166.83 | 27.42 | 28.76 | 28.91 | 29.16 |
| 2 | 64 | 192x160 | 2263.21 | 28.28 | 30.63 | 31.83 | 32.5 |
| 2 | 128 | 192x160 | 2387.06 | 53.62 | 53.97 | 54.07 | 54.3 |
| 3 | 1 | 128x128x128 | 36.87 | 27.12 | 27.32 | 27.37 | 27.42 |
| 3 | 2 | 128x128x128 | 37.65 | 53.12 | 53.49 | 53.59 | 53.71 |
| 3 | 4 | 128x128x128 | 38.8 | 103.11 | 104.16 | 104.3 | 104.75 |
FP32
| Dimension | Batch size |Resolution| Throughput Avg [img/s] | Latency Avg [ms] | Latency 90% [ms] | Latency 95% [ms] | Latency 99% [ms] |
|:----------:|:---------:|:-------------:|:----------------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| 2 | 32 | 192x160 | 990.61 | 32.3 | 32.46 | 32.51 | 32.78 |
| 2 | 64 | 192x160 | 1034.22 | 61.88 | 62.19 | 62.32 | 62.56 |
| 2 | 128 | 192x160 | 1084.21 | 118.06 | 118.45 | 118.6 | 118.95 |
| 3 | 1 | 128x128x128 | 9.65 | 103.62 | 104.46 | 104.52 | 104.63 |
| 3 | 2 | 128x128x128 | 9.96 | 200.75 | 202.51 | 202.74 | 202.86 |
| 3 | 4 | 128x128x128 | 10.13 | 394.74 | 396.74 | 397.0 | 397.82 |
Throughput is reported in images per second. Latency is reported in milliseconds per batch.
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
### Known issues
There are no known issues in this release.
## Release notes
### Changelog
November 2022
- Container update to 22.11
- Use channel last layout for convolution with XLA
- Add support for GPU binding
May 2022
- Initial release
|
PyTorch/LanguageModeling/BERT/data | data | BooksDownloader | # Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
class BooksDownloader:
def __init__(self, save_path):
self.save_path = save_path
def download(self):
bookscorpus_download_command = 'python3 /workspace/bookcorpus/download_files.py --list /workspace/bookcorpus/url_list.jsonl --out'
bookscorpus_download_command += ' ' + self.save_path + '/bookscorpus'
bookscorpus_download_command += ' --trash-bad-count'
bookscorpus_download_process = subprocess.run(bookscorpus_download_command, shell=True, check=True)
|
TensorFlow/Detection/SSD/examples | examples | SSD320_FP16_8GPU | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
CKPT_DIR=${1:-"/results/SSD320_FP16_8GPU"}
PIPELINE_CONFIG_PATH=${2:-"/workdir/models/research/configs"}"/ssd320_full_8gpus.config"
GPUS=8
TENSOR_OPS=0
export TF_ENABLE_CUBLAS_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
export TF_ENABLE_CUDNN_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
export TF_ENABLE_CUDNN_RNN_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
mkdir -p $CKPT_DIR
time mpirun --allow-run-as-root \
-np $GPUS \
-H localhost:$GPUS \
-bind-to none \
-map-by slot \
-x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH \
-x PATH \
-mca pml ob1 \
-mca btl ^openib \
python -u ./object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${CKPT_DIR} \
        --alsologtostderr \
--amp \
"${@:3}" 2>&1 | tee $CKPT_DIR/train_log
|
TensorFlow/Recommendation/VAE-CF/vae | vae | __init__ | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
LOG = logging.getLogger("VAE")
_log_format = logging.Formatter("[%(name)s| %(levelname)s]: %(message)s")
_log_handler = logging.StreamHandler()
_log_handler.setFormatter(_log_format)
LOG.addHandler(_log_handler)
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/tacotron2 | tacotron2 | tacotron2Instance | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_TACOTRON2INSTANCE_H
#define TT2I_TACOTRON2INSTANCE_H
#include "tacotron2StreamingInstance.h"
#include "timedObject.h"
#include "trtPtr.h"
#include <memory>
namespace nvinfer1
{
class ICudaEngine;
}
namespace tts
{
class Tacotron2Instance : public virtual TimedObject
{
public:
static constexpr const char* const ENGINE_NAME = "tacotron2";
/**
* @brief Create a new Tacotron2 instance.
*
* @param encoder The built encoder network.
     * @param decoderPlain The built decoder network without plugins.
     * @param decoderPlugins The built decoder network with plugins.
* @param postnet The built postnet network.
*/
Tacotron2Instance(
TRTPtr<nvinfer1::ICudaEngine> encoder,
TRTPtr<nvinfer1::ICudaEngine> decoderPlain,
TRTPtr<nvinfer1::ICudaEngine> decoderPlugins,
TRTPtr<nvinfer1::ICudaEngine> postnet);
/**
* @brief Perform inference on a given batch of input data.
*
* @param batchSize The number of sequences in the batch.
* @param inputDevice The input for each item in the batch.
* @param inputSpacing The spacing between the start of each item in the
* batch.
* @param inputLength The length of each input.
* @param maxOutputLength The maximum length of output in frames.
* @param outputDevice The location to write the output tensor in batch,
* frame, channel order.
* @param outputLength The length of each output sequence.
*/
void infer(int batchSize, const int* inputDevice, int inputSpacing, const int* inputLength, int maxOutputLength,
float* outputDevice, int* outputLength);
/**
* @brief Set whether or not the decoder loop should exit when the stop
* criteria is satisfied, or the maximum number of iterations should be taken.
*
* @param earlyExit Set to true exit when the criteria is met, and false to
* only exit after all iterations are run.
*/
void setEarlyExit(bool earlyExit);
/**
* @brief The random seed to use for dropouts.
*
* @param seed The seed value.
*/
void setSeed(unsigned int seed);
/**
* @brief Get the number of channels each frame will have.
*
* @return The number of channels.
*/
int getNumMelChannels() const;
/**
* @brief Get the maximum length of an input sequence.
*
* @return The maximum length of the sequence.
*/
int getMaximumInputLength() const;
/**
* @brief Get the maximum batch size supported by this Tacotron2 instance.
*
* @return The maximum batch size.
*/
int getMaxBatchSize() const;
/**
* @brief Get the size of the `outputDevice` vector required for the given
* input parameters.
*
* @param batchSize The size of the batch.
* @param maxFrames The maximum number of frames for each item in the batch.
*
* @return The required number of elements in the output vector.
*/
int getRequiredOutputSize(const int batchSize, const int maxFrames) const;
/**
* @brief Set whether or not to use plugins when possible.
*
* @param usePlugins True to use plugins, false to not.
*/
void usePlugins(bool usePlugins);
/**
* @brief Check whether or not plugins will be used for the given batch size.
*
* @param batchSize The batch size.
*
* @return True if plugins would be used.
*/
bool willUsePlugins(int batchSize) const;
private:
Tacotron2StreamingInstance mStreamingInstance;
std::vector<int> mChunkSize;
int mNumMelChunks;
bool mEarlyExit;
CudaMemory<float> mOutputShuffledDevice;
};
} // namespace tts
#endif
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/plugins/taco2DenoiseTransformPlugin | taco2DenoiseTransformPlugin | taco2DenoiseTransformLayerPluginCreator | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_DENOISETRANSFORMLAYERPLUGINCREATOR_H
#define TT2I_DENOISETRANSFORMLAYERPLUGINCREATOR_H
#include "NvInfer.h"
#include <string>
#ifdef DEVEL
// The destructor of nvinfer1::IPluginCreator is non-virtual and public, so
// we need to suppress the warning.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wnon-virtual-dtor"
#endif
namespace nvinfer1
{
namespace plugin
{
class Taco2DenoiseTransformLayerPluginCreator : public nvinfer1::IPluginCreator
{
public:
/**
* @brief Get the collection of fields for this plugin, with their names only.
*
* @return The collection of fields.
*/
static nvinfer1::PluginFieldCollection* getFields();
/**
* @brief Create a new Taco2DenoiseTransformLayerPluginCreator.
*/
Taco2DenoiseTransformLayerPluginCreator();
/**
* @brief Get the name of the plugin.
*
* @return The name of the plugin.
*/
const char* getPluginName() const override;
/**
* @brief Get the plugin version.
*
* @return The plugin version.
*/
const char* getPluginVersion() const override;
/**
* @brief Get the collection of fields for this plugin.
*
* @return The collection of fields.
*/
const nvinfer1::PluginFieldCollection* getFieldNames() override;
/**
* @brief Create a new Taco2DenoiseTransformLayerPlugin.
*
* @param name The name (unused currently).
* @param fc The collection of fields to initialize with.
*
* @return The created plugin.
*/
nvinfer1::IPluginV2* createPlugin(const char* name, const nvinfer1::PluginFieldCollection* fc) override;
/**
* @brief Create a plugin by deserializing it from a data stream.
*
* @param name The name of the plugin.
* @param serialData The serialized data for the layer.
* @param serialLength The length of the serialized data.
*
* @return The plugin. Clients must destroy the plugin once all consumers of
* it have been destroyed.
*/
nvinfer1::IPluginV2* deserializePlugin(const char* name, const void* serialData, size_t serialLength) override;
/**
* @brief Set the namespace for created plugins.
*
* @param pluginNamespace The namespace.
*/
void setPluginNamespace(const char* pluginNamespace) override;
/**
* @brief Get the namespace for created plugins.
*
* @return The namespace.
*/
const char* getPluginNamespace() const override;
private:
std::string mNamespace;
};
} // namespace plugin
} // namespace nvinfer1
#ifdef DEVEL
#pragma GCC diagnostic pop
#endif
#endif
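/*
 * Usage sketch (illustrative only, not part of the original header). A
 * creator like this is normally registered with, and looked up from, the
 * TensorRT plugin registry. The name/version strings below are assumptions;
 * the real values are whatever getPluginName() and getPluginVersion() return.
 *
 *   using nvinfer1::plugin::Taco2DenoiseTransformLayerPluginCreator;
 *   REGISTER_TENSORRT_PLUGIN(Taco2DenoiseTransformLayerPluginCreator);
 *   // ...
 *   nvinfer1::IPluginCreator* creator = getPluginRegistry()->getPluginCreator(
 *       "Taco2DenoiseTransform", "1");
 *   nvinfer1::PluginFieldCollection fields{0, nullptr}; // to be populated by the caller
 *   nvinfer1::IPluginV2* plugin = creator->createPlugin("denoise_transform", &fields);
 */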
|
PyTorch/SpeechSynthesis/Tacotron2/notebooks/conversationalai/client | client | start_jupyter | jupyter lab --allow-root --ip=0.0.0.0 --no-browser speech_ai_demo.ipynb
|
PyTorch/Forecasting/TFT | TFT | gpu_affinity | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import math
import os
import pathlib
import re
import pynvml
pynvml.nvmlInit()
def systemGetDriverVersion():
return pynvml.nvmlSystemGetDriverVersion()
def deviceGetCount():
return pynvml.nvmlDeviceGetCount()
class device:
# assume nvml returns list of 64 bit ints
_nvml_affinity_elements = math.ceil(os.cpu_count() / 64)
def __init__(self, device_idx):
super().__init__()
self.handle = pynvml.nvmlDeviceGetHandleByIndex(device_idx)
def getName(self):
return pynvml.nvmlDeviceGetName(self.handle)
def getCpuAffinity(self):
affinity_string = ''
for j in pynvml.nvmlDeviceGetCpuAffinity(
self.handle, device._nvml_affinity_elements
):
# assume nvml returns list of 64 bit ints
affinity_string = '{:064b}'.format(j) + affinity_string
affinity_list = [int(x) for x in affinity_string]
affinity_list.reverse() # so core 0 is in 0th element of list
ret = [i for i, e in enumerate(affinity_list) if e != 0]
return ret
def set_socket_affinity(gpu_id):
dev = device(gpu_id)
affinity = dev.getCpuAffinity()
os.sched_setaffinity(0, affinity)
def set_single_affinity(gpu_id):
dev = device(gpu_id)
affinity = dev.getCpuAffinity()
os.sched_setaffinity(0, affinity[:1])
def set_single_unique_affinity(gpu_id, nproc_per_node):
devices = [device(i) for i in range(nproc_per_node)]
socket_affinities = [dev.getCpuAffinity() for dev in devices]
siblings_list = get_thread_siblings_list()
siblings_dict = dict(siblings_list)
# remove siblings
for idx, socket_affinity in enumerate(socket_affinities):
socket_affinities[idx] = list(set(socket_affinity) - set(siblings_dict.values()))
affinities = []
assigned = []
for socket_affinity in socket_affinities:
for core in socket_affinity:
if core not in assigned:
affinities.append([core])
assigned.append(core)
break
os.sched_setaffinity(0, affinities[gpu_id])
def set_socket_unique_affinity(gpu_id, nproc_per_node, mode):
device_ids = [device(i) for i in range(nproc_per_node)]
socket_affinities = [dev.getCpuAffinity() for dev in device_ids]
siblings_list = get_thread_siblings_list()
siblings_dict = dict(siblings_list)
# remove siblings
for idx, socket_affinity in enumerate(socket_affinities):
socket_affinities[idx] = list(set(socket_affinity) - set(siblings_dict.values()))
socket_affinities_to_device_ids = collections.defaultdict(list)
for idx, socket_affinity in enumerate(socket_affinities):
socket_affinities_to_device_ids[tuple(socket_affinity)].append(idx)
for socket_affinity, device_ids in socket_affinities_to_device_ids.items():
devices_per_group = len(device_ids)
cores_per_device = len(socket_affinity) // devices_per_group
for group_id, device_id in enumerate(device_ids):
if device_id == gpu_id:
if mode == 'interleaved':
affinity = list(socket_affinity[group_id::devices_per_group])
elif mode == 'continuous':
affinity = list(socket_affinity[group_id*cores_per_device:(group_id+1)*cores_per_device])
else:
raise RuntimeError('Unknown set_socket_unique_affinity mode')
# reintroduce siblings
affinity += [siblings_dict[aff] for aff in affinity if aff in siblings_dict]
os.sched_setaffinity(0, affinity)
def get_thread_siblings_list():
path = '/sys/devices/system/cpu/cpu*/topology/thread_siblings_list'
thread_siblings_list = []
pattern = re.compile(r'(\d+)\D(\d+)')
for fname in pathlib.Path(path[0]).glob(path[1:]):
with open(fname) as f:
content = f.read().strip()
res = pattern.findall(content)
if res:
pair = tuple(map(int, res[0]))
thread_siblings_list.append(pair)
return thread_siblings_list
def set_affinity(gpu_id, nproc_per_node, mode='socket'):
if mode == 'socket':
set_socket_affinity(gpu_id)
elif mode == 'single':
set_single_affinity(gpu_id)
elif mode == 'single_unique':
set_single_unique_affinity(gpu_id, nproc_per_node)
elif mode == 'socket_unique_interleaved':
set_socket_unique_affinity(gpu_id, nproc_per_node, 'interleaved')
elif mode == 'socket_unique_continuous':
set_socket_unique_affinity(gpu_id, nproc_per_node, 'continuous')
else:
raise RuntimeError('Unknown affinity mode')
affinity = os.sched_getaffinity(0)
return affinity
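# Illustrative usage (not part of the original module): bind the current
# process to the CPUs attached to GPU 0. In a real multi-process launch,
# gpu_id would be this process's local rank.
if __name__ == '__main__':
    affinity = set_affinity(gpu_id=0, nproc_per_node=deviceGetCount(), mode='socket')
    print('Process bound to CPU cores:', sorted(affinity))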
|
TensorFlow2/LanguageModeling/BERT/official/utils/logs | logs | guidelines | # Logging in official models
This library adds logging functions that print or save tensor values. Official models should define all common hooks
(using hooks helper) and a benchmark logger.
1. **Training Hooks**
Hooks are a TensorFlow concept that defines specific actions at certain points of execution. We use them to obtain and log
tensor values during training.
hooks_helper.py provides an easy way to create common hooks. The following hooks are currently defined:
* LoggingTensorHook: Logs tensor values
* ProfilerHook: Writes a timeline json that can be loaded into chrome://tracing.
* ExamplesPerSecondHook: Logs the number of examples processed per second.
* LoggingMetricHook: Similar to LoggingTensorHook, except that the tensors are logged in a format defined by our data
analysis pipeline.
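For example, the following sketch requests a subset of these hooks by name (illustrative only; `model_dir` and `batch_size` are forwarded to the individual hook factories, and the exact keyword arguments accepted depend on the hooks_helper version):
```
from official.utils.logs import hooks_helper

# Request two hooks by name; keyword arguments are forwarded to the
# factories of the hooks that use them.
train_hooks = hooks_helper.get_train_hooks(
    ['LoggingTensorHook', 'ExamplesPerSecondHook'],
    model_dir='/tmp/model_dir',
    batch_size=128)
```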
2. **Benchmarks**
The benchmark logger provides useful functions for logging environment information and evaluation results.
The module also provides a context manager that is used to update the status of the run.
Example usage:
```
from absl import app as absl_app
from official.utils.logs import hooks_helper
from official.utils.logs import logger
def model_main(flags_obj):
estimator = ...
benchmark_logger = logger.get_benchmark_logger()
benchmark_logger.log_run_info(...)
train_hooks = hooks_helper.get_train_hooks(...)
for epoch in range(10):
estimator.train(..., hooks=train_hooks)
eval_results = estimator.evaluate(...)
# Log a dictionary of metrics
benchmark_logger.log_evaluation_result(eval_results)
# Log an individual metric
benchmark_logger.log_metric(...)
def main(_):
with logger.benchmark_context(flags.FLAGS):
model_main(flags.FLAGS)
if __name__ == "__main__":
# define flags
absl_app.run(main)
```
|