relative_path | section | filename | text
---|---|---|---|
TensorFlow2/Recommendation/WideAndDeep/triton/deployment_toolkit/library | library | __init__ | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
PyTorch/LanguageModeling/BERT/triton/dist4l/runner | runner | config_NVIDIA-A30 | checkpoints:
- name: dist-4l-qa
url: https://api.ngc.nvidia.com/v2/models/nvidia/dle/bert_pyt_ckpt_distilled_4l_288d_qa_squad11_amp/versions/21.11.0/zip
configurations:
- accelerator: none
accelerator_precision: fp16
batch_size:
- 1
batch_sizes: '1'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: onnx
max_batch_size: 1
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: '1'
- accelerator: none
accelerator_precision: fp16
batch_size:
- 16
batch_sizes: '16'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: onnx
max_batch_size: 16
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 8 16
- accelerator: none
accelerator_precision: fp16
batch_size:
- 8
batch_sizes: '8'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: onnx
max_batch_size: 8
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 4 8
- accelerator: trt
accelerator_precision: fp16
batch_size:
- 1
batch_sizes: '1'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: onnx
max_batch_size: 1
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: '1'
- accelerator: trt
accelerator_precision: fp16
batch_size:
- 16
batch_sizes: '16'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: onnx
max_batch_size: 16
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 8 16
- accelerator: trt
accelerator_precision: fp16
batch_size:
- 8
batch_sizes: '8'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: onnx
max_batch_size: 8
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 4 8
- accelerator: none
accelerator_precision: fp16
batch_size:
- 1
batch_sizes: '1'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: trt
max_batch_size: 1
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: '1'
- accelerator: none
accelerator_precision: fp16
batch_size:
- 16
batch_sizes: '16'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: trt
max_batch_size: 16
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 8 16
- accelerator: none
accelerator_precision: fp16
batch_size:
- 8
batch_sizes: '8'
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: onnx
export_precision: fp16
format: trt
max_batch_size: 8
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 4 8
- accelerator: none
accelerator_precision: fp16
batch_size:
- 1
- 8
- 16
batch_sizes: 1 8 16
capture_cuda_graph: 0
checkpoint_variant: dist-4l-qa
export_format: ts-trace
export_precision: fp16
format: ts-trace
max_batch_size: 16
max_seq_length: 384
precision: fp16
triton_gpu_engine_count: 1
triton_max_queue_delay: 1
triton_preferred_batch_sizes: 8 16
container_version: '21.10'
datasets:
- name: data
datasets_dir: datasets
framework: PyTorch
model_name: BERT
triton_container_image: null
triton_custom_operations: null
triton_dockerfile: null
triton_load_model_method: explicit
|
TensorFlow2/Detection/Efficientdet/scripts/docker | docker | build | #!/bin/bash
docker build --rm -t effdet_tf2 . -f Dockerfile |
CUDA-Optimized/FastSpeech/fastspeech/model | model | __init__ | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/util | util | engineDriver | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_ENGINEDRIVER_H
#define TT2I_ENGINEDRIVER_H
#include "trtPtr.h"
#include "NvInfer.h"
namespace tts
{
/**
* @brief This class acts as a parent for classes depending on TRT engines.
*/
class EngineDriver
{
public:
/**
* @brief Create a new EngineDriver class.
*
* @param engine The engine to wrap.
*/
EngineDriver(TRTPtr<nvinfer1::ICudaEngine> engine);
/**
* @brief Virtual destructor.
*/
virtual ~EngineDriver() = default;
/**
* @brief Get the wrapped engine in a non-mutable state.
*
* @return The engine.
*/
const nvinfer1::ICudaEngine& getEngine() const;
/**
* @brief Get the wrapped engine in a mutable state.
*
* @return The engine.
*/
nvinfer1::ICudaEngine& getEngine();
/**
* @brief Get the maximum batch size supported by the wrapped engine.
*
* @return The maximum batch size.
*/
int getMaxBatchSize() const;
private:
TRTPtr<nvinfer1::ICudaEngine> mEngine;
};
} // namespace tts
#endif
|
PyTorch/Detection/Efficientdet/scripts/D0 | D0 | train-benchmark_AMP_A100-80G | #!/bin/bash
function get_dataloader_workers {
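# Heuristic: divide CPU cores evenly across GPUs, keep two cores per GPU free for the training process, and cap the result at 16 data-loading workers.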
gpus=$(nvidia-smi -i 0 --query-gpu=count --format=csv,noheader)
core=$(nproc --all)
workers=$((core/gpus-2))
workers=$((workers>16?16:workers))
echo ${workers}
}
WORKERS=$(get_dataloader_workers)
./distributed_train.sh ${NUM_PROC:-8} /workspace/object_detection/datasets/coco --model efficientdet_d0 -b 150 --lr 1.63 --amp --opt fusedmomentum --warmup-epochs 50 --lr-noise 0.4 0.9 --output /model --worker ${WORKERS} --fill-color mean --model-ema --model-ema-decay 0.999 --eval-after 200 --epochs 5 --resume --smoothing 0.0 --pretrained-backbone-path /backbone_checkpoints/jocbackbone_statedict_B0.pth --memory-format nchw --sync-bn --fused-focal-loss --seed 12711 --benchmark-steps 500 --benchmark |
PyTorch/LanguageModeling/BERT/distillation/utils | utils | utils | # coding=utf-8
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.distributed as dist
import random
import numpy as np
from pathlib import Path
def unwrap_ddp(model):
if isinstance(model, torch.nn.parallel.distributed.DistributedDataParallel):
return model.module
return model
def get_rank():
if not dist.is_available():
return 0
if not dist.is_initialized():
return 0
return dist.get_rank()
def get_world_size():
if not dist.is_available():
return 1
if not dist.is_initialized():
return 1
return dist.get_world_size()
def is_main_process():
return get_rank() == 0
def barrier():
if dist.is_available() and dist.is_initialized():
dist.barrier()
def format_step(step):
if isinstance(step, str):
return step
s = ""
if len(step) > 0:
s += "Training Epoch: {} ".format(step[0])
if len(step) > 1:
s += "Training Iteration: {} ".format(step[1])
if len(step) > 2:
s += "Validation Iteration: {} ".format(step[2])
return s
def mkdir(path):
Path(path).mkdir(parents=True, exist_ok=True)
def mkdir_by_main_process(path):
if is_main_process():
mkdir(path)
barrier()
def set_seed(seed, n_gpu):
random.seed(seed + get_rank())
np.random.seed(seed + get_rank())
torch.manual_seed(seed + get_rank())
if n_gpu > 0:
torch.cuda.manual_seed_all(seed + get_rank())
|
TensorFlow/Classification/ConvNets/triton/deployment_toolkit | deployment_toolkit | core | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import importlib
import logging
import os
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, NamedTuple, Optional, Tuple, Union
import numpy as np
LOGGER = logging.getLogger(__name__)
DATALOADER_FN_NAME = "get_dataloader_fn"
GET_MODEL_FN_NAME = "get_model"
GET_SERVING_INPUT_RECEIVER_FN = "get_serving_input_receiver_fn"
GET_ARGPARSER_FN_NAME = "update_argparser"
class TensorSpec(NamedTuple):
name: str
dtype: str
shape: Tuple
class Parameter(Enum):
def __lt__(self, other: "Parameter") -> bool:
return self.value < other.value
def __str__(self):
return self.value
class Accelerator(Parameter):
AMP = "amp"
NONE = "none"
TRT = "trt"
class Precision(Parameter):
FP16 = "fp16"
FP32 = "fp32"
TF32 = "tf32" # Deprecated
class Format(Parameter):
TF_GRAPHDEF = "tf-graphdef"
TF_SAVEDMODEL = "tf-savedmodel"
TF_TRT = "tf-trt"
TF_ESTIMATOR = "tf-estimator"
TF_KERAS = "tf-keras"
ONNX = "onnx"
TRT = "trt"
TS_SCRIPT = "ts-script"
TS_TRACE = "ts-trace"
PYT = "pyt"
class Model(NamedTuple):
handle: object
precision: Optional[Precision]
inputs: Dict[str, TensorSpec]
outputs: Dict[str, TensorSpec]
def load_from_file(file_path, label, target):
spec = importlib.util.spec_from_file_location(name=label, location=file_path)
my_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(my_module) # pytype: disable=attribute-error
return getattr(my_module, target, None)
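# Illustrative usage (hypothetical file/names, not part of this module):
#   get_dataloader_fn = load_from_file("dataloader.py", label="dataloader", target=DATALOADER_FN_NAME)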
class BaseLoader(abc.ABC):
required_fn_name_for_signature_parsing: Optional[str] = None
@abc.abstractmethod
def load(self, model_path: Union[str, Path], **kwargs) -> Model:
"""
Loads and processes a model from a file based on the given set of args
"""
pass
class BaseSaver(abc.ABC):
required_fn_name_for_signature_parsing: Optional[str] = None
@abc.abstractmethod
def save(self, model: Model, model_path: Union[str, Path]) -> None:
"""
Save model to file
"""
pass
class BaseRunner(abc.ABC):
required_fn_name_for_signature_parsing: Optional[str] = None
@abc.abstractmethod
def init_inference(self, model: Model):
raise NotImplementedError
class BaseRunnerSession(abc.ABC):
def __init__(self, model: Model):
self._model = model
@abc.abstractmethod
def __enter__(self):
raise NotImplementedError()
@abc.abstractmethod
def __exit__(self, exc_type, exc_value, traceback):
raise NotImplementedError()
@abc.abstractmethod
def __call__(self, x: Dict[str, object]):
raise NotImplementedError()
def _set_env_variables(self) -> Dict[str, object]:
"""this method not remove values; fix it if needed"""
to_set = {}
old_values = {k: os.environ.pop(k, None) for k in to_set}
os.environ.update(to_set)
return old_values
def _recover_env_variables(self, old_envs: Dict[str, object]):
for name, value in old_envs.items():
if value is None:
del os.environ[name]
else:
os.environ[name] = str(value)
class BaseConverter(abc.ABC):
required_fn_name_for_signature_parsing: Optional[str] = None
@abc.abstractmethod
def convert(self, model: Model, dataloader_fn) -> Model:
raise NotImplementedError()
@staticmethod
def required_source_model_precision(requested_model_precision: Precision) -> Precision:
return requested_model_precision
class BaseMetricsCalculator(abc.ABC):
required_fn_name_for_signature_parsing: Optional[str] = None
def calc(
self,
*,
ids: List[Any],
y_pred: Dict[str, np.ndarray],
x: Optional[Dict[str, np.ndarray]],
y_real: Optional[Dict[str, np.ndarray]],
) -> Dict[str, float]:
"""
Calculates error/accuracy metrics
Args:
ids: List of ids identifying each sample in the batch
y_pred: model output as dict where key is output name and value is output value
x: model input as dict where key is input name and value is input value
y_real: input ground truth as dict where key is output name and value is output value
Returns:
dictionary where key is metric name and value is its value
"""
pass
@abc.abstractmethod
def update(
self,
ids: List[Any],
y_pred: Dict[str, np.ndarray],
x: Optional[Dict[str, np.ndarray]],
y_real: Optional[Dict[str, np.ndarray]],
):
pass
@property
@abc.abstractmethod
def metrics(self) -> Dict[str, Any]:
pass
class ShapeSpec(NamedTuple):
min: Tuple
opt: Tuple
max: Tuple
|
TensorFlow/Detection | Detection | README | # Object Detection
A natural progression from image classification is classification and localization of the subject of an image. We can take this idea one step further and localize every object of interest in a given image. Simply put, object detection refers to identifying which object(s) are present in an image and where they are.

Source: [Joseph Redmon, Ali Farhadi, “YOLO9000:Better, Faster, Stronger”](https://arxiv.org/abs/1612.08242)
## Introduction to Object Detection
In this section we will try to answer the following questions:
- What is object detection?
- Why is object detection important?
Object Detection is about not only detecting the presence and location of objects in images and videos, but also categorizing them into everyday object classes. Oftentimes, there is confusion between Image Classification and Object Detection. Simply put, the difference between them is the same as the difference between saying “This is a cat” and pointing to a cat and saying “There is the cat”.
To build autonomous systems, perception is the main challenge to be solved. Perception, in terms of autonomous systems, refers to the ability to understand the surroundings of the autonomous agent. This means that the agent needs to be able to figure out what objects are in its immediate vicinity and where they are.
Object detection can help keep humans away from toxic environments and hazardous situations. Challenges like garbage segregation, oil rig monitoring, nightly surveillance, cargo port maintenance and other high-risk applications can be aided by robots/cameras which can detect objects. Essentially, in any environment that requires visual inspection or analysis but is too dangerous for humans, object detection pipelines can be used to shield people from onsite hazards.
## How does it work?
While this has been a topic of research since before Deep Learning became mainstream, the best performing models today use one or more Deep Neural Networks.
Many architectures have networks pretrained on a different, simpler task, like Image Classification. As one can imagine, the inputs to this task can be images or videos, and the outputs are usually a set of bounding box coordinates that enclose each of the detected objects, as well as a class label for each detected object. With advances in research and the use of GPUs, it is possible to have object detection in real time with really impressive accuracies!

Source: [Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg, “SSD: Single Shot MultiBox Detector”](https://arxiv.org/abs/1512.02325)
Single Shot Detector (SSD) is one of the state-of-the-art models for object detection and localization. It is based on a feed-forward convolutional neural network which always yields a fixed set of bounding boxes and, for each box, a confidence score representing how confident the network is that the box contains an object. This is followed by a non-maximum suppression step which outputs the final detections.
This network can be understood as two networks stacked on top of each other. The first network is a simple convolutional neural network that “extracts important features”, much like an image classification network.
The second network is a multiscale feature map network built using another set of convolutional layers which are progressively smaller in size to allow detections at multiple scales. Simply put, the progressively smaller layers help detect objects of different sizes. Each layer in this set outputs a number of detections, and the final layer passes the output to a non-maximum suppression step which yields the final set of detections.
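To make the non-maximum suppression step concrete, the sketch below shows a minimal greedy NMS routine in NumPy. It is illustrative only and is not taken from the models in this collection; the `[y1, x1, y2, x2]` box layout and the `iou_threshold` default are assumptions.
```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS over boxes given as [y1, x1, y2, x2] with one score per box."""
    order = scores.argsort()[::-1]  # box indices, highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection between the kept box and every remaining box
        y1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        x1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        y2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        x2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, y2 - y1) * np.maximum(0.0, x2 - x1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        # Drop the remaining boxes that overlap the kept box too much
        order = rest[iou <= iou_threshold]
    return keep
```
In practice, NMS is typically applied per class, after discarding boxes below a confidence threshold.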
This collection contains models and containers for object detection that achieve state-of-the-art accuracies, tested and maintained by NVIDIA.
## Applications and Use cases
### Autonomous Vehicles
Autonomous vehicles need to perceive and interact with real-world objects in order to blend in with the environment. For instance, a self-driving car needs to detect other vehicles, pedestrians, objects on the road, traffic signals and any and all obstacles on the road, and also understand the exact locations of these objects. This perception information helps the agent avoid obstacles and understand how to interact with objects like traffic lights.
### Warehouses
Warehouses have many conveyor belts and segregation platforms. These tasks have traditionally been handled manually. As factories and warehouses scale, manually sorting and managing inventory cannot be scaled proportionally. Object detection pipelines deployed on robots can reduce operational friction and enable easily scalable solutions for businesses.
### Surveillance
Surveillance systems typically accumulate large volumes of video data which need to be analyzed for all sorts of anomalies. Given the number of video sources even a small store has, analyzing surveillance data from a large operation is a challenge. Object detection networks can help automate much of the pipeline by highlighting sections where there is an object of interest. They can also be trained to identify anomalies in video streams.
### Hazardous tasks
Humans work at waste processing plants, nuclear power plants, oil rigs and around heavy machinery, environments that tend to be extremely hazardous and pose health risks. These tasks essentially require human presence for visual inspection and confirmation, which revolves around recognizing objects and relaying their locations. Risky tasks like these can be completed with the help of an object detection pipeline deployed on a camera or a robot, which can reduce operational risks and costs. |
TensorFlow2/Recommendation/DLRM_and_DCNv2/tensorflow-dot-based-interact/tensorflow_dot_based_interact/cc/kernels/cuda_kernels | cuda_kernels | dot_based_interact_fp32 | // Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <cuda.h>
#include <cuda_fp16.h>
#include <cuda_runtime_api.h>
#include <device_launch_parameters.h>
#include <mma.h>
#include <cuda_fp16.hpp>
#include <math.h>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <vector>
#include "dot_based_interact_shared_utils.cuh"
template <uint THREADBLOCK_SIZE>
__launch_bounds__(THREADBLOCK_SIZE) __global__
void dotBasedInteractF32FwdKernelNonAligned(const float *__restrict input,
float *__restrict output,
uint batch_size,
uint num_rows,
uint num_cols,
uint input_size,
uint output_size,
uint interaction_output_size) {
extern __shared__ float smem_f32_fwd[];
float *smem_in = &smem_f32_fwd[0];
uint input_batch_offset = blockIdx.x * input_size;
const float *gmem_in = &input[input_batch_offset];
uint output_batch_offset = blockIdx.x * output_size;
float *gmem_out_bottom_mlp = &output[output_batch_offset];
float *gmem_out_interaction = &output[output_batch_offset + num_cols];
// Load the input - one sample per block
for (uint idx = threadIdx.x; idx < input_size; idx += blockDim.x) {
smem_in[idx] = gmem_in[idx];
}
__syncthreads();
// Copy bottom MLP output to output
for (uint idx = threadIdx.x; idx < num_cols; idx += blockDim.x) {
gmem_out_bottom_mlp[idx] = smem_in[idx];
}
for (uint idx = threadIdx.x; idx < (interaction_output_size); idx += blockDim.x) {
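// Convert the flat output index into (target_row, target_col) with target_col < target_row, i.e. an entry of the strictly lower-triangular part of the pairwise interaction matrix.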
uint elems_per_row = 1;
uint index = idx;
while (index >= elems_per_row) {
index -= elems_per_row;
elems_per_row++;
}
uint target_row = elems_per_row;
uint target_col = index;
float sum = 0;
for (uint i = 0; i < num_cols; i++) {
float tmp1 = smem_in[target_row * num_cols + i];
float tmp2 = smem_in[target_col * num_cols + i];
sum = fmaf(tmp1, tmp2, sum);
}
gmem_out_interaction[idx] = sum;
}
// Zero out the padding
uint zeroout_index = num_cols + interaction_output_size + threadIdx.x;
if(zeroout_index < output_size){
gmem_out_bottom_mlp[zeroout_index] = 0;
}
}
template <uint THREADBLOCK_SIZE>
__launch_bounds__(THREADBLOCK_SIZE) __global__ void dotBasedInteractF32FwdKernel(const float *__restrict input,
float *__restrict output,
uint batch_size,
uint num_rows,
uint num_cols,
uint input_size,
uint output_size,
uint interaction_output_size) {
extern __shared__ float smem_f32_fwd[];
float *smem_in = &smem_f32_fwd[0];
uint input_batch_offset = blockIdx.x * input_size;
const float *gmem_in = &input[input_batch_offset];
uint output_batch_offset = blockIdx.x * output_size;
float *gmem_out_bottom_mlp = &output[output_batch_offset];
float *gmem_out_interaction = &output[output_batch_offset + num_cols];
// Load the input - one sample per block
uint input_size_float4 = input_size >> 2;
for (uint idx = threadIdx.x; idx < input_size_float4; idx += blockDim.x) {
((float4 *)smem_in)[idx] = ((float4 *)gmem_in)[idx];
}
__syncthreads();
// Copy bottom MLP output to output
uint btm_mlp_out_size_float4 = num_cols >> 2;
for (uint idx = threadIdx.x; idx < btm_mlp_out_size_float4; idx += blockDim.x) {
((float4 *)gmem_out_bottom_mlp)[idx] = ((float4 *)smem_in)[idx];
}
for (uint idx = threadIdx.x; idx < (interaction_output_size); idx += blockDim.x) {
uint elems_per_row = 1;
uint index = idx;
while (index >= elems_per_row) {
index -= elems_per_row;
elems_per_row++;
}
uint target_row = elems_per_row;
uint target_col = index;
float4 sum;
sum.x = 0;
sum.y = 0;
sum.z = 0;
sum.w = 0;
uint num_cols_float4 = num_cols >> 2;
for (uint i = 0; i < num_cols_float4; i++) {
float4 tmp1 = ((float4 *)smem_in)[target_row * num_cols_float4 + i];
float4 tmp2 = ((float4 *)smem_in)[target_col * num_cols_float4 + i];
sum.x = fmaf(tmp1.x, tmp2.x, sum.x);
sum.y = fmaf(tmp1.y, tmp2.y, sum.y);
sum.z = fmaf(tmp1.z, tmp2.z, sum.z);
sum.w = fmaf(tmp1.w, tmp2.w, sum.w);
}
gmem_out_interaction[idx] = sum.x + sum.y + sum.z + sum.w;
}
// Zero out the padding
uint zeroout_index = num_cols + interaction_output_size + threadIdx.x;
if(zeroout_index < output_size){
gmem_out_bottom_mlp[zeroout_index] = 0;
}
}
template <uint THREADBLOCK_SIZE>
__launch_bounds__(THREADBLOCK_SIZE) __global__
void dotBasedInteractF32BwdKernelNonAligned(const float *__restrict input,
const float *__restrict upstream_grad,
float *__restrict grad,
float *__restrict bottom_mlp_grad,
uint batch_size,
uint num_rows,
uint num_cols,
uint input_size,
uint ugrad_size,
uint interaction_ugrad_size) {
extern __shared__ float smem_f32_bwd[];
float *smem_in = &smem_f32_bwd[0];
float *smem_interaction_ugrad = &smem_f32_bwd[input_size];
// Input
uint input_batch_offset = blockIdx.x * input_size;
const float *gmem_in = &input[input_batch_offset];
// Gradient
const uint &grad_batch_offset = input_batch_offset;
float *gmem_mlp_grad = &bottom_mlp_grad[blockIdx.x * num_cols];
float *gmem_interaction_grad = &grad[grad_batch_offset];
// Upstream Gradient
uint upstream_grad_batch_offset = blockIdx.x * ugrad_size;
const float *gmem_mlp_ugrad = &upstream_grad[upstream_grad_batch_offset];
const float *gmem_interaction_ugrad = &upstream_grad[upstream_grad_batch_offset + num_cols];
// input -> shared memory
for (uint idx = threadIdx.x; idx < input_size; idx += blockDim.x) {
smem_in[idx] = gmem_in[idx];
}
// Interaction Upstream Grad -> Shared Memory
for (uint idx = threadIdx.x; idx < interaction_ugrad_size; idx += blockDim.x) {
smem_interaction_ugrad[idx] = gmem_interaction_ugrad[idx];
}
__syncthreads();
// Copy the upstream gradient w.r.t. the MLP to its corresponding memory location.
for (uint idx = threadIdx.x; idx < num_cols; idx += blockDim.x) {
gmem_mlp_grad[idx] = gmem_mlp_ugrad[idx];
}
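// Gradient w.r.t. the input: for each embedding column, accumulate contributions from every other row weighted by the upstream gradient of the corresponding (row, other row) interaction pair.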
for (uint idx = threadIdx.x; idx < num_cols; idx += blockDim.x) {
size_t grad_idx = idx;
for (uint row_idx = 0; row_idx < num_rows; row_idx++) {
float sum = 0;
size_t upstream_grad_offset = (row_idx * (row_idx - 1)) >> 1;
for (int k = 0; k < row_idx; k++) {
sum = fmaf(smem_in[k * num_cols + idx], smem_interaction_ugrad[upstream_grad_offset + k], sum);
}
for (int k = row_idx + 1; k < num_rows; k++) {
upstream_grad_offset = (k * (k - 1)) >> 1; // TODO: this can become a sum
sum = fmaf(smem_in[k * num_cols + idx], smem_interaction_ugrad[upstream_grad_offset + row_idx], sum);
}
gmem_interaction_grad[grad_idx] = sum;
grad_idx += num_cols;
}
}
}
template <uint THREADBLOCK_SIZE>
__launch_bounds__(THREADBLOCK_SIZE) __global__ void dotBasedInteractF32BwdKernel(const float *__restrict input,
const float *__restrict upstream_grad,
float *__restrict grad,
float *__restrict bottom_mlp_grad,
uint batch_size,
uint num_rows,
uint num_cols,
uint input_size,
uint ugrad_size,
uint interaction_ugrad_size) {
extern __shared__ float smem_f32_bwd[];
float *smem_in = &smem_f32_bwd[0];
float *smem_interaction_ugrad = &smem_f32_bwd[input_size];
// Input
uint input_batch_offset = blockIdx.x * input_size;
const float *gmem_in = &input[input_batch_offset];
// Gradient
const uint &grad_batch_offset = input_batch_offset;
float *gmem_mlp_grad = &bottom_mlp_grad[blockIdx.x * num_cols];
float *gmem_interaction_grad = &grad[grad_batch_offset];
// Upstream Gradient
uint upstream_grad_batch_offset = blockIdx.x * ugrad_size;
const float *gmem_mlp_ugrad = &upstream_grad[upstream_grad_batch_offset];
const float *gmem_interaction_ugrad = &upstream_grad[upstream_grad_batch_offset + num_cols];
// input -> shared memory
uint input_size_float4 = input_size >> 2;
for (uint idx = threadIdx.x; idx < input_size_float4; idx += blockDim.x) {
((float4 *)smem_in)[idx] = ((float4 *)gmem_in)[idx];
}
// Interaction Upstream Grad -> Shared Memory
uint upstream_grad_size_float4 = interaction_ugrad_size >> 2;
for (uint idx = threadIdx.x; idx < upstream_grad_size_float4; idx += blockDim.x) {
((float4 *)smem_interaction_ugrad)[idx] = ((float4 *)gmem_interaction_ugrad)[idx];
}
uint vectorized_load_offset = (upstream_grad_size_float4 << 2);
for (uint idx = vectorized_load_offset + threadIdx.x; idx < interaction_ugrad_size; idx += blockDim.x) {
smem_interaction_ugrad[idx] = gmem_interaction_ugrad[idx];
}
__syncthreads();
// Copy the upstream gradient w.r.t. the MLP to its corresponding memory location.
for (uint idx = threadIdx.x; idx < (num_cols >> 2); idx += blockDim.x) {
((float4 *)gmem_mlp_grad)[idx] = ((float4 *)gmem_mlp_ugrad)[idx];
}
for (uint idx = threadIdx.x; idx < num_cols; idx += blockDim.x) {
size_t grad_idx = idx;
for (uint row_idx = 0; row_idx < num_rows; row_idx++) {
float sum = 0;
size_t upstream_grad_offset = (row_idx * (row_idx - 1)) >> 1;
for (int k = 0; k < row_idx; k++) {
sum = fmaf(smem_in[k * num_cols + idx], smem_interaction_ugrad[upstream_grad_offset + k], sum);
}
for (int k = row_idx + 1; k < num_rows; k++) {
upstream_grad_offset = (k * (k - 1)) >> 1; // TODO: this can become a sum
sum = fmaf(smem_in[k * num_cols + idx], smem_interaction_ugrad[upstream_grad_offset + row_idx], sum);
}
gmem_interaction_grad[grad_idx] = sum;
grad_idx += num_cols;
}
}
}
|
TensorFlow/Segmentation/UNet_Industrial/scripts | scripts | UNet_EVAL_XLA | #!/usr/bin/env bash
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script launches UNet evaluation in FP32 on 1 GPU with a batch size of 16
# Usage: ./UNet_FP32_EVAL_XLA.sh <path to result repository> <path to dataset> <dagm classID (1-10)>
BASEDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export TF_CPP_MIN_LOG_LEVEL=3
python "${BASEDIR}/../main.py" \
--unet_variant='tinyUNet' \
--activation_fn='relu' \
--exec_mode='evaluate' \
--iter_unit='epoch' \
--num_iter=1 \
--batch_size=16 \
--warmup_step=10 \
--results_dir="${1}" \
--data_dir="${2}" \
--dataset_name='DAGM2007' \
--dataset_classID="${3}" \
--data_format='NCHW' \
--use_auto_loss_scaling \
--noamp \
--xla \
--learning_rate=1e-4 \
--learning_rate_decay_factor=0.8 \
--learning_rate_decay_steps=500 \
--rmsprop_decay=0.9 \
--rmsprop_momentum=0.8 \
--loss_fn_name='adaptive_loss' \
--weight_decay=1e-5 \
--weight_init_method='he_uniform' \
--augment_data \
--display_every=50 \
--debug_verbosity=0
|
TensorFlow/Detection/SSD/examples | examples | SSD320_FP32_8GPU_BENCHMARK | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
CKPT_DIR=${1:-"/results/SSD320_FP32_8GPU"}
PIPELINE_CONFIG_PATH=${2:-"/workdir/models/research/configs"}"/ssd320_bench.config"
GPUS=8
TENSOR_OPS=0
export TF_ENABLE_CUBLAS_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
export TF_ENABLE_CUDNN_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
export TF_ENABLE_CUDNN_RNN_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
TRAIN_LOG=$(mpirun --allow-run-as-root \
-np $GPUS \
-H localhost:$GPUS \
-bind-to none \
-map-by slot \
-x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH \
-x PATH \
-mca pml ob1 \
-mca btl ^openib \
python -u ./object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${CKPT_DIR} \
--alsologtostderr \
"${@:3}" 2>&1)
PERF=$(echo "$TRAIN_LOG" | sed -n 's|.*global_step/sec: \(\S\+\).*|\1|p' | python -c "import sys; x = sys.stdin.readlines(); x = [float(a) for a in x[int(len(x)*3/4):]]; print(32*$GPUS*sum(x)/len(x), 'img/s')")
mkdir -p $CKPT_DIR
echo "$GPUS GPUs single precision training performance: $PERF" | tee $CKPT_DIR/train_log
echo "$TRAIN_LOG" >> $CKPT_DIR/train_log
|
Tools/DGLPyTorch/SyntheticGraphGeneration/demos/performance | performance | tabular_generator | #!/usr/bin/env python
# coding: utf-8
# Copyright 2023 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# # Tabular data generation performance demo
# ## Overview
#
# In this notebook we compare the performance (throughput) of the tabular data generators available in the SynGen tool.
#
# Available generators:
#
# 1. [KDE (Kernel Density Estimation)](#1)
# 1. [Uniform](#2)
# 1. [Gaussian](#3)
# 1. [Random](#4)
# ### Imports
# In[1]:
# preprocessing
from syngen.preprocessing.datasets import IEEEPreprocessing
# generators
from syngen.generator.tabular import (
KDEGenerator,
UniformGenerator,
GaussianGenerator,
RandomMVGenerator,
)
# Others
import time
import pandas as pd
from collections import defaultdict
from syngen.utils.types import MetaData
# ### Helper function
# In[2]:
def measure_throughput(generator, n=10, samples = 100000, gpu=False):
times = []
for _ in range(n):
start = time.perf_counter()
generator.sample(samples, gpu=gpu)
elapsed = time.perf_counter() - start
times.append(elapsed)
return int((samples * n) / sum(times))
# ### Load tabular features
# In[3]:
data_path = '/workspace/data/ieee-fraud'
preprocessed_path = '/workspace/data/ieee_preprocessed'
# In[4]:
preprocessing = IEEEPreprocessing(source_path=data_path, destination_path=preprocessed_path)
# In[5]:
feature_spec_original = preprocessing.transform(use_cache=True)
# In[6]:
original_tabular_data, categorical_features = feature_spec_original.get_tabular_data(MetaData.EDGES, 'user-product', return_cat_feats=True)
# In[7]:
results_dict = defaultdict(dict)
# <a id="1"></a>
# ## KDE (Kernel Density Estimation) Generator
#
# In[8]:
kde_generator = KDEGenerator()
kde_generator.fit(original_tabular_data, categorical_columns=categorical_features)
results_dict['kde-cpu'] = measure_throughput(kde_generator, gpu=False)
results_dict['kde-gpu'] = measure_throughput(kde_generator, gpu=True)
print(f"avg throughput: {results_dict['kde-cpu']}, {results_dict['kde-gpu']}")
# <a id="2"></a>
# ## Uniform Generator
# In[9]:
uniform_generator = UniformGenerator()
uniform_generator.fit(original_tabular_data, categorical_columns=categorical_features)
results_dict['uniform-cpu'] = measure_throughput(uniform_generator, gpu=False)
results_dict['uniform-gpu'] = measure_throughput(uniform_generator, gpu=True)
print(f"avg throughput: {results_dict['uniform-cpu']}, {results_dict['uniform-gpu']}")
# <a id="3"></a>
# ## Gaussian Generator
# In[10]:
gaussian_generator = GaussianGenerator()
gaussian_generator.fit(original_tabular_data, categorical_columns=categorical_features)
results_dict['gaussian-cpu'] = measure_throughput(gaussian_generator, gpu=False)
results_dict['gaussian-gpu'] = measure_throughput(gaussian_generator, gpu=True)
print(f"avg throughput: {results_dict['gaussian-cpu']}, {results_dict['gaussian-gpu']}")
# <a id="4"></a>
# ## Random Generator
# In[11]:
random_generator = RandomMVGenerator()
random_generator.fit(original_tabular_data, categorical_columns=categorical_features)
results_dict['random-cpu'] = measure_throughput(random_generator, gpu=False)
results_dict['random-gpu'] = measure_throughput(random_generator, gpu=True)
print(f"avg throughput: {results_dict['random-cpu']}, {results_dict['random-gpu']}")
# ## Results
# In[12]:
pd.DataFrame(results_dict, index=['ieee'])
# In[ ]:
|
TensorFlow2/Segmentation/Contrib/UNet3P | UNet3P | requirements | hydra-core
opencv-python
jupyter
matplotlib
tqdm
nibabel
numba |
PyTorch/Classification/ConvNets/triton/deployment_toolkit/library | library | pyt | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from collections import Counter
from pathlib import Path
from typing import Dict, Iterable, NamedTuple, Optional, Union
import torch # pytype: disable=import-error
import yaml
from ..core import (
GET_MODEL_FN_NAME,
BaseConverter,
BaseLoader,
BaseRunner,
BaseRunnerSession,
BaseSaver,
Format,
Model,
Precision,
TensorSpec,
load_from_file,
)
from ..extensions import converters, loaders, runners, savers
from .utils import get_dynamic_axes, get_input_shapes, get_shapes_with_dynamic_axes
LOGGER = logging.getLogger(__name__)
class InputOutputSpec(NamedTuple):
inputs: Dict[str, TensorSpec]
outputs: Dict[str, TensorSpec]
def get_sample_input(dataloader, device):
for batch in dataloader:
_, x, _ = batch
break
if isinstance(x, dict):
sample_input = list(x.values())
elif isinstance(x, list):
sample_input = x
else:
raise TypeError("The first element (x) of batch returned by dataloader must be a list or a dict")
for idx, s in enumerate(sample_input):
sample_input[idx] = torch.from_numpy(s).to(device)
return tuple(sample_input)
def get_model_device(torch_model):
if next(torch_model.parameters()).is_cuda:
return "cuda"
else:
return "cpu"
def infer_model_precision(model):
counter = Counter()
for param in model.parameters():
counter[param.dtype] += 1
if counter[torch.float16] > 0:
return Precision.FP16
else:
return Precision.FP32
def _get_tensor_dtypes(dataloader, precision):
def _get_dtypes(t):
dtypes = {}
for k, v in t.items():
dtype = str(v.dtype)
if dtype == "float64":
dtype = "float32"
if precision == Precision.FP16 and dtype == "float32":
dtype = "float16"
dtypes[k] = dtype
return dtypes
input_dtypes = {}
output_dtypes = {}
for batch in dataloader:
_, x, y = batch
input_dtypes = _get_dtypes(x)
output_dtypes = _get_dtypes(y)
break
return input_dtypes, output_dtypes
### TODO assumption: floating point input
### type has same precision as the model
def _get_io_spec(model, dataloader_fn):
precision = model.precision
dataloader = dataloader_fn()
input_dtypes, output_dtypes = _get_tensor_dtypes(dataloader, precision)
input_shapes, output_shapes = get_shapes_with_dynamic_axes(dataloader)
inputs = {
name: TensorSpec(name=name, dtype=input_dtypes[name], shape=tuple(input_shapes[name])) for name in model.inputs
}
outputs = {
name: TensorSpec(name=name, dtype=output_dtypes[name], shape=tuple(output_shapes[name]))
for name in model.outputs
}
return InputOutputSpec(inputs, outputs)
class PyTorchModelLoader(BaseLoader):
required_fn_name_for_signature_parsing: Optional[str] = GET_MODEL_FN_NAME
def __init__(self, **kwargs):
self._model_args = kwargs
def load(self, model_path: Union[str, Path], **_) -> Model:
if isinstance(model_path, Path):
model_path = model_path.as_posix()
get_model = load_from_file(model_path, "model", GET_MODEL_FN_NAME)
model, tensor_infos = get_model(**self._model_args)
io_spec = InputOutputSpec(tensor_infos["inputs"], tensor_infos["outputs"])
precision = infer_model_precision(model)
return Model(handle=model, precision=precision, inputs=io_spec.inputs, outputs=io_spec.outputs)
class TorchScriptLoader(BaseLoader):
def __init__(self, tensor_names_path: str = None, **kwargs):
self._model_args = kwargs
self._io_spec = None
if tensor_names_path is not None:
with Path(tensor_names_path).open("r") as fh:
tensor_infos = yaml.load(fh, Loader=yaml.SafeLoader)
self._io_spec = InputOutputSpec(tensor_infos["inputs"], tensor_infos["outputs"])
def load(self, model_path: Union[str, Path], **_) -> Model:
if not isinstance(model_path, Path):
model_path = Path(model_path)
model = torch.jit.load(model_path.as_posix())
precision = infer_model_precision(model)
io_spec = self._io_spec
if not io_spec:
yaml_path = model_path.parent / f"{model_path.stem}.yaml"
if not yaml_path.is_file():
raise ValueError(
f"If `--tensor-names-path is not provided, "
f"TorchScript model loader expects file {yaml_path} with tensor information."
)
with yaml_path.open("r") as fh:
tensor_info = yaml.load(fh, Loader=yaml.SafeLoader)
io_spec = InputOutputSpec(tensor_info["inputs"], tensor_info["outputs"])
return Model(handle=model, precision=precision, inputs=io_spec.inputs, outputs=io_spec.outputs)
class TorchScriptTraceConverter(BaseConverter):
def __init__(self):
pass
def convert(self, model: Model, dataloader_fn) -> Model:
device = get_model_device(model.handle)
dummy_input = get_sample_input(dataloader_fn(), device)
converted_model = torch.jit.trace_module(model.handle, {"forward": dummy_input})
io_spec = _get_io_spec(model, dataloader_fn)
return Model(converted_model, precision=model.precision, inputs=io_spec.inputs, outputs=io_spec.outputs)
class TorchScriptScriptConverter(BaseConverter):
def __init__(self):
pass
def convert(self, model: Model, dataloader_fn) -> Model:
converted_model = torch.jit.script(model.handle)
io_spec = _get_io_spec(model, dataloader_fn)
return Model(converted_model, precision=model.precision, inputs=io_spec.inputs, outputs=io_spec.outputs)
class PYT2ONNXConverter(BaseConverter):
def __init__(self, onnx_opset: int = None):
self._onnx_opset = onnx_opset
def convert(self, model: Model, dataloader_fn) -> Model:
import tempfile
import onnx # pytype: disable=import-error
assert isinstance(model.handle, torch.jit.ScriptModule) or isinstance(
model.handle, torch.nn.Module
), "The model must be of type 'torch.jit.ScriptModule' or 'torch.nn.Module'. Converter aborted."
dynamic_axes = get_dynamic_axes(dataloader_fn())
device = get_model_device(model.handle)
dummy_input = get_sample_input(dataloader_fn(), device)
with tempfile.TemporaryDirectory() as tmpdirname:
export_path = os.path.join(tmpdirname, "model.onnx")
with torch.no_grad():
torch.onnx.export(
model.handle,
dummy_input,
export_path,
do_constant_folding=True,
input_names=list(model.inputs),
output_names=list(model.outputs),
dynamic_axes=dynamic_axes,
opset_version=self._onnx_opset,
enable_onnx_checker=True,
)
onnx_model = onnx.load(export_path)
onnx.checker.check_model(onnx_model)
onnx.helper.strip_doc_string(onnx_model)
onnx_model = onnx.shape_inference.infer_shapes(onnx_model)
return Model(
handle=onnx_model,
precision=model.precision,
inputs=model.inputs,
outputs=model.outputs,
)
class PYT2TensorRTConverter(BaseConverter):
def __init__(self, max_batch_size: int, max_workspace_size: int, onnx_opset: int, precision: str):
self._max_batch_size = max_batch_size
self._max_workspace_size = max_workspace_size
self._onnx_opset = onnx_opset
self._precision = Precision(precision)
def convert(self, model: Model, dataloader_fn) -> Model:
from .onnx import _infer_graph_precision
from .onnx2trt_conv import onnx2trt
pyt2onnx_converter = PYT2ONNXConverter(self._onnx_opset)
onnx_model = pyt2onnx_converter.convert(model, dataloader_fn).handle
precision = _infer_graph_precision(onnx_model.graph)
input_shapes = get_input_shapes(dataloader_fn(), self._max_batch_size)
cuda_engine = onnx2trt(
onnx_model,
shapes=input_shapes,
max_workspace_size=self._max_workspace_size,
max_batch_size=self._max_batch_size,
model_precision=self._precision.value,
)
return Model(
handle=cuda_engine,
precision=model.precision,
inputs=model.inputs,
outputs=model.outputs,
)
@staticmethod
def required_source_model_precision(requested_model_precision: Precision) -> Precision:
# TensorRT requires source models to be in FP32 precision
return Precision.FP32
class TorchScriptSaver(BaseSaver):
def save(self, model: Model, model_path: Union[str, Path]) -> None:
if not isinstance(model_path, Path):
model_path = Path(model_path)
if isinstance(model.handle, torch.jit.ScriptModule):
torch.jit.save(model.handle, model_path.as_posix())
else:
print("The model must be of type 'torch.jit.ScriptModule'. Saving aborted.")
assert False # temporary error handling
def _format_tensor_spec(tensor_spec):
# wrapping shape with list and whole tensor_spec with dict() is required for correct yaml dump
tensor_spec = tensor_spec._replace(shape=list(tensor_spec.shape))
tensor_spec = dict(tensor_spec._asdict())
return tensor_spec
# store TensorSpecs from inputs and outputs in a yaml file
tensor_specs = {
"inputs": {k: _format_tensor_spec(v) for k, v in model.inputs.items()},
"outputs": {k: _format_tensor_spec(v) for k, v in model.outputs.items()},
}
yaml_path = model_path.parent / f"{model_path.stem}.yaml"
with Path(yaml_path).open("w") as fh:
yaml.dump(tensor_specs, fh, indent=4)
class PyTorchRunner(BaseRunner):
def __init__(self):
pass
def init_inference(self, model: Model):
return PyTorchRunnerSession(model=model)
class PyTorchRunnerSession(BaseRunnerSession):
def __init__(self, model: Model):
super().__init__(model)
assert isinstance(model.handle, torch.jit.ScriptModule) or isinstance(
model.handle, torch.nn.Module
), "The model must be of type 'torch.jit.ScriptModule' or 'torch.nn.Module'. Runner aborted."
self._model = model
self._output_names = None
def __enter__(self):
self._output_names = list(self._model.outputs)
return self
def __exit__(self, exc_type, exc_value, traceback):
self._output_names = None
self._model = None
def __call__(self, x: Dict[str, object]):
with torch.no_grad():
feed_list = [torch.from_numpy(v).cuda() for k, v in x.items()]
y_pred = self._model.handle(*feed_list)
if isinstance(y_pred, torch.Tensor):
y_pred = (y_pred,)
y_pred = [t.cpu().numpy() for t in y_pred]
y_pred = dict(zip(self._output_names, y_pred))
return y_pred
loaders.register_extension(Format.PYT.value, PyTorchModelLoader)
loaders.register_extension(Format.TS_TRACE.value, TorchScriptLoader)
loaders.register_extension(Format.TS_SCRIPT.value, TorchScriptLoader)
converters.register_extension(f"{Format.PYT.value}--{Format.TS_SCRIPT.value}", TorchScriptScriptConverter)
converters.register_extension(f"{Format.PYT.value}--{Format.TS_TRACE.value}", TorchScriptTraceConverter)
converters.register_extension(f"{Format.PYT.value}--{Format.ONNX.value}", PYT2ONNXConverter)
converters.register_extension(f"{Format.PYT.value}--{Format.TRT.value}", PYT2TensorRTConverter)
savers.register_extension(Format.TS_SCRIPT.value, TorchScriptSaver)
savers.register_extension(Format.TS_TRACE.value, TorchScriptSaver)
runners.register_extension(Format.PYT.value, PyTorchRunner)
runners.register_extension(Format.TS_SCRIPT.value, PyTorchRunner)
runners.register_extension(Format.TS_TRACE.value, PyTorchRunner)
|
CUDA-Optimized/FastSpeech/fastspeech/utils | utils | __init__ | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
TensorFlow/Detection/SSD/models/research/object_detection/core | core | preprocessor_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for object_detection.core.preprocessor."""
import numpy as np
import six
import tensorflow as tf
from object_detection.core import preprocessor
from object_detection.core import preprocessor_cache
from object_detection.core import standard_fields as fields
if six.PY2:
import mock # pylint: disable=g-import-not-at-top
else:
from unittest import mock # pylint: disable=g-import-not-at-top
class PreprocessorTest(tf.test.TestCase):
def createColorfulTestImage(self):
ch255 = tf.fill([1, 100, 200, 1], tf.constant(255, dtype=tf.uint8))
ch128 = tf.fill([1, 100, 200, 1], tf.constant(128, dtype=tf.uint8))
ch0 = tf.fill([1, 100, 200, 1], tf.constant(0, dtype=tf.uint8))
imr = tf.concat([ch255, ch0, ch0], 3)
img = tf.concat([ch255, ch255, ch0], 3)
imb = tf.concat([ch255, ch0, ch255], 3)
imw = tf.concat([ch128, ch128, ch128], 3)
imu = tf.concat([imr, img], 2)
imd = tf.concat([imb, imw], 2)
im = tf.concat([imu, imd], 1)
return im
def createTestImages(self):
images_r = tf.constant([[[128, 128, 128, 128], [0, 0, 128, 128],
[0, 128, 128, 128], [192, 192, 128, 128]]],
dtype=tf.uint8)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[0, 0, 128, 128], [0, 0, 128, 128],
[0, 128, 192, 192], [192, 192, 128, 192]]],
dtype=tf.uint8)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[128, 128, 192, 0], [0, 0, 128, 192],
[0, 128, 128, 0], [192, 192, 192, 128]]],
dtype=tf.uint8)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def createEmptyTestBoxes(self):
boxes = tf.constant([[]], dtype=tf.float32)
return boxes
def createTestBoxes(self):
boxes = tf.constant(
[[0.0, 0.25, 0.75, 1.0], [0.25, 0.5, 0.75, 1.0]], dtype=tf.float32)
return boxes
def createTestGroundtruthWeights(self):
return tf.constant([1.0, 0.5], dtype=tf.float32)
def createTestMasks(self):
mask = np.array([
[[255.0, 0.0, 0.0],
[255.0, 0.0, 0.0],
[255.0, 0.0, 0.0]],
[[255.0, 255.0, 0.0],
[255.0, 255.0, 0.0],
[255.0, 255.0, 0.0]]])
return tf.constant(mask, dtype=tf.float32)
def createTestKeypoints(self):
keypoints = np.array([
[[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
[[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]],
])
return tf.constant(keypoints, dtype=tf.float32)
def createTestKeypointsInsideCrop(self):
keypoints = np.array([
[[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]],
[[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]],
])
return tf.constant(keypoints, dtype=tf.float32)
def createTestKeypointsOutsideCrop(self):
keypoints = np.array([
[[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
[[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
])
return tf.constant(keypoints, dtype=tf.float32)
def createKeypointFlipPermutation(self):
return np.array([0, 2, 1], dtype=np.int32)
def createTestLabels(self):
labels = tf.constant([1, 2], dtype=tf.int32)
return labels
def createTestBoxesOutOfImage(self):
boxes = tf.constant(
[[-0.1, 0.25, 0.75, 1], [0.25, 0.5, 0.75, 1.1]], dtype=tf.float32)
return boxes
def createTestMultiClassScores(self):
return tf.constant([[1.0, 0.0], [0.5, 0.5]], dtype=tf.float32)
def expectedImagesAfterNormalization(self):
images_r = tf.constant([[[0, 0, 0, 0], [-1, -1, 0, 0],
[-1, 0, 0, 0], [0.5, 0.5, 0, 0]]],
dtype=tf.float32)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[-1, -1, 0, 0], [-1, -1, 0, 0],
[-1, 0, 0.5, 0.5], [0.5, 0.5, 0, 0.5]]],
dtype=tf.float32)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[0, 0, 0.5, -1], [-1, -1, 0, 0.5],
[-1, 0, 0, -1], [0.5, 0.5, 0.5, 0]]],
dtype=tf.float32)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def expectedMaxImageAfterColorScale(self):
images_r = tf.constant([[[0.1, 0.1, 0.1, 0.1], [-0.9, -0.9, 0.1, 0.1],
[-0.9, 0.1, 0.1, 0.1], [0.6, 0.6, 0.1, 0.1]]],
dtype=tf.float32)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[-0.9, -0.9, 0.1, 0.1], [-0.9, -0.9, 0.1, 0.1],
[-0.9, 0.1, 0.6, 0.6], [0.6, 0.6, 0.1, 0.6]]],
dtype=tf.float32)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[0.1, 0.1, 0.6, -0.9], [-0.9, -0.9, 0.1, 0.6],
[-0.9, 0.1, 0.1, -0.9], [0.6, 0.6, 0.6, 0.1]]],
dtype=tf.float32)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def expectedMinImageAfterColorScale(self):
images_r = tf.constant([[[-0.1, -0.1, -0.1, -0.1], [-1, -1, -0.1, -0.1],
[-1, -0.1, -0.1, -0.1], [0.4, 0.4, -0.1, -0.1]]],
dtype=tf.float32)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[-1, -1, -0.1, -0.1], [-1, -1, -0.1, -0.1],
[-1, -0.1, 0.4, 0.4], [0.4, 0.4, -0.1, 0.4]]],
dtype=tf.float32)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[-0.1, -0.1, 0.4, -1], [-1, -1, -0.1, 0.4],
[-1, -0.1, -0.1, -1], [0.4, 0.4, 0.4, -0.1]]],
dtype=tf.float32)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def expectedImagesAfterLeftRightFlip(self):
images_r = tf.constant([[[0, 0, 0, 0], [0, 0, -1, -1],
[0, 0, 0, -1], [0, 0, 0.5, 0.5]]],
dtype=tf.float32)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[0, 0, -1, -1], [0, 0, -1, -1],
[0.5, 0.5, 0, -1], [0.5, 0, 0.5, 0.5]]],
dtype=tf.float32)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[-1, 0.5, 0, 0], [0.5, 0, -1, -1],
[-1, 0, 0, -1], [0, 0.5, 0.5, 0.5]]],
dtype=tf.float32)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def expectedImagesAfterUpDownFlip(self):
images_r = tf.constant([[[0.5, 0.5, 0, 0], [-1, 0, 0, 0],
[-1, -1, 0, 0], [0, 0, 0, 0]]],
dtype=tf.float32)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[0.5, 0.5, 0, 0.5], [-1, 0, 0.5, 0.5],
[-1, -1, 0, 0], [-1, -1, 0, 0]]],
dtype=tf.float32)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[0.5, 0.5, 0.5, 0], [-1, 0, 0, -1],
[-1, -1, 0, 0.5], [0, 0, 0.5, -1]]],
dtype=tf.float32)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def expectedImagesAfterRot90(self):
images_r = tf.constant([[[0, 0, 0, 0], [0, 0, 0, 0],
[0, -1, 0, 0.5], [0, -1, -1, 0.5]]],
dtype=tf.float32)
images_r = tf.expand_dims(images_r, 3)
images_g = tf.constant([[[0, 0, 0.5, 0.5], [0, 0, 0.5, 0],
[-1, -1, 0, 0.5], [-1, -1, -1, 0.5]]],
dtype=tf.float32)
images_g = tf.expand_dims(images_g, 3)
images_b = tf.constant([[[-1, 0.5, -1, 0], [0.5, 0, 0, 0.5],
[0, -1, 0, 0.5], [0, -1, -1, 0.5]]],
dtype=tf.float32)
images_b = tf.expand_dims(images_b, 3)
images = tf.concat([images_r, images_g, images_b], 3)
return images
def expectedBoxesAfterLeftRightFlip(self):
boxes = tf.constant([[0.0, 0.0, 0.75, 0.75], [0.25, 0.0, 0.75, 0.5]],
dtype=tf.float32)
return boxes
def expectedBoxesAfterUpDownFlip(self):
boxes = tf.constant([[0.25, 0.25, 1.0, 1.0], [0.25, 0.5, 0.75, 1.0]],
dtype=tf.float32)
return boxes
def expectedBoxesAfterRot90(self):
boxes = tf.constant(
[[0.0, 0.0, 0.75, 0.75], [0.0, 0.25, 0.5, 0.75]], dtype=tf.float32)
return boxes
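
  # Boxes are stored as [ymin, xmin, ymax, xmax] in normalized coordinates, so
  # the expected values above follow from:
  #   left/right flip: xmin' = 1 - xmax, xmax' = 1 - xmin (y unchanged)
  #   up/down flip:    ymin' = 1 - ymax, ymax' = 1 - ymin (x unchanged)
  #   rot90 (ccw):     [ymin, xmin, ymax, xmax] -> [1 - xmax, ymin, 1 - xmin, ymax]
  # e.g. [0.0, 0.25, 0.75, 1.0] flipped left/right gives [0.0, 0.0, 0.75, 0.75].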
def expectedMasksAfterLeftRightFlip(self):
mask = np.array([
[[0.0, 0.0, 255.0],
[0.0, 0.0, 255.0],
[0.0, 0.0, 255.0]],
[[0.0, 255.0, 255.0],
[0.0, 255.0, 255.0],
[0.0, 255.0, 255.0]]])
return tf.constant(mask, dtype=tf.float32)
def expectedMasksAfterUpDownFlip(self):
mask = np.array([
[[255.0, 0.0, 0.0],
[255.0, 0.0, 0.0],
[255.0, 0.0, 0.0]],
[[255.0, 255.0, 0.0],
[255.0, 255.0, 0.0],
[255.0, 255.0, 0.0]]])
return tf.constant(mask, dtype=tf.float32)
def expectedMasksAfterRot90(self):
mask = np.array([
[[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[255.0, 255.0, 255.0]],
[[0.0, 0.0, 0.0],
[255.0, 255.0, 255.0],
[255.0, 255.0, 255.0]]])
return tf.constant(mask, dtype=tf.float32)
def expectedLabelScoresAfterThresholding(self):
return tf.constant([1.0], dtype=tf.float32)
def expectedBoxesAfterThresholding(self):
return tf.constant([[0.0, 0.25, 0.75, 1.0]], dtype=tf.float32)
def expectedLabelsAfterThresholding(self):
return tf.constant([1], dtype=tf.float32)
def expectedMultiClassScoresAfterThresholding(self):
return tf.constant([[1.0, 0.0]], dtype=tf.float32)
def expectedMasksAfterThresholding(self):
mask = np.array([
[[255.0, 0.0, 0.0],
[255.0, 0.0, 0.0],
[255.0, 0.0, 0.0]]])
return tf.constant(mask, dtype=tf.float32)
def expectedKeypointsAfterThresholding(self):
keypoints = np.array([
[[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]]
])
return tf.constant(keypoints, dtype=tf.float32)
def expectedLabelScoresAfterThresholdingWithMissingScore(self):
return tf.constant([np.nan], dtype=tf.float32)
def expectedBoxesAfterThresholdingWithMissingScore(self):
return tf.constant([[0.25, 0.5, 0.75, 1]], dtype=tf.float32)
def expectedLabelsAfterThresholdingWithMissingScore(self):
return tf.constant([2], dtype=tf.float32)
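
  # With threshold=0.6 (as used in the thresholding tests below), only the
  # first groundtruth entry (weight 1.0) is retained and the second entry
  # (weight 0.5) is dropped, which is what the expected*AfterThresholding
  # helpers above encode; the *WithMissingScore variants above encode the
  # analogous expectations when a score is missing (NaN).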
def testRgbToGrayscale(self):
images = self.createTestImages()
grayscale_images = preprocessor._rgb_to_grayscale(images)
expected_images = tf.image.rgb_to_grayscale(images)
with self.test_session() as sess:
(grayscale_images, expected_images) = sess.run(
[grayscale_images, expected_images])
self.assertAllEqual(expected_images, grayscale_images)
def testNormalizeImage(self):
preprocess_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 256,
'target_minval': -1,
'target_maxval': 1
})]
images = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
images_expected = self.expectedImagesAfterNormalization()
with self.test_session() as sess:
(images_, images_expected_) = sess.run(
[images, images_expected])
images_shape_ = images_.shape
images_expected_shape_ = images_expected_.shape
expected_shape = [1, 4, 4, 3]
self.assertAllEqual(images_expected_shape_, images_shape_)
self.assertAllEqual(images_shape_, expected_shape)
self.assertAllClose(images_, images_expected_)
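
  # For reference, the same normalization can be applied directly to an image
  # tensor (illustrative sketch, assuming the keyword arguments used in the
  # preprocess_options above):
  #   image = preprocessor.normalize_image(image, original_minval=0,
  #                                        original_maxval=256,
  #                                        target_minval=-1, target_maxval=1)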
def testRetainBoxesAboveThreshold(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
(retained_boxes, retained_labels,
retained_weights) = preprocessor.retain_boxes_above_threshold(
boxes, labels, weights, threshold=0.6)
with self.test_session() as sess:
(retained_boxes_, retained_labels_, retained_weights_,
expected_retained_boxes_, expected_retained_labels_,
expected_retained_weights_) = sess.run([
retained_boxes, retained_labels, retained_weights,
self.expectedBoxesAfterThresholding(),
self.expectedLabelsAfterThresholding(),
self.expectedLabelScoresAfterThresholding()])
self.assertAllClose(
retained_boxes_, expected_retained_boxes_)
self.assertAllClose(
retained_labels_, expected_retained_labels_)
self.assertAllClose(
retained_weights_, expected_retained_weights_)
def testRetainBoxesAboveThresholdWithMultiClassScores(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
multiclass_scores = self.createTestMultiClassScores()
(_, _, _,
retained_multiclass_scores) = preprocessor.retain_boxes_above_threshold(
boxes,
labels,
weights,
multiclass_scores=multiclass_scores,
threshold=0.6)
with self.test_session() as sess:
(retained_multiclass_scores_,
expected_retained_multiclass_scores_) = sess.run([
retained_multiclass_scores,
self.expectedMultiClassScoresAfterThresholding()
])
self.assertAllClose(retained_multiclass_scores_,
expected_retained_multiclass_scores_)
def testRetainBoxesAboveThresholdWithMasks(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
masks = self.createTestMasks()
_, _, _, retained_masks = preprocessor.retain_boxes_above_threshold(
boxes, labels, weights, masks, threshold=0.6)
with self.test_session() as sess:
retained_masks_, expected_retained_masks_ = sess.run([
retained_masks, self.expectedMasksAfterThresholding()])
self.assertAllClose(
retained_masks_, expected_retained_masks_)
def testRetainBoxesAboveThresholdWithKeypoints(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
keypoints = self.createTestKeypoints()
(_, _, _, retained_keypoints) = preprocessor.retain_boxes_above_threshold(
boxes, labels, weights, keypoints=keypoints, threshold=0.6)
with self.test_session() as sess:
(retained_keypoints_,
expected_retained_keypoints_) = sess.run([
retained_keypoints,
self.expectedKeypointsAfterThresholding()])
self.assertAllClose(
retained_keypoints_, expected_retained_keypoints_)
def testFlipBoxesLeftRight(self):
boxes = self.createTestBoxes()
flipped_boxes = preprocessor._flip_boxes_left_right(boxes)
expected_boxes = self.expectedBoxesAfterLeftRightFlip()
with self.test_session() as sess:
flipped_boxes, expected_boxes = sess.run([flipped_boxes, expected_boxes])
self.assertAllEqual(flipped_boxes.flatten(), expected_boxes.flatten())
def testFlipBoxesUpDown(self):
boxes = self.createTestBoxes()
flipped_boxes = preprocessor._flip_boxes_up_down(boxes)
expected_boxes = self.expectedBoxesAfterUpDownFlip()
with self.test_session() as sess:
flipped_boxes, expected_boxes = sess.run([flipped_boxes, expected_boxes])
self.assertAllEqual(flipped_boxes.flatten(), expected_boxes.flatten())
def testRot90Boxes(self):
boxes = self.createTestBoxes()
rotated_boxes = preprocessor._rot90_boxes(boxes)
expected_boxes = self.expectedBoxesAfterRot90()
with self.test_session() as sess:
rotated_boxes, expected_boxes = sess.run([rotated_boxes, expected_boxes])
self.assertAllEqual(rotated_boxes.flatten(), expected_boxes.flatten())
def testFlipMasksLeftRight(self):
test_mask = self.createTestMasks()
flipped_mask = preprocessor._flip_masks_left_right(test_mask)
expected_mask = self.expectedMasksAfterLeftRightFlip()
with self.test_session() as sess:
flipped_mask, expected_mask = sess.run([flipped_mask, expected_mask])
self.assertAllEqual(flipped_mask.flatten(), expected_mask.flatten())
def testFlipMasksUpDown(self):
test_mask = self.createTestMasks()
flipped_mask = preprocessor._flip_masks_up_down(test_mask)
expected_mask = self.expectedMasksAfterUpDownFlip()
with self.test_session() as sess:
flipped_mask, expected_mask = sess.run([flipped_mask, expected_mask])
self.assertAllEqual(flipped_mask.flatten(), expected_mask.flatten())
def testRot90Masks(self):
test_mask = self.createTestMasks()
rotated_mask = preprocessor._rot90_masks(test_mask)
expected_mask = self.expectedMasksAfterRot90()
with self.test_session() as sess:
rotated_mask, expected_mask = sess.run([rotated_mask, expected_mask])
self.assertAllEqual(rotated_mask.flatten(), expected_mask.flatten())
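
  # The helper below runs the same preprocess_options `num_runs` times while
  # sharing a single PreprocessorCache, so every run should replay the same
  # random decisions. The fetched tensors are flattened into out_array with
  # `num_outputs` entries per run, and the final loop asserts that run i
  # matches run i-1 by comparing out_array[i] with out_array[i - num_outputs].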
def _testPreprocessorCache(self,
preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False,
num_runs=4):
cache = preprocessor_cache.PreprocessorCache()
images = self.createTestImages()
boxes = self.createTestBoxes()
weights = self.createTestGroundtruthWeights()
classes = self.createTestLabels()
masks = self.createTestMasks()
keypoints = self.createTestKeypoints()
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=test_masks, include_keypoints=test_keypoints)
out = []
for i in range(num_runs):
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_weights: weights
}
num_outputs = 1
if test_boxes:
tensor_dict[fields.InputDataFields.groundtruth_boxes] = boxes
tensor_dict[fields.InputDataFields.groundtruth_classes] = classes
num_outputs += 1
if test_masks:
tensor_dict[fields.InputDataFields.groundtruth_instance_masks] = masks
num_outputs += 1
if test_keypoints:
tensor_dict[fields.InputDataFields.groundtruth_keypoints] = keypoints
num_outputs += 1
out.append(preprocessor.preprocess(
tensor_dict, preprocess_options, preprocessor_arg_map, cache))
with self.test_session() as sess:
to_run = []
for i in range(num_runs):
to_run.append(out[i][fields.InputDataFields.image])
if test_boxes:
to_run.append(out[i][fields.InputDataFields.groundtruth_boxes])
if test_masks:
to_run.append(
out[i][fields.InputDataFields.groundtruth_instance_masks])
if test_keypoints:
to_run.append(out[i][fields.InputDataFields.groundtruth_keypoints])
out_array = sess.run(to_run)
for i in range(num_outputs, len(out_array)):
self.assertAllClose(out_array[i], out_array[i - num_outputs])
def testRandomHorizontalFlip(self):
preprocess_options = [(preprocessor.random_horizontal_flip, {})]
images = self.expectedImagesAfterNormalization()
boxes = self.createTestBoxes()
tensor_dict = {fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes}
images_expected1 = self.expectedImagesAfterLeftRightFlip()
boxes_expected1 = self.expectedBoxesAfterLeftRightFlip()
images_expected2 = images
boxes_expected2 = boxes
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
boxes_diff1 = tf.squared_difference(boxes, boxes_expected1)
boxes_diff2 = tf.squared_difference(boxes, boxes_expected2)
boxes_diff = tf.multiply(boxes_diff1, boxes_diff2)
boxes_diff_expected = tf.zeros_like(boxes_diff)
images_diff1 = tf.squared_difference(images, images_expected1)
images_diff2 = tf.squared_difference(images, images_expected2)
images_diff = tf.multiply(images_diff1, images_diff2)
images_diff_expected = tf.zeros_like(images_diff)
with self.test_session() as sess:
(images_diff_, images_diff_expected_, boxes_diff_,
boxes_diff_expected_) = sess.run([images_diff, images_diff_expected,
boxes_diff, boxes_diff_expected])
self.assertAllClose(boxes_diff_, boxes_diff_expected_)
self.assertAllClose(images_diff_, images_diff_expected_)
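
  # Because random_horizontal_flip either flips or leaves the input untouched,
  # the test above cannot compare against a single expected tensor. Instead it
  # multiplies the squared differences against both candidates (flipped and
  # original); the product is zero exactly when each element agrees with at
  # least one of the two, which holds for either outcome of the random flip.
  # The same trick is reused by the vertical-flip and rotation tests below.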
def testRandomHorizontalFlipWithEmptyBoxes(self):
preprocess_options = [(preprocessor.random_horizontal_flip, {})]
images = self.expectedImagesAfterNormalization()
boxes = self.createEmptyTestBoxes()
tensor_dict = {fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes}
images_expected1 = self.expectedImagesAfterLeftRightFlip()
boxes_expected = self.createEmptyTestBoxes()
images_expected2 = images
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
images_diff1 = tf.squared_difference(images, images_expected1)
images_diff2 = tf.squared_difference(images, images_expected2)
images_diff = tf.multiply(images_diff1, images_diff2)
images_diff_expected = tf.zeros_like(images_diff)
with self.test_session() as sess:
(images_diff_, images_diff_expected_, boxes_,
boxes_expected_) = sess.run([images_diff, images_diff_expected, boxes,
boxes_expected])
self.assertAllClose(boxes_, boxes_expected_)
self.assertAllClose(images_diff_, images_diff_expected_)
def testRandomHorizontalFlipWithCache(self):
keypoint_flip_permutation = self.createKeypointFlipPermutation()
preprocess_options = [
(preprocessor.random_horizontal_flip,
{'keypoint_flip_permutation': keypoint_flip_permutation})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRunRandomHorizontalFlipWithMaskAndKeypoints(self):
image_height = 3
image_width = 3
images = tf.random_uniform([1, image_height, image_width, 3])
boxes = self.createTestBoxes()
masks = self.createTestMasks()
keypoints = self.createTestKeypoints()
keypoint_flip_permutation = self.createKeypointFlipPermutation()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_instance_masks: masks,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocess_options = [
(preprocessor.random_horizontal_flip,
{'keypoint_flip_permutation': keypoint_flip_permutation})]
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=True, include_keypoints=True)
tensor_dict = preprocessor.preprocess(
tensor_dict, preprocess_options, func_arg_map=preprocessor_arg_map)
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
masks = tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
keypoints = tensor_dict[fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
boxes, masks, keypoints = sess.run([boxes, masks, keypoints])
      self.assertIsNotNone(boxes)
      self.assertIsNotNone(masks)
      self.assertIsNotNone(keypoints)
def testRandomVerticalFlip(self):
preprocess_options = [(preprocessor.random_vertical_flip, {})]
images = self.expectedImagesAfterNormalization()
boxes = self.createTestBoxes()
tensor_dict = {fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes}
images_expected1 = self.expectedImagesAfterUpDownFlip()
boxes_expected1 = self.expectedBoxesAfterUpDownFlip()
images_expected2 = images
boxes_expected2 = boxes
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
boxes_diff1 = tf.squared_difference(boxes, boxes_expected1)
boxes_diff2 = tf.squared_difference(boxes, boxes_expected2)
boxes_diff = tf.multiply(boxes_diff1, boxes_diff2)
boxes_diff_expected = tf.zeros_like(boxes_diff)
images_diff1 = tf.squared_difference(images, images_expected1)
images_diff2 = tf.squared_difference(images, images_expected2)
images_diff = tf.multiply(images_diff1, images_diff2)
images_diff_expected = tf.zeros_like(images_diff)
with self.test_session() as sess:
(images_diff_, images_diff_expected_, boxes_diff_,
boxes_diff_expected_) = sess.run([images_diff, images_diff_expected,
boxes_diff, boxes_diff_expected])
self.assertAllClose(boxes_diff_, boxes_diff_expected_)
self.assertAllClose(images_diff_, images_diff_expected_)
def testRandomVerticalFlipWithEmptyBoxes(self):
preprocess_options = [(preprocessor.random_vertical_flip, {})]
images = self.expectedImagesAfterNormalization()
boxes = self.createEmptyTestBoxes()
tensor_dict = {fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes}
images_expected1 = self.expectedImagesAfterUpDownFlip()
boxes_expected = self.createEmptyTestBoxes()
images_expected2 = images
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
images_diff1 = tf.squared_difference(images, images_expected1)
images_diff2 = tf.squared_difference(images, images_expected2)
images_diff = tf.multiply(images_diff1, images_diff2)
images_diff_expected = tf.zeros_like(images_diff)
with self.test_session() as sess:
(images_diff_, images_diff_expected_, boxes_,
boxes_expected_) = sess.run([images_diff, images_diff_expected, boxes,
boxes_expected])
self.assertAllClose(boxes_, boxes_expected_)
self.assertAllClose(images_diff_, images_diff_expected_)
def testRandomVerticalFlipWithCache(self):
keypoint_flip_permutation = self.createKeypointFlipPermutation()
preprocess_options = [
(preprocessor.random_vertical_flip,
{'keypoint_flip_permutation': keypoint_flip_permutation})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRunRandomVerticalFlipWithMaskAndKeypoints(self):
image_height = 3
image_width = 3
images = tf.random_uniform([1, image_height, image_width, 3])
boxes = self.createTestBoxes()
masks = self.createTestMasks()
keypoints = self.createTestKeypoints()
keypoint_flip_permutation = self.createKeypointFlipPermutation()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_instance_masks: masks,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocess_options = [
(preprocessor.random_vertical_flip,
{'keypoint_flip_permutation': keypoint_flip_permutation})]
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=True, include_keypoints=True)
tensor_dict = preprocessor.preprocess(
tensor_dict, preprocess_options, func_arg_map=preprocessor_arg_map)
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
masks = tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
keypoints = tensor_dict[fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
boxes, masks, keypoints = sess.run([boxes, masks, keypoints])
      self.assertIsNotNone(boxes)
      self.assertIsNotNone(masks)
      self.assertIsNotNone(keypoints)
def testRandomRotation90(self):
preprocess_options = [(preprocessor.random_rotation90, {})]
images = self.expectedImagesAfterNormalization()
boxes = self.createTestBoxes()
tensor_dict = {fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes}
images_expected1 = self.expectedImagesAfterRot90()
boxes_expected1 = self.expectedBoxesAfterRot90()
images_expected2 = images
boxes_expected2 = boxes
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
boxes_diff1 = tf.squared_difference(boxes, boxes_expected1)
boxes_diff2 = tf.squared_difference(boxes, boxes_expected2)
boxes_diff = tf.multiply(boxes_diff1, boxes_diff2)
boxes_diff_expected = tf.zeros_like(boxes_diff)
images_diff1 = tf.squared_difference(images, images_expected1)
images_diff2 = tf.squared_difference(images, images_expected2)
images_diff = tf.multiply(images_diff1, images_diff2)
images_diff_expected = tf.zeros_like(images_diff)
with self.test_session() as sess:
(images_diff_, images_diff_expected_, boxes_diff_,
boxes_diff_expected_) = sess.run([images_diff, images_diff_expected,
boxes_diff, boxes_diff_expected])
self.assertAllClose(boxes_diff_, boxes_diff_expected_)
self.assertAllClose(images_diff_, images_diff_expected_)
def testRandomRotation90WithEmptyBoxes(self):
preprocess_options = [(preprocessor.random_rotation90, {})]
images = self.expectedImagesAfterNormalization()
boxes = self.createEmptyTestBoxes()
tensor_dict = {fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes}
images_expected1 = self.expectedImagesAfterRot90()
boxes_expected = self.createEmptyTestBoxes()
images_expected2 = images
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images = tensor_dict[fields.InputDataFields.image]
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
images_diff1 = tf.squared_difference(images, images_expected1)
images_diff2 = tf.squared_difference(images, images_expected2)
images_diff = tf.multiply(images_diff1, images_diff2)
images_diff_expected = tf.zeros_like(images_diff)
with self.test_session() as sess:
(images_diff_, images_diff_expected_, boxes_,
boxes_expected_) = sess.run([images_diff, images_diff_expected, boxes,
boxes_expected])
self.assertAllClose(boxes_, boxes_expected_)
self.assertAllClose(images_diff_, images_diff_expected_)
def testRandomRotation90WithCache(self):
preprocess_options = [(preprocessor.random_rotation90, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRunRandomRotation90WithMaskAndKeypoints(self):
preprocess_options = [(preprocessor.random_rotation90, {})]
image_height = 3
image_width = 3
images = tf.random_uniform([1, image_height, image_width, 3])
boxes = self.createTestBoxes()
masks = self.createTestMasks()
keypoints = self.createTestKeypoints()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_instance_masks: masks,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=True, include_keypoints=True)
tensor_dict = preprocessor.preprocess(
tensor_dict, preprocess_options, func_arg_map=preprocessor_arg_map)
boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
masks = tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
keypoints = tensor_dict[fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
boxes, masks, keypoints = sess.run([boxes, masks, keypoints])
      self.assertIsNotNone(boxes)
      self.assertIsNotNone(masks)
      self.assertIsNotNone(keypoints)
def testRandomPixelValueScale(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_pixel_value_scale, {}))
images = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images_min = tf.to_float(images) * 0.9 / 255.0
images_max = tf.to_float(images) * 1.1 / 255.0
images = tensor_dict[fields.InputDataFields.image]
values_greater = tf.greater_equal(images, images_min)
values_less = tf.less_equal(images, images_max)
values_true = tf.fill([1, 4, 4, 3], True)
with self.test_session() as sess:
(values_greater_, values_less_, values_true_) = sess.run(
[values_greater, values_less, values_true])
self.assertAllClose(values_greater_, values_true_)
self.assertAllClose(values_less_, values_true_)
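
  # The bounds above (0.9x and 1.1x of the normalized pixel values) mirror
  # what random_pixel_value_scale is expected to do with its default settings;
  # the exact minval/maxval defaults are inferred from this assertion rather
  # than restated from the library.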
def testRandomPixelValueScaleWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_pixel_value_scale, {}))
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=False,
test_keypoints=False)
def testRandomImageScale(self):
preprocess_options = [(preprocessor.random_image_scale, {})]
images_original = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images_original}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images_scaled = tensor_dict[fields.InputDataFields.image]
images_original_shape = tf.shape(images_original)
images_scaled_shape = tf.shape(images_scaled)
with self.test_session() as sess:
(images_original_shape_, images_scaled_shape_) = sess.run(
[images_original_shape, images_scaled_shape])
self.assertTrue(
images_original_shape_[1] * 0.5 <= images_scaled_shape_[1])
self.assertTrue(
images_original_shape_[1] * 2.0 >= images_scaled_shape_[1])
self.assertTrue(
images_original_shape_[2] * 0.5 <= images_scaled_shape_[2])
self.assertTrue(
images_original_shape_[2] * 2.0 >= images_scaled_shape_[2])
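
  # Similarly, the 0.5x-2.0x bounds asserted above reflect the assumed default
  # scale range of random_image_scale; only the output shape is checked here,
  # not the pixel content.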
def testRandomImageScaleWithCache(self):
preprocess_options = [(preprocessor.random_image_scale, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False)
def testRandomRGBtoGray(self):
preprocess_options = [(preprocessor.random_rgb_to_gray, {})]
images_original = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images_original}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
images_gray = tensor_dict[fields.InputDataFields.image]
images_gray_r, images_gray_g, images_gray_b = tf.split(
value=images_gray, num_or_size_splits=3, axis=3)
images_r, images_g, images_b = tf.split(
value=images_original, num_or_size_splits=3, axis=3)
images_r_diff1 = tf.squared_difference(tf.to_float(images_r),
tf.to_float(images_gray_r))
images_r_diff2 = tf.squared_difference(tf.to_float(images_gray_r),
tf.to_float(images_gray_g))
images_r_diff = tf.multiply(images_r_diff1, images_r_diff2)
images_g_diff1 = tf.squared_difference(tf.to_float(images_g),
tf.to_float(images_gray_g))
images_g_diff2 = tf.squared_difference(tf.to_float(images_gray_g),
tf.to_float(images_gray_b))
images_g_diff = tf.multiply(images_g_diff1, images_g_diff2)
images_b_diff1 = tf.squared_difference(tf.to_float(images_b),
tf.to_float(images_gray_b))
images_b_diff2 = tf.squared_difference(tf.to_float(images_gray_b),
tf.to_float(images_gray_r))
images_b_diff = tf.multiply(images_b_diff1, images_b_diff2)
image_zero1 = tf.constant(0, dtype=tf.float32, shape=[1, 4, 4, 1])
with self.test_session() as sess:
(images_r_diff_, images_g_diff_, images_b_diff_, image_zero1_) = sess.run(
[images_r_diff, images_g_diff, images_b_diff, image_zero1])
self.assertAllClose(images_r_diff_, image_zero1_)
self.assertAllClose(images_g_diff_, image_zero1_)
self.assertAllClose(images_b_diff_, image_zero1_)
def testRandomRGBtoGrayWithCache(self):
preprocess_options = [(
preprocessor.random_rgb_to_gray, {'probability': 0.5})]
self._testPreprocessorCache(preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False)
def testRandomAdjustBrightness(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_adjust_brightness, {}))
images_original = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images_original}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images_bright = tensor_dict[fields.InputDataFields.image]
image_original_shape = tf.shape(images_original)
image_bright_shape = tf.shape(images_bright)
with self.test_session() as sess:
(image_original_shape_, image_bright_shape_) = sess.run(
[image_original_shape, image_bright_shape])
self.assertAllEqual(image_original_shape_, image_bright_shape_)
def testRandomAdjustBrightnessWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_adjust_brightness, {}))
self._testPreprocessorCache(preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False)
def testRandomAdjustContrast(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_adjust_contrast, {}))
images_original = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images_original}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images_contrast = tensor_dict[fields.InputDataFields.image]
image_original_shape = tf.shape(images_original)
image_contrast_shape = tf.shape(images_contrast)
with self.test_session() as sess:
(image_original_shape_, image_contrast_shape_) = sess.run(
[image_original_shape, image_contrast_shape])
self.assertAllEqual(image_original_shape_, image_contrast_shape_)
def testRandomAdjustContrastWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_adjust_contrast, {}))
self._testPreprocessorCache(preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False)
def testRandomAdjustHue(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_adjust_hue, {}))
images_original = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images_original}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images_hue = tensor_dict[fields.InputDataFields.image]
image_original_shape = tf.shape(images_original)
image_hue_shape = tf.shape(images_hue)
with self.test_session() as sess:
(image_original_shape_, image_hue_shape_) = sess.run(
[image_original_shape, image_hue_shape])
self.assertAllEqual(image_original_shape_, image_hue_shape_)
def testRandomAdjustHueWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_adjust_hue, {}))
self._testPreprocessorCache(preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False)
def testRandomDistortColor(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_distort_color, {}))
images_original = self.createTestImages()
images_original_shape = tf.shape(images_original)
tensor_dict = {fields.InputDataFields.image: images_original}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images_distorted_color = tensor_dict[fields.InputDataFields.image]
images_distorted_color_shape = tf.shape(images_distorted_color)
with self.test_session() as sess:
(images_original_shape_, images_distorted_color_shape_) = sess.run(
[images_original_shape, images_distorted_color_shape])
self.assertAllEqual(images_original_shape_, images_distorted_color_shape_)
def testRandomDistortColorWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_distort_color, {}))
self._testPreprocessorCache(preprocess_options,
test_boxes=False,
test_masks=False,
test_keypoints=False)
def testRandomJitterBoxes(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.random_jitter_boxes, {}))
boxes = self.createTestBoxes()
boxes_shape = tf.shape(boxes)
tensor_dict = {fields.InputDataFields.groundtruth_boxes: boxes}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
distorted_boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
distorted_boxes_shape = tf.shape(distorted_boxes)
with self.test_session() as sess:
(boxes_shape_, distorted_boxes_shape_) = sess.run(
[boxes_shape, distorted_boxes_shape])
self.assertAllEqual(boxes_shape_, distorted_boxes_shape_)
def testRandomCropImage(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_crop_image, {}))
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
self.assertEqual(3, distorted_images.get_shape()[3])
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_) = sess.run([
boxes_rank, distorted_boxes_rank, images_rank, distorted_images_rank
])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
def testRandomCropImageWithCache(self):
preprocess_options = [(preprocessor.random_rgb_to_gray,
{'probability': 0.5}),
(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1,
}),
(preprocessor.random_crop_image, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=False,
test_keypoints=False)
def testRandomCropImageGrayscale(self):
preprocessing_options = [(preprocessor.rgb_to_gray, {}),
(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1,
}),
(preprocessor.random_crop_image, {})]
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
self.assertEqual(1, distorted_images.get_shape()[3])
with self.test_session() as sess:
session_results = sess.run([
boxes_rank, distorted_boxes_rank, images_rank, distorted_images_rank
])
(boxes_rank_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_) = session_results
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
def testRandomCropImageWithBoxOutOfImage(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_crop_image, {}))
images = self.createTestImages()
boxes = self.createTestBoxesOutOfImage()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_) = sess.run(
[boxes_rank, distorted_boxes_rank, images_rank,
distorted_images_rank])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
def testRandomCropImageWithRandomCoefOne(self):
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
})]
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights
}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images = tensor_dict[fields.InputDataFields.image]
preprocessing_options = [(preprocessor.random_crop_image, {
'random_coef': 1.0
})]
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_weights = distorted_tensor_dict[
fields.InputDataFields.groundtruth_weights]
boxes_shape = tf.shape(boxes)
distorted_boxes_shape = tf.shape(distorted_boxes)
images_shape = tf.shape(images)
distorted_images_shape = tf.shape(distorted_images)
with self.test_session() as sess:
(boxes_shape_, distorted_boxes_shape_, images_shape_,
distorted_images_shape_, images_, distorted_images_,
boxes_, distorted_boxes_, labels_, distorted_labels_,
weights_, distorted_weights_) = sess.run(
[boxes_shape, distorted_boxes_shape, images_shape,
distorted_images_shape, images, distorted_images,
boxes, distorted_boxes, labels, distorted_labels,
weights, distorted_weights])
self.assertAllEqual(boxes_shape_, distorted_boxes_shape_)
self.assertAllEqual(images_shape_, distorted_images_shape_)
self.assertAllClose(images_, distorted_images_)
self.assertAllClose(boxes_, distorted_boxes_)
self.assertAllEqual(labels_, distorted_labels_)
self.assertAllEqual(weights_, distorted_weights_)
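
  # random_coef appears to control the probability of skipping the crop
  # entirely; with random_coef=1.0 the op is expected to be a no-op, which is
  # why the test above asserts that images, boxes, labels and weights all pass
  # through unchanged.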
def testRandomCropWithMockSampleDistortedBoundingBox(self):
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
})]
images = self.createColorfulTestImage()
boxes = tf.constant([[0.1, 0.1, 0.8, 0.3],
[0.2, 0.4, 0.75, 0.75],
[0.3, 0.1, 0.4, 0.7]], dtype=tf.float32)
labels = tf.constant([1, 7, 11], dtype=tf.int32)
weights = tf.constant([1.0, 0.5, 0.6], dtype=tf.float32)
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images = tensor_dict[fields.InputDataFields.image]
preprocessing_options = [(preprocessor.random_crop_image, {})]
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box') as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (tf.constant(
[6, 143, 0], dtype=tf.int32), tf.constant(
[190, 237, -1], dtype=tf.int32), tf.constant(
[[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_weights = distorted_tensor_dict[
fields.InputDataFields.groundtruth_weights]
expected_boxes = tf.constant([[0.178947, 0.07173, 0.75789469, 0.66244733],
[0.28421, 0.0, 0.38947365, 0.57805908]],
dtype=tf.float32)
expected_labels = tf.constant([7, 11], dtype=tf.int32)
expected_weights = tf.constant([0.5, 0.6], dtype=tf.float32)
with self.test_session() as sess:
(distorted_boxes_, distorted_labels_, distorted_weights_,
expected_boxes_, expected_labels_, expected_weights_) = sess.run(
[distorted_boxes, distorted_labels, distorted_weights,
expected_boxes, expected_labels, expected_weights])
self.assertAllClose(distorted_boxes_, expected_boxes_)
self.assertAllEqual(distorted_labels_, expected_labels_)
self.assertAllEqual(distorted_weights_, expected_weights_)
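
  # The mocked sample_distorted_bounding_box fixes the crop window: begin
  # [6, 143, 0], size [190, 237, -1], i.e. normalized window
  # [0.03, 0.3575, 0.98, 0.95]. The first groundtruth box ([0.1, 0.1, 0.8, 0.3])
  # lies entirely to the left of that window, so it is dropped together with
  # label 1 and weight 1.0; the surviving boxes are re-normalized to the crop
  # and, by default, clipped to [0, 1]. Contrast testRandomCropWithoutClipBoxes
  # below, where clip_boxes=False lets a negative xmin through.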
def testRandomCropWithoutClipBoxes(self):
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
})]
images = self.createColorfulTestImage()
boxes = tf.constant([[0.1, 0.1, 0.8, 0.3],
[0.2, 0.4, 0.75, 0.75],
[0.3, 0.1, 0.4, 0.7]], dtype=tf.float32)
keypoints = tf.constant([
[[0.1, 0.1], [0.8, 0.3]],
[[0.2, 0.4], [0.75, 0.75]],
[[0.3, 0.1], [0.4, 0.7]],
], dtype=tf.float32)
labels = tf.constant([1, 7, 11], dtype=tf.int32)
weights = tf.constant([1.0, 0.5, 0.6], dtype=tf.float32)
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_keypoints: keypoints,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
preprocessing_options = [(preprocessor.random_crop_image, {
'clip_boxes': False,
})]
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box') as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (tf.constant(
[6, 143, 0], dtype=tf.int32), tf.constant(
[190, 237, -1], dtype=tf.int32), tf.constant(
[[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_keypoints=True)
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_keypoints = distorted_tensor_dict[
fields.InputDataFields.groundtruth_keypoints]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_weights = distorted_tensor_dict[
fields.InputDataFields.groundtruth_weights]
expected_boxes = tf.constant(
[[0.178947, 0.07173, 0.75789469, 0.66244733],
[0.28421, -0.434599, 0.38947365, 0.57805908]],
dtype=tf.float32)
expected_keypoints = tf.constant(
[[[0.178947, 0.07173], [0.75789469, 0.66244733]],
[[0.28421, -0.434599], [0.38947365, 0.57805908]]],
dtype=tf.float32)
expected_labels = tf.constant([7, 11], dtype=tf.int32)
expected_weights = tf.constant([0.5, 0.6], dtype=tf.float32)
with self.test_session() as sess:
(distorted_boxes_, distorted_keypoints_, distorted_labels_,
distorted_weights_, expected_boxes_, expected_keypoints_,
expected_labels_, expected_weights_) = sess.run(
[distorted_boxes, distorted_keypoints, distorted_labels,
distorted_weights, expected_boxes, expected_keypoints,
expected_labels, expected_weights])
self.assertAllClose(distorted_boxes_, expected_boxes_)
self.assertAllClose(distorted_keypoints_, expected_keypoints_)
self.assertAllEqual(distorted_labels_, expected_labels_)
self.assertAllEqual(distorted_weights_, expected_weights_)
def testRandomCropImageWithMultiClassScores(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_crop_image, {}))
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
multiclass_scores = self.createTestMultiClassScores()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.multiclass_scores: multiclass_scores
}
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_multiclass_scores = distorted_tensor_dict[
fields.InputDataFields.multiclass_scores]
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
multiclass_scores_rank = tf.rank(multiclass_scores)
distorted_multiclass_scores_rank = tf.rank(distorted_multiclass_scores)
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_, multiclass_scores_rank_,
distorted_multiclass_scores_rank_,
distorted_multiclass_scores_) = sess.run([
boxes_rank, distorted_boxes, distorted_boxes_rank, images_rank,
distorted_images_rank, multiclass_scores_rank,
distorted_multiclass_scores_rank, distorted_multiclass_scores
])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
self.assertAllEqual(multiclass_scores_rank_,
distorted_multiclass_scores_rank_)
self.assertAllEqual(distorted_boxes_.shape[0],
distorted_multiclass_scores_.shape[0])
def testStrictRandomCropImageWithGroundtruthWeights(self):
image = self.createColorfulTestImage()[0]
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box'
) as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (
tf.constant([6, 143, 0], dtype=tf.int32),
tf.constant([190, 237, -1], dtype=tf.int32),
tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
new_image, new_boxes, new_labels, new_groundtruth_weights = (
preprocessor._strict_random_crop_image(
image, boxes, labels, weights))
with self.test_session() as sess:
new_image, new_boxes, new_labels, new_groundtruth_weights = (
sess.run(
[new_image, new_boxes, new_labels, new_groundtruth_weights])
)
expected_boxes = np.array(
[[0.0, 0.0, 0.75789469, 1.0],
[0.23157893, 0.24050637, 0.75789469, 1.0]], dtype=np.float32)
self.assertAllEqual(new_image.shape, [190, 237, 3])
self.assertAllEqual(new_groundtruth_weights, [1.0, 0.5])
self.assertAllClose(
new_boxes.flatten(), expected_boxes.flatten())
def testStrictRandomCropImageWithMasks(self):
image = self.createColorfulTestImage()[0]
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box'
) as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (
tf.constant([6, 143, 0], dtype=tf.int32),
tf.constant([190, 237, -1], dtype=tf.int32),
tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
new_image, new_boxes, new_labels, new_weights, new_masks = (
preprocessor._strict_random_crop_image(
image, boxes, labels, weights, masks=masks))
with self.test_session() as sess:
new_image, new_boxes, new_labels, new_weights, new_masks = sess.run(
[new_image, new_boxes, new_labels, new_weights, new_masks])
expected_boxes = np.array(
[[0.0, 0.0, 0.75789469, 1.0],
[0.23157893, 0.24050637, 0.75789469, 1.0]], dtype=np.float32)
self.assertAllEqual(new_image.shape, [190, 237, 3])
self.assertAllEqual(new_masks.shape, [2, 190, 237])
self.assertAllClose(
new_boxes.flatten(), expected_boxes.flatten())
def testStrictRandomCropImageWithKeypoints(self):
image = self.createColorfulTestImage()[0]
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
keypoints = self.createTestKeypoints()
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box'
) as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (
tf.constant([6, 143, 0], dtype=tf.int32),
tf.constant([190, 237, -1], dtype=tf.int32),
tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
new_image, new_boxes, new_labels, new_weights, new_keypoints = (
preprocessor._strict_random_crop_image(
image, boxes, labels, weights, keypoints=keypoints))
with self.test_session() as sess:
new_image, new_boxes, new_labels, new_weights, new_keypoints = sess.run(
[new_image, new_boxes, new_labels, new_weights, new_keypoints])
expected_boxes = np.array([
[0.0, 0.0, 0.75789469, 1.0],
[0.23157893, 0.24050637, 0.75789469, 1.0],], dtype=np.float32)
expected_keypoints = np.array([
[[np.nan, np.nan],
[np.nan, np.nan],
[np.nan, np.nan]],
[[0.38947368, 0.07173],
[0.49473682, 0.24050637],
[0.60000002, 0.40928277]]
], dtype=np.float32)
self.assertAllEqual(new_image.shape, [190, 237, 3])
self.assertAllClose(
new_boxes.flatten(), expected_boxes.flatten())
self.assertAllClose(
new_keypoints.flatten(), expected_keypoints.flatten())
def testRunRandomCropImageWithMasks(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.groundtruth_instance_masks: masks,
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=True)
preprocessing_options = [(preprocessor.random_crop_image, {})]
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box'
) as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (
tf.constant([6, 143, 0], dtype=tf.int32),
tf.constant([190, 237, -1], dtype=tf.int32),
tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_masks = distorted_tensor_dict[
fields.InputDataFields.groundtruth_instance_masks]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_masks_) = sess.run(
[distorted_image, distorted_boxes, distorted_labels,
distorted_masks])
expected_boxes = np.array([
[0.0, 0.0, 0.75789469, 1.0],
[0.23157893, 0.24050637, 0.75789469, 1.0],
], dtype=np.float32)
self.assertAllEqual(distorted_image_.shape, [1, 190, 237, 3])
self.assertAllEqual(distorted_masks_.shape, [2, 190, 237])
self.assertAllEqual(distorted_labels_, [1, 2])
self.assertAllClose(
distorted_boxes_.flatten(), expected_boxes.flatten())
def testRunRandomCropImageWithKeypointsInsideCrop(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
keypoints = self.createTestKeypointsInsideCrop()
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_keypoints: keypoints,
fields.InputDataFields.groundtruth_weights: weights
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_keypoints=True)
preprocessing_options = [(preprocessor.random_crop_image, {})]
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box'
) as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (
tf.constant([6, 143, 0], dtype=tf.int32),
tf.constant([190, 237, -1], dtype=tf.int32),
tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_keypoints = distorted_tensor_dict[
fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_keypoints_) = sess.run(
[distorted_image, distorted_boxes, distorted_labels,
distorted_keypoints])
expected_boxes = np.array([
[0.0, 0.0, 0.75789469, 1.0],
[0.23157893, 0.24050637, 0.75789469, 1.0],
], dtype=np.float32)
expected_keypoints = np.array([
[[0.38947368, 0.07173],
[0.49473682, 0.24050637],
[0.60000002, 0.40928277]],
[[0.38947368, 0.07173],
[0.49473682, 0.24050637],
[0.60000002, 0.40928277]]
])
self.assertAllEqual(distorted_image_.shape, [1, 190, 237, 3])
self.assertAllEqual(distorted_labels_, [1, 2])
self.assertAllClose(
distorted_boxes_.flatten(), expected_boxes.flatten())
self.assertAllClose(
distorted_keypoints_.flatten(), expected_keypoints.flatten())
def testRunRandomCropImageWithKeypointsOutsideCrop(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
keypoints = self.createTestKeypointsOutsideCrop()
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_keypoints=True)
preprocessing_options = [(preprocessor.random_crop_image, {})]
with mock.patch.object(
tf.image,
'sample_distorted_bounding_box'
) as mock_sample_distorted_bounding_box:
mock_sample_distorted_bounding_box.return_value = (
tf.constant([6, 143, 0], dtype=tf.int32),
tf.constant([190, 237, -1], dtype=tf.int32),
tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_keypoints = distorted_tensor_dict[
fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_keypoints_) = sess.run(
[distorted_image, distorted_boxes, distorted_labels,
distorted_keypoints])
expected_boxes = np.array([
[0.0, 0.0, 0.75789469, 1.0],
[0.23157893, 0.24050637, 0.75789469, 1.0],
], dtype=np.float32)
expected_keypoints = np.array([
[[np.nan, np.nan],
[np.nan, np.nan],
[np.nan, np.nan]],
[[np.nan, np.nan],
[np.nan, np.nan],
[np.nan, np.nan]],
])
self.assertAllEqual(distorted_image_.shape, [1, 190, 237, 3])
self.assertAllEqual(distorted_labels_, [1, 2])
self.assertAllClose(
distorted_boxes_.flatten(), expected_boxes.flatten())
self.assertAllClose(
distorted_keypoints_.flatten(), expected_keypoints.flatten())
def testRunRetainBoxesAboveThreshold(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
preprocessing_options = [
(preprocessor.retain_boxes_above_threshold, {'threshold': 0.6})
]
preprocessor_arg_map = preprocessor.get_default_func_arg_map()
retained_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
retained_boxes = retained_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
retained_labels = retained_tensor_dict[
fields.InputDataFields.groundtruth_classes]
retained_weights = retained_tensor_dict[
fields.InputDataFields.groundtruth_weights]
with self.test_session() as sess:
(retained_boxes_, retained_labels_,
retained_weights_, expected_retained_boxes_,
expected_retained_labels_, expected_retained_weights_) = sess.run(
[retained_boxes, retained_labels, retained_weights,
self.expectedBoxesAfterThresholding(),
self.expectedLabelsAfterThresholding(),
self.expectedLabelScoresAfterThresholding()])
self.assertAllClose(retained_boxes_, expected_retained_boxes_)
self.assertAllClose(retained_labels_, expected_retained_labels_)
self.assertAllClose(
retained_weights_, expected_retained_weights_)
def testRunRetainBoxesAboveThresholdWithMasks(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
masks = self.createTestMasks()
tensor_dict = {
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.groundtruth_instance_masks: masks
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_label_weights=True,
include_instance_masks=True)
preprocessing_options = [
(preprocessor.retain_boxes_above_threshold, {'threshold': 0.6})
]
retained_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
retained_masks = retained_tensor_dict[
fields.InputDataFields.groundtruth_instance_masks]
with self.test_session() as sess:
(retained_masks_, expected_masks_) = sess.run(
[retained_masks,
self.expectedMasksAfterThresholding()])
self.assertAllClose(retained_masks_, expected_masks_)
def testRunRetainBoxesAboveThresholdWithKeypoints(self):
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
keypoints = self.createTestKeypoints()
tensor_dict = {
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_keypoints=True)
preprocessing_options = [
(preprocessor.retain_boxes_above_threshold, {'threshold': 0.6})
]
retained_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
retained_keypoints = retained_tensor_dict[
fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
(retained_keypoints_, expected_keypoints_) = sess.run(
[retained_keypoints,
self.expectedKeypointsAfterThresholding()])
self.assertAllClose(retained_keypoints_, expected_keypoints_)
def testRandomCropToAspectRatioWithCache(self):
preprocess_options = [(preprocessor.random_crop_to_aspect_ratio, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=False,
test_keypoints=False)
def testRunRandomCropToAspectRatioWithMasks(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.groundtruth_instance_masks: masks
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=True)
preprocessing_options = [(preprocessor.random_crop_to_aspect_ratio, {})]
with mock.patch.object(preprocessor,
'_random_integer') as mock_random_integer:
mock_random_integer.return_value = tf.constant(0, dtype=tf.int32)
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_masks = distorted_tensor_dict[
fields.InputDataFields.groundtruth_instance_masks]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_masks_) = sess.run([
distorted_image, distorted_boxes, distorted_labels, distorted_masks
])
expected_boxes = np.array([0.0, 0.5, 0.75, 1.0], dtype=np.float32)
self.assertAllEqual(distorted_image_.shape, [1, 200, 200, 3])
self.assertAllEqual(distorted_labels_, [1])
self.assertAllClose(distorted_boxes_.flatten(),
expected_boxes.flatten())
self.assertAllEqual(distorted_masks_.shape, [1, 200, 200])
def testRunRandomCropToAspectRatioWithKeypoints(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
keypoints = self.createTestKeypoints()
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_keypoints=True)
preprocessing_options = [(preprocessor.random_crop_to_aspect_ratio, {})]
with mock.patch.object(preprocessor,
'_random_integer') as mock_random_integer:
mock_random_integer.return_value = tf.constant(0, dtype=tf.int32)
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_keypoints = distorted_tensor_dict[
fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_keypoints_) = sess.run([
distorted_image, distorted_boxes, distorted_labels,
distorted_keypoints
])
expected_boxes = np.array([0.0, 0.5, 0.75, 1.0], dtype=np.float32)
expected_keypoints = np.array(
[[0.1, 0.2], [0.2, 0.4], [0.3, 0.6]], dtype=np.float32)
self.assertAllEqual(distorted_image_.shape, [1, 200, 200, 3])
self.assertAllEqual(distorted_labels_, [1])
self.assertAllClose(distorted_boxes_.flatten(),
expected_boxes.flatten())
self.assertAllClose(distorted_keypoints_.flatten(),
expected_keypoints.flatten())
def testRandomPadToAspectRatioWithCache(self):
preprocess_options = [(preprocessor.random_pad_to_aspect_ratio, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRunRandomPadToAspectRatioWithMinMaxPaddedSizeRatios(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map()
preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio,
{'min_padded_size_ratio': (4.0, 4.0),
'max_padded_size_ratio': (4.0, 4.0)})]
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
with self.test_session() as sess:
distorted_image_, distorted_boxes_, distorted_labels_ = sess.run([
distorted_image, distorted_boxes, distorted_labels])
expected_boxes = np.array(
[[0.0, 0.125, 0.1875, 0.5], [0.0625, 0.25, 0.1875, 0.5]],
dtype=np.float32)
self.assertAllEqual(distorted_image_.shape, [1, 800, 800, 3])
self.assertAllEqual(distorted_labels_, [1, 2])
self.assertAllClose(distorted_boxes_.flatten(),
expected_boxes.flatten())
def testRunRandomPadToAspectRatioWithMasks(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_instance_masks: masks
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_instance_masks=True)
preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio, {})]
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_masks = distorted_tensor_dict[
fields.InputDataFields.groundtruth_instance_masks]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_masks_) = sess.run([
distorted_image, distorted_boxes, distorted_labels, distorted_masks
])
expected_boxes = np.array(
[[0.0, 0.25, 0.375, 1.0], [0.125, 0.5, 0.375, 1.0]], dtype=np.float32)
self.assertAllEqual(distorted_image_.shape, [1, 400, 400, 3])
self.assertAllEqual(distorted_labels_, [1, 2])
self.assertAllClose(distorted_boxes_.flatten(),
expected_boxes.flatten())
self.assertAllEqual(distorted_masks_.shape, [2, 400, 400])
def testRunRandomPadToAspectRatioWithKeypoints(self):
image = self.createColorfulTestImage()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
keypoints = self.createTestKeypoints()
tensor_dict = {
fields.InputDataFields.image: image,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_keypoints: keypoints
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_keypoints=True)
preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio, {})]
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_labels = distorted_tensor_dict[
fields.InputDataFields.groundtruth_classes]
distorted_keypoints = distorted_tensor_dict[
fields.InputDataFields.groundtruth_keypoints]
with self.test_session() as sess:
(distorted_image_, distorted_boxes_, distorted_labels_,
distorted_keypoints_) = sess.run([
distorted_image, distorted_boxes, distorted_labels,
distorted_keypoints
])
expected_boxes = np.array(
[[0.0, 0.25, 0.375, 1.0], [0.125, 0.5, 0.375, 1.0]], dtype=np.float32)
expected_keypoints = np.array([
[[0.05, 0.1], [0.1, 0.2], [0.15, 0.3]],
[[0.2, 0.4], [0.25, 0.5], [0.3, 0.6]],
], dtype=np.float32)
self.assertAllEqual(distorted_image_.shape, [1, 400, 400, 3])
self.assertAllEqual(distorted_labels_, [1, 2])
self.assertAllClose(distorted_boxes_.flatten(),
expected_boxes.flatten())
self.assertAllClose(distorted_keypoints_.flatten(),
expected_keypoints.flatten())
def testRandomPadImageWithCache(self):
preprocess_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1,}), (preprocessor.random_pad_image, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRandomPadImage(self):
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
})]
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images = tensor_dict[fields.InputDataFields.image]
preprocessing_options = [(preprocessor.random_pad_image, {})]
padded_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
padded_images = padded_tensor_dict[fields.InputDataFields.image]
padded_boxes = padded_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_shape = tf.shape(boxes)
padded_boxes_shape = tf.shape(padded_boxes)
images_shape = tf.shape(images)
padded_images_shape = tf.shape(padded_images)
with self.test_session() as sess:
(boxes_shape_, padded_boxes_shape_, images_shape_,
padded_images_shape_, boxes_, padded_boxes_) = sess.run(
[boxes_shape, padded_boxes_shape, images_shape,
padded_images_shape, boxes, padded_boxes])
self.assertAllEqual(boxes_shape_, padded_boxes_shape_)
      self.assertTrue(images_shape_[1] >= padded_images_shape_[1] * 0.5)
      self.assertTrue(images_shape_[2] >= padded_images_shape_[2] * 0.5)
      self.assertTrue(images_shape_[1] <= padded_images_shape_[1])
      self.assertTrue(images_shape_[2] <= padded_images_shape_[2])
self.assertTrue(np.all((boxes_[:, 2] - boxes_[:, 0]) >= (
padded_boxes_[:, 2] - padded_boxes_[:, 0])))
self.assertTrue(np.all((boxes_[:, 3] - boxes_[:, 1]) >= (
padded_boxes_[:, 3] - padded_boxes_[:, 1])))
def testRandomCropPadImageWithCache(self):
preprocess_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1,}), (preprocessor.random_crop_pad_image, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRandomCropPadImageWithRandomCoefOne(self):
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
})]
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
images = tensor_dict[fields.InputDataFields.image]
preprocessing_options = [(preprocessor.random_crop_pad_image, {
'random_coef': 1.0
})]
padded_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
padded_images = padded_tensor_dict[fields.InputDataFields.image]
padded_boxes = padded_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_shape = tf.shape(boxes)
padded_boxes_shape = tf.shape(padded_boxes)
images_shape = tf.shape(images)
padded_images_shape = tf.shape(padded_images)
with self.test_session() as sess:
(boxes_shape_, padded_boxes_shape_, images_shape_,
padded_images_shape_, boxes_, padded_boxes_) = sess.run(
[boxes_shape, padded_boxes_shape, images_shape,
padded_images_shape, boxes, padded_boxes])
self.assertAllEqual(boxes_shape_, padded_boxes_shape_)
      self.assertTrue(images_shape_[1] >= padded_images_shape_[1] * 0.5)
      self.assertTrue(images_shape_[2] >= padded_images_shape_[2] * 0.5)
      self.assertTrue(images_shape_[1] <= padded_images_shape_[1])
      self.assertTrue(images_shape_[2] <= padded_images_shape_[2])
self.assertTrue(np.all((boxes_[:, 2] - boxes_[:, 0]) >= (
padded_boxes_[:, 2] - padded_boxes_[:, 0])))
self.assertTrue(np.all((boxes_[:, 3] - boxes_[:, 1]) >= (
padded_boxes_[:, 3] - padded_boxes_[:, 1])))
def testRandomCropToAspectRatio(self):
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
tensor_dict = preprocessor.preprocess(tensor_dict, [])
images = tensor_dict[fields.InputDataFields.image]
preprocessing_options = [(preprocessor.random_crop_to_aspect_ratio, {
'aspect_ratio': 2.0
})]
cropped_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
cropped_images = cropped_tensor_dict[fields.InputDataFields.image]
cropped_boxes = cropped_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_shape = tf.shape(boxes)
cropped_boxes_shape = tf.shape(cropped_boxes)
images_shape = tf.shape(images)
cropped_images_shape = tf.shape(cropped_images)
with self.test_session() as sess:
(boxes_shape_, cropped_boxes_shape_, images_shape_,
cropped_images_shape_) = sess.run([
boxes_shape, cropped_boxes_shape, images_shape, cropped_images_shape
])
self.assertAllEqual(boxes_shape_, cropped_boxes_shape_)
self.assertEqual(images_shape_[1], cropped_images_shape_[1] * 2)
self.assertEqual(images_shape_[2], cropped_images_shape_[2])
def testRandomPadToAspectRatio(self):
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
}
tensor_dict = preprocessor.preprocess(tensor_dict, [])
images = tensor_dict[fields.InputDataFields.image]
preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio, {
'aspect_ratio': 2.0
})]
padded_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
padded_images = padded_tensor_dict[fields.InputDataFields.image]
padded_boxes = padded_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
boxes_shape = tf.shape(boxes)
padded_boxes_shape = tf.shape(padded_boxes)
images_shape = tf.shape(images)
padded_images_shape = tf.shape(padded_images)
with self.test_session() as sess:
(boxes_shape_, padded_boxes_shape_, images_shape_,
padded_images_shape_) = sess.run([
boxes_shape, padded_boxes_shape, images_shape, padded_images_shape
])
self.assertAllEqual(boxes_shape_, padded_boxes_shape_)
self.assertEqual(images_shape_[1], padded_images_shape_[1])
self.assertEqual(2 * images_shape_[2], padded_images_shape_[2])
def testRandomBlackPatchesWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_black_patches, {
'size_to_image_ratio': 0.5
}))
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRandomBlackPatches(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_black_patches, {
'size_to_image_ratio': 0.5
}))
images = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images}
blacked_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
blacked_images = blacked_tensor_dict[fields.InputDataFields.image]
images_shape = tf.shape(images)
blacked_images_shape = tf.shape(blacked_images)
with self.test_session() as sess:
(images_shape_, blacked_images_shape_) = sess.run(
[images_shape, blacked_images_shape])
self.assertAllEqual(images_shape_, blacked_images_shape_)
def testRandomResizeMethodWithCache(self):
preprocess_options = []
preprocess_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocess_options.append((preprocessor.random_resize_method, {
'target_size': (75, 150)
}))
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=True,
test_keypoints=True)
def testRandomResizeMethod(self):
preprocessing_options = []
preprocessing_options.append((preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}))
preprocessing_options.append((preprocessor.random_resize_method, {
'target_size': (75, 150)
}))
images = self.createTestImages()
tensor_dict = {fields.InputDataFields.image: images}
resized_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
resized_images = resized_tensor_dict[fields.InputDataFields.image]
resized_images_shape = tf.shape(resized_images)
expected_images_shape = tf.constant([1, 75, 150, 3], dtype=tf.int32)
with self.test_session() as sess:
(expected_images_shape_, resized_images_shape_) = sess.run(
[expected_images_shape, resized_images_shape])
self.assertAllEqual(expected_images_shape_,
resized_images_shape_)
def testResizeImageWithMasks(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
height = 50
width = 100
expected_image_shape_list = [[50, 100, 3], [50, 100, 3]]
expected_masks_shape_list = [[15, 50, 100], [10, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.random_uniform(in_image_shape)
in_masks = tf.random_uniform(in_masks_shape)
out_image, out_masks, _ = preprocessor.resize_image(
in_image, in_masks, new_height=height, new_width=width)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape])
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeImageWithMasksTensorInputHeightAndWidth(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
height = tf.constant(50, dtype=tf.int32)
width = tf.constant(100, dtype=tf.int32)
expected_image_shape_list = [[50, 100, 3], [50, 100, 3]]
expected_masks_shape_list = [[15, 50, 100], [10, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.random_uniform(in_image_shape)
in_masks = tf.random_uniform(in_masks_shape)
out_image, out_masks, _ = preprocessor.resize_image(
in_image, in_masks, new_height=height, new_width=width)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape])
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeImageWithNoInstanceMask(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[0, 60, 40], [0, 15, 30]]
height = 50
width = 100
expected_image_shape_list = [[50, 100, 3], [50, 100, 3]]
expected_masks_shape_list = [[0, 50, 100], [0, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.random_uniform(in_image_shape)
in_masks = tf.random_uniform(in_masks_shape)
out_image, out_masks, _ = preprocessor.resize_image(
in_image, in_masks, new_height=height, new_width=width)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape])
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeToRangePreservesStaticSpatialShape(self):
"""Tests image resizing, checking output sizes."""
in_shape_list = [[60, 40, 3], [15, 30, 3], [15, 50, 3]]
min_dim = 50
max_dim = 100
expected_shape_list = [[75, 50, 3], [50, 100, 3], [30, 100, 3]]
for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
in_image = tf.random_uniform(in_shape)
out_image, _ = preprocessor.resize_to_range(
in_image, min_dimension=min_dim, max_dimension=max_dim)
self.assertAllEqual(out_image.get_shape().as_list(), expected_shape)
def testResizeToRangeWithDynamicSpatialShape(self):
"""Tests image resizing, checking output sizes."""
in_shape_list = [[60, 40, 3], [15, 30, 3], [15, 50, 3]]
min_dim = 50
max_dim = 100
expected_shape_list = [[75, 50, 3], [50, 100, 3], [30, 100, 3]]
for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
out_image, _ = preprocessor.resize_to_range(
in_image, min_dimension=min_dim, max_dimension=max_dim)
out_image_shape = tf.shape(out_image)
with self.test_session() as sess:
out_image_shape = sess.run(out_image_shape,
feed_dict={in_image:
np.random.randn(*in_shape)})
self.assertAllEqual(out_image_shape, expected_shape)
def testResizeToRangeWithPadToMaxDimensionReturnsCorrectShapes(self):
in_shape_list = [[60, 40, 3], [15, 30, 3], [15, 50, 3]]
min_dim = 50
max_dim = 100
expected_shape_list = [[100, 100, 3], [100, 100, 3], [100, 100, 3]]
for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
out_image, _ = preprocessor.resize_to_range(
in_image,
min_dimension=min_dim,
max_dimension=max_dim,
pad_to_max_dimension=True)
self.assertAllEqual(out_image.shape.as_list(), expected_shape)
out_image_shape = tf.shape(out_image)
with self.test_session() as sess:
out_image_shape = sess.run(
out_image_shape, feed_dict={in_image: np.random.randn(*in_shape)})
self.assertAllEqual(out_image_shape, expected_shape)
def testResizeToRangeWithPadToMaxDimensionReturnsCorrectTensor(self):
in_image_np = np.array([[[0, 1, 2]]], np.float32)
ex_image_np = np.array(
[[[0, 1, 2], [123.68, 116.779, 103.939]],
[[123.68, 116.779, 103.939], [123.68, 116.779, 103.939]]], np.float32)
min_dim = 1
max_dim = 2
in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
out_image, _ = preprocessor.resize_to_range(
in_image,
min_dimension=min_dim,
max_dimension=max_dim,
pad_to_max_dimension=True,
per_channel_pad_value=(123.68, 116.779, 103.939))
with self.test_session() as sess:
out_image_np = sess.run(out_image, feed_dict={in_image: in_image_np})
self.assertAllClose(ex_image_np, out_image_np)
def testResizeToRangeWithMasksPreservesStaticSpatialShape(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
min_dim = 50
max_dim = 100
expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
expected_masks_shape_list = [[15, 75, 50], [10, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.random_uniform(in_image_shape)
in_masks = tf.random_uniform(in_masks_shape)
out_image, out_masks, _ = preprocessor.resize_to_range(
in_image, in_masks, min_dimension=min_dim, max_dimension=max_dim)
self.assertAllEqual(out_masks.get_shape().as_list(), expected_mask_shape)
self.assertAllEqual(out_image.get_shape().as_list(), expected_image_shape)
def testResizeToRangeWithMasksAndPadToMaxDimension(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
min_dim = 50
max_dim = 100
expected_image_shape_list = [[100, 100, 3], [100, 100, 3]]
expected_masks_shape_list = [[15, 100, 100], [10, 100, 100]]
for (in_image_shape,
expected_image_shape, in_masks_shape, expected_mask_shape) in zip(
in_image_shape_list, expected_image_shape_list,
in_masks_shape_list, expected_masks_shape_list):
in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
in_masks = tf.placeholder(tf.float32, shape=(None, None, None))
out_image, out_masks, _ = preprocessor.resize_to_range(
in_image,
in_masks,
min_dimension=min_dim,
max_dimension=max_dim,
pad_to_max_dimension=True)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape],
feed_dict={
in_image: np.random.randn(*in_image_shape),
in_masks: np.random.randn(*in_masks_shape)
})
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeToRangeWithMasksAndDynamicSpatialShape(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
min_dim = 50
max_dim = 100
expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
expected_masks_shape_list = [[15, 75, 50], [10, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
in_masks = tf.placeholder(tf.float32, shape=(None, None, None))
out_image, out_masks, _ = preprocessor.resize_to_range(
in_image, in_masks, min_dimension=min_dim, max_dimension=max_dim)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape],
feed_dict={
in_image: np.random.randn(*in_image_shape),
in_masks: np.random.randn(*in_masks_shape)
})
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeToRangeWithInstanceMasksTensorOfSizeZero(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[0, 60, 40], [0, 15, 30]]
min_dim = 50
max_dim = 100
expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
expected_masks_shape_list = [[0, 75, 50], [0, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.random_uniform(in_image_shape)
in_masks = tf.random_uniform(in_masks_shape)
out_image, out_masks, _ = preprocessor.resize_to_range(
in_image, in_masks, min_dimension=min_dim, max_dimension=max_dim)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape])
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeToRange4DImageTensor(self):
image = tf.random_uniform([1, 200, 300, 3])
with self.assertRaises(ValueError):
preprocessor.resize_to_range(image, 500, 600)
def testResizeToRangeSameMinMax(self):
"""Tests image resizing, checking output sizes."""
in_shape_list = [[312, 312, 3], [299, 299, 3]]
min_dim = 320
max_dim = 320
expected_shape_list = [[320, 320, 3], [320, 320, 3]]
for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
in_image = tf.random_uniform(in_shape)
out_image, _ = preprocessor.resize_to_range(
in_image, min_dimension=min_dim, max_dimension=max_dim)
out_image_shape = tf.shape(out_image)
with self.test_session() as sess:
out_image_shape = sess.run(out_image_shape)
self.assertAllEqual(out_image_shape, expected_shape)
def testResizeToMinDimensionTensorShapes(self):
in_image_shape_list = [[60, 55, 3], [15, 30, 3]]
in_masks_shape_list = [[15, 60, 55], [10, 15, 30]]
min_dim = 50
expected_image_shape_list = [[60, 55, 3], [50, 100, 3]]
expected_masks_shape_list = [[15, 60, 55], [10, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
in_masks = tf.placeholder(tf.float32, shape=(None, None, None))
out_image, out_masks, _ = preprocessor.resize_to_min_dimension(
in_image, in_masks, min_dimension=min_dim)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape],
feed_dict={
in_image: np.random.randn(*in_image_shape),
in_masks: np.random.randn(*in_masks_shape)
})
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeToMinDimensionWithInstanceMasksTensorOfSizeZero(self):
"""Tests image resizing, checking output sizes."""
in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
in_masks_shape_list = [[0, 60, 40], [0, 15, 30]]
min_dim = 50
expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
expected_masks_shape_list = [[0, 75, 50], [0, 50, 100]]
for (in_image_shape, expected_image_shape, in_masks_shape,
expected_mask_shape) in zip(in_image_shape_list,
expected_image_shape_list,
in_masks_shape_list,
expected_masks_shape_list):
in_image = tf.random_uniform(in_image_shape)
in_masks = tf.random_uniform(in_masks_shape)
out_image, out_masks, _ = preprocessor.resize_to_min_dimension(
in_image, in_masks, min_dimension=min_dim)
out_image_shape = tf.shape(out_image)
out_masks_shape = tf.shape(out_masks)
with self.test_session() as sess:
out_image_shape, out_masks_shape = sess.run(
[out_image_shape, out_masks_shape])
self.assertAllEqual(out_image_shape, expected_image_shape)
self.assertAllEqual(out_masks_shape, expected_mask_shape)
def testResizeToMinDimensionRaisesErrorOn4DImage(self):
image = tf.random_uniform([1, 200, 300, 3])
with self.assertRaises(ValueError):
preprocessor.resize_to_min_dimension(image, 500)
def testScaleBoxesToPixelCoordinates(self):
"""Tests box scaling, checking scaled values."""
in_shape = [60, 40, 3]
in_boxes = [[0.1, 0.2, 0.4, 0.6],
[0.5, 0.3, 0.9, 0.7]]
expected_boxes = [[6., 8., 24., 24.],
[30., 12., 54., 28.]]
in_image = tf.random_uniform(in_shape)
in_boxes = tf.constant(in_boxes)
_, out_boxes = preprocessor.scale_boxes_to_pixel_coordinates(
in_image, boxes=in_boxes)
with self.test_session() as sess:
out_boxes = sess.run(out_boxes)
self.assertAllClose(out_boxes, expected_boxes)
def testScaleBoxesToPixelCoordinatesWithKeypoints(self):
"""Tests box and keypoint scaling, checking scaled values."""
in_shape = [60, 40, 3]
in_boxes = self.createTestBoxes()
in_keypoints = self.createTestKeypoints()
expected_boxes = [[0., 10., 45., 40.],
[15., 20., 45., 40.]]
expected_keypoints = [
[[6., 4.], [12., 8.], [18., 12.]],
[[24., 16.], [30., 20.], [36., 24.]],
]
in_image = tf.random_uniform(in_shape)
_, out_boxes, out_keypoints = preprocessor.scale_boxes_to_pixel_coordinates(
in_image, boxes=in_boxes, keypoints=in_keypoints)
with self.test_session() as sess:
out_boxes_, out_keypoints_ = sess.run([out_boxes, out_keypoints])
self.assertAllClose(out_boxes_, expected_boxes)
self.assertAllClose(out_keypoints_, expected_keypoints)
def testSubtractChannelMean(self):
"""Tests whether channel means have been subtracted."""
with self.test_session():
image = tf.zeros((240, 320, 3))
means = [1, 2, 3]
actual = preprocessor.subtract_channel_mean(image, means=means)
actual = actual.eval()
self.assertTrue((actual[:, :, 0] == -1).all())
self.assertTrue((actual[:, :, 1] == -2).all())
self.assertTrue((actual[:, :, 2] == -3).all())
def testOneHotEncoding(self):
"""Tests one hot encoding of multiclass labels."""
with self.test_session():
labels = tf.constant([1, 4, 2], dtype=tf.int32)
one_hot = preprocessor.one_hot_encoding(labels, num_classes=5)
one_hot = one_hot.eval()
self.assertAllEqual([0, 1, 1, 0, 1], one_hot)
def testSSDRandomCropWithCache(self):
preprocess_options = [
(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}),
(preprocessor.ssd_random_crop, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=False,
test_keypoints=False)
def testSSDRandomCrop(self):
preprocessing_options = [
(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}),
(preprocessor.ssd_random_crop, {})]
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_) = sess.run(
[boxes_rank, distorted_boxes_rank, images_rank,
distorted_images_rank])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
def testSSDRandomCropWithMultiClassScores(self):
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}), (preprocessor.ssd_random_crop, {})]
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
multiclass_scores = self.createTestMultiClassScores()
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.multiclass_scores: multiclass_scores,
fields.InputDataFields.groundtruth_weights: weights,
}
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_multiclass_scores=True)
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
distorted_multiclass_scores = distorted_tensor_dict[
fields.InputDataFields.multiclass_scores]
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
multiclass_scores_rank = tf.rank(multiclass_scores)
distorted_multiclass_scores_rank = tf.rank(distorted_multiclass_scores)
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_, multiclass_scores_rank_,
distorted_multiclass_scores_,
distorted_multiclass_scores_rank_) = sess.run([
boxes_rank, distorted_boxes, distorted_boxes_rank, images_rank,
distorted_images_rank, multiclass_scores_rank,
distorted_multiclass_scores, distorted_multiclass_scores_rank
])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
self.assertAllEqual(multiclass_scores_rank_,
distorted_multiclass_scores_rank_)
self.assertAllEqual(distorted_boxes_.shape[0],
distorted_multiclass_scores_.shape[0])
def testSSDRandomCropPad(self):
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
preprocessing_options = [
(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}),
(preprocessor.ssd_random_crop_pad, {})]
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights,
}
distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
preprocessing_options)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_) = sess.run([
boxes_rank, distorted_boxes_rank, images_rank, distorted_images_rank
])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
def testSSDRandomCropFixedAspectRatioWithCache(self):
preprocess_options = [
(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}),
(preprocessor.ssd_random_crop_fixed_aspect_ratio, {})]
self._testPreprocessorCache(preprocess_options,
test_boxes=True,
test_masks=False,
test_keypoints=False)
def _testSSDRandomCropFixedAspectRatio(self,
include_multiclass_scores,
include_instance_masks,
include_keypoints):
images = self.createTestImages()
boxes = self.createTestBoxes()
labels = self.createTestLabels()
weights = self.createTestGroundtruthWeights()
preprocessing_options = [(preprocessor.normalize_image, {
'original_minval': 0,
'original_maxval': 255,
'target_minval': 0,
'target_maxval': 1
}), (preprocessor.ssd_random_crop_fixed_aspect_ratio, {})]
tensor_dict = {
fields.InputDataFields.image: images,
fields.InputDataFields.groundtruth_boxes: boxes,
fields.InputDataFields.groundtruth_classes: labels,
fields.InputDataFields.groundtruth_weights: weights
}
if include_multiclass_scores:
multiclass_scores = self.createTestMultiClassScores()
tensor_dict[fields.InputDataFields.multiclass_scores] = (
multiclass_scores)
if include_instance_masks:
masks = self.createTestMasks()
tensor_dict[fields.InputDataFields.groundtruth_instance_masks] = masks
if include_keypoints:
keypoints = self.createTestKeypoints()
tensor_dict[fields.InputDataFields.groundtruth_keypoints] = keypoints
preprocessor_arg_map = preprocessor.get_default_func_arg_map(
include_multiclass_scores=include_multiclass_scores,
include_instance_masks=include_instance_masks,
include_keypoints=include_keypoints)
distorted_tensor_dict = preprocessor.preprocess(
tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
distorted_boxes = distorted_tensor_dict[
fields.InputDataFields.groundtruth_boxes]
images_rank = tf.rank(images)
distorted_images_rank = tf.rank(distorted_images)
boxes_rank = tf.rank(boxes)
distorted_boxes_rank = tf.rank(distorted_boxes)
with self.test_session() as sess:
(boxes_rank_, distorted_boxes_rank_, images_rank_,
distorted_images_rank_) = sess.run(
[boxes_rank, distorted_boxes_rank, images_rank,
distorted_images_rank])
self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
self.assertAllEqual(images_rank_, distorted_images_rank_)
def testSSDRandomCropFixedAspectRatio(self):
self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=False,
include_instance_masks=False,
include_keypoints=False)
def testSSDRandomCropFixedAspectRatioWithMultiClassScores(self):
self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=True,
include_instance_masks=False,
include_keypoints=False)
def testSSDRandomCropFixedAspectRatioWithMasksAndKeypoints(self):
self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=False,
include_instance_masks=True,
include_keypoints=True)
def testSSDRandomCropFixedAspectRatioWithLabelScoresMasksAndKeypoints(self):
self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=False,
include_instance_masks=True,
include_keypoints=True)
def testConvertClassLogitsToSoftmax(self):
multiclass_scores = tf.constant(
[[1.0, 0.0], [0.5, 0.5], [1000, 1]], dtype=tf.float32)
temperature = 2.0
converted_multiclass_scores = (
preprocessor.convert_class_logits_to_softmax(
multiclass_scores=multiclass_scores, temperature=temperature))
expected_converted_multiclass_scores = [[[0.62245935, 0.37754068],
[0.5, 0.5], [1, 0]]]
with self.test_session() as sess:
(converted_multiclass_scores_) = sess.run([converted_multiclass_scores])
self.assertAllClose(converted_multiclass_scores_,
expected_converted_multiclass_scores)
if __name__ == '__main__':
tf.test.main()
|
PyTorch/Detection/Efficientdet/data | data | dataset | """ COCO dataset (quick and dirty)
Hacked together by Ross Wightman
"""
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import torch.utils.data as data
import os
import torch
import numpy as np
from PIL import Image
from pycocotools.coco import COCO
from effdet.anchors import Anchors, AnchorLabeler
class CocoDetection(data.Dataset):
"""`MS Coco Detection <http://mscoco.org/dataset/#detections-challenge2016>`_ Dataset.
Args:
root (string): Root directory where images are downloaded to.
ann_file (string): Path to json annotation file.
        transform (callable, optional): A function/transform that takes in a PIL image
            and returns a transformed version, e.g. ``transforms.ToTensor``.
"""
def __init__(self, root, ann_file, config, transform=None):
super(CocoDetection, self).__init__()
if isinstance(root, (str, bytes)):
root = os.path.expanduser(root)
self.root = root
self.transform = transform
self.yxyx = True # expected for TF model, most PT are xyxy
self.include_masks = False
self.include_bboxes_ignore = False
self.has_annotations = 'image_info' not in ann_file
self.coco = None
self.cat_ids = []
self.cat_to_label = dict()
self.img_ids = []
self.img_ids_invalid = []
self.img_infos = []
self._load_annotations(ann_file)
self.anchors = Anchors(
config.min_level, config.max_level,
config.num_scales, config.aspect_ratios,
config.anchor_scale, config.image_size)
self.anchor_labeler = AnchorLabeler(self.anchors, config.num_classes, match_threshold=0.5)
def _load_annotations(self, ann_file):
assert self.coco is None
self.coco = COCO(ann_file)
self.cat_ids = self.coco.getCatIds()
img_ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values())
for img_id in sorted(self.coco.imgs.keys()):
info = self.coco.loadImgs([img_id])[0]
valid_annotation = not self.has_annotations or img_id in img_ids_with_ann
if valid_annotation and min(info['width'], info['height']) >= 32:
self.img_ids.append(img_id)
self.img_infos.append(info)
else:
self.img_ids_invalid.append(img_id)
def _parse_img_ann(self, img_id, img_info):
ann_ids = self.coco.getAnnIds(imgIds=[img_id])
ann_info = self.coco.loadAnns(ann_ids)
bboxes = []
bboxes_ignore = []
cls = []
for i, ann in enumerate(ann_info):
if ann.get('ignore', False):
continue
x1, y1, w, h = ann['bbox']
if self.include_masks and ann['area'] <= 0:
continue
if w < 1 or h < 1:
continue
            # The TF reference implementation does not subtract 1 from the box extents,
            # so the unshifted (x + w, y + h) convention is kept here for parity.
if self.yxyx:
#bbox = [y1, x1, y1 + h - 1, x1 + w - 1]
bbox = [y1, x1, y1 + h, x1 + w]
else:
#bbox = [x1, y1, x1 + w - 1, y1 + h - 1]
bbox = [x1, y1, x1 + w, y1 + h]
if ann.get('iscrowd', False):
if self.include_bboxes_ignore:
bboxes_ignore.append(bbox)
else:
bboxes.append(bbox)
cls.append(self.cat_to_label[ann['category_id']] if self.cat_to_label else ann['category_id'])
if bboxes:
bboxes = np.array(bboxes, dtype=np.float32)
cls = np.array(cls, dtype=np.int64)
else:
bboxes = np.zeros((0, 4), dtype=np.float32)
cls = np.array([], dtype=np.int64)
if self.include_bboxes_ignore:
if bboxes_ignore:
bboxes_ignore = np.array(bboxes_ignore, dtype=np.float32)
else:
bboxes_ignore = np.zeros((0, 4), dtype=np.float32)
ann = dict(img_id=img_id, bbox=bboxes, cls=cls, img_size=(img_info['width'], img_info['height']))
if self.include_bboxes_ignore:
ann['bbox_ignore'] = bboxes_ignore
return ann
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
tuple: Tuple (image, annotations (target)).
"""
img_id = self.img_ids[index]
img_info = self.img_infos[index]
if self.has_annotations:
ann = self._parse_img_ann(img_id, img_info)
else:
ann = dict(img_id=img_id, img_size=(img_info['width'], img_info['height']))
path = img_info['file_name']
img = Image.open(os.path.join(self.root, path)).convert('RGB')
if self.transform is not None:
img, ann = self.transform(img, ann)
cls_targets, box_targets, num_positives = self.anchor_labeler.label_anchors(
ann['bbox'], ann['cls'])
ann.pop('bbox')
ann.pop('cls')
ann['num_positives'] = num_positives
ann.update(cls_targets)
ann.update(box_targets)
return img, ann
def __len__(self):
return len(self.img_ids)
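# --- Illustrative usage sketch (added for clarity; not part of the original file) ---
# The paths below are placeholders, and the config is a hand-rolled stand-in that
# only carries the attributes CocoDetection reads (min_level, max_level, num_scales,
# aspect_ratios, anchor_scale, image_size, num_classes). In practice these values
# should come from the EfficientDet model config used elsewhere in this repository.
if __name__ == '__main__':
    from types import SimpleNamespace
    example_config = SimpleNamespace(
        min_level=3, max_level=7, num_scales=3,
        aspect_ratios=[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)],
        anchor_scale=4.0, image_size=512, num_classes=90)
    dataset = CocoDetection(
        root='/data/coco/val2017',                                 # placeholder path
        ann_file='/data/coco/annotations/instances_val2017.json',  # placeholder path
        config=example_config)
    print('usable images:', len(dataset))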
|
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt | tft_pyt | utils | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
class PerformanceMeter():
def __init__(self):
self.reset()
def reset(self):
self.avg = 0
self.count = 0
self.total_time = 0
self.last_update_time = time.time()
self.intervals = []
def update(self, n, exclude_from_total=False):
delta = time.time() - self.last_update_time
self.intervals.append(delta)
if not exclude_from_total:
self.total_time += delta
self.count += n
self.avg = self.count / self.total_time
self.last_update_time = time.time()
return n/delta
def reset_current_lap(self):
self.last_update_time = time.time()
    def p(self, i):
        assert 0 <= i <= 100
        # Clamp the index so p(100) returns the largest interval instead of
        # indexing one element past the end of the sorted list.
        idx = min(int(len(self.intervals) * i / 100), len(self.intervals) - 1)
        return sorted(self.intervals)[idx]
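# Minimal usage sketch (added for illustration; not part of the original file).
# The sleep call stands in for one real processing step and `n` is the number
# of items handled in that step.
if __name__ == '__main__':
    meter = PerformanceMeter()
    for _ in range(10):
        time.sleep(0.01)   # placeholder workload for a single step
        meter.update(n=32)
    print('avg throughput: %.1f items/s, p90 step time: %.4f s' % (meter.avg, meter.p(90)))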
|
PyTorch/SpeechSynthesis/FastPitch/phrases | phrases | phrase_4_256 | The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves and the form of printed letters should be beautiful, and that their arrangement on pages.
The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves and the form of printed letters should be beautiful, and that their arrangement on pages.
The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves and the form of printed letters should be beautiful, and that their arrangement on pages.
The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves and the form of printed letters should be beautiful, and that their arrangement on pages.
|
TensorFlow/LanguageModeling/BERT/triton/scripts | scripts | run_perf_client | #!/bin/bash
# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
MODEL_NAME=${1:-"bert"}
MODEL_VERSION=${2:-1}
BATCH_SIZE=${3:-1}
MAX_LATENCY=${4:-100}
MAX_CLIENT_THREADS=${5:-10}
MAX_CONCURRENCY=${6:-50}
SERVER_HOSTNAME=${7:-"localhost"}
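# Example invocation (illustrative; the values below are placeholders and assume the
# script is run from the BERT working directory like the other triton scripts):
#   bash triton/scripts/run_perf_client.sh bert 1 8 200 10 50 localhost
# i.e. profile model "bert" (version 1) at batch size 8 with a 200 ms latency limit,
# 10 client threads, and up to 50 concurrent requests against a local Triton server.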
if [[ $SERVER_HOSTNAME == *":"* ]]; then
echo "ERROR! Do not include the port when passing the Server Hostname. These scripts require that the TRITON HTTP endpoint is on Port 8000 and the gRPC endpoint is on Port 8001. Exiting..."
exit 1
fi
if [ "$SERVER_HOSTNAME" = "localhost" ]
then
if [ ! "$(docker inspect -f "{{.State.Running}}" triton_server_cont)" = "true" ] ; then
echo "Launching TRITON server"
bash triton/scripts/launch_server.sh
SERVER_LAUNCHED=true
function cleanup_server {
echo "Killing TRITON server"
docker kill triton_server_cont
}
# Ensure we cleanup the server on exit
# trap "exit" INT TERM
trap cleanup_server EXIT
fi
fi
# Wait until the server is up: curl the server's health endpoint and sleep until it's ready
bash triton/scripts/wait_for_triton_server.sh $SERVER_HOSTNAME
TIMESTAMP=$(date "+%y%m%d_%H%M")
bash scripts/docker/launch.sh mkdir -p /results/perf_client/${MODEL_NAME}
OUTPUT_FILE_CSV="/results/perf_client/${MODEL_NAME}/results_${TIMESTAMP}.csv"
ARGS="\
--max-threads ${MAX_CLIENT_THREADS} \
-m ${MODEL_NAME} \
-x ${MODEL_VERSION} \
-p 200000 \
-d \
-v -z \
-i gRPC \
-u ${SERVER_HOSTNAME}:8001 \
-b ${BATCH_SIZE} \
-l ${MAX_LATENCY} \
-c ${MAX_CONCURRENCY} \
-f ${OUTPUT_FILE_CSV}"
echo "Using args: $(echo "$ARGS" | sed -e 's/ -/\n-/g')"
bash scripts/docker/launch.sh perf_client $ARGS
|
PyTorch/SpeechSynthesis/Tacotron2/phrases | phrases | phrase_8_64 | She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
She sells seashells by the seashore, shells she sells are great
|
TensorFlow2/Recommendation/WideAndDeep/tests/feature_specs | feature_specs | less_onehot | channel_spec:
label:
- clicked
map: []
multihot_categorical:
- topic_id_list
- entity_id_list
- category_id_list
numerical:
- document_id_document_id_promo_sim_categories
- document_id_document_id_promo_sim_topics
- document_id_document_id_promo_sim_entities
- document_id_promo_ctr
- publisher_id_promo_ctr
- source_id_promo_ctr
- document_id_promo_count
- publish_time_days_since_published
- ad_id_ctr
- advertiser_id_ctr
- campaign_id_ctr
- ad_id_count
- publish_time_promo_days_since_published
onehot_categorical:
- document_id
- platform
- document_id_promo
- source_id
- geo_location
- geo_location_country
- geo_location_state
- publisher_id
- source_id_promo
- publisher_id_promo
feature_spec:
ad_id_count: {}
ad_id_ctr: {}
advertiser_id_ctr: {}
campaign_id_ctr: {}
category_id_list:
cardinality: 100
max_hotness: 3
clicked: {}
document_id:
cardinality: 300000
document_id_document_id_promo_sim_categories: {}
document_id_document_id_promo_sim_entities: {}
document_id_document_id_promo_sim_topics: {}
document_id_promo:
cardinality: 100000
document_id_promo_count: {}
document_id_promo_ctr: {}
entity_id_list:
cardinality: 10000
max_hotness: 3
geo_location:
cardinality: 2500
geo_location_country:
cardinality: 300
geo_location_state:
cardinality: 2000
platform:
cardinality: 4
publish_time_days_since_published: {}
publish_time_promo_days_since_published: {}
publisher_id:
cardinality: 1000
publisher_id_promo:
cardinality: 1000
publisher_id_promo_ctr: {}
source_id:
cardinality: 4000
source_id_promo:
cardinality: 4000
source_id_promo_ctr: {}
topic_id_list:
cardinality: 350
max_hotness: 3
metadata: {}
source_spec:
test:
- features:
- clicked
- document_id
- platform
- document_id_promo
- source_id
- geo_location
- geo_location_country
- geo_location_state
- publisher_id
- source_id_promo
- publisher_id_promo
- topic_id_list
- entity_id_list
- category_id_list
- document_id_document_id_promo_sim_categories
- document_id_document_id_promo_sim_topics
- document_id_document_id_promo_sim_entities
- document_id_promo_ctr
- publisher_id_promo_ctr
- source_id_promo_ctr
- document_id_promo_count
- publish_time_days_since_published
- ad_id_ctr
- advertiser_id_ctr
- campaign_id_ctr
- ad_id_count
- publish_time_promo_days_since_published
files:
- valid.csv
type: csv
train:
- features:
- clicked
- document_id
- platform
- document_id_promo
- source_id
- geo_location
- geo_location_country
- geo_location_state
- publisher_id
- source_id_promo
- publisher_id_promo
- topic_id_list
- entity_id_list
- category_id_list
- document_id_document_id_promo_sim_categories
- document_id_document_id_promo_sim_topics
- document_id_document_id_promo_sim_entities
- document_id_promo_ctr
- publisher_id_promo_ctr
- source_id_promo_ctr
- document_id_promo_count
- publish_time_days_since_published
- ad_id_ctr
- advertiser_id_ctr
- campaign_id_ctr
- ad_id_count
- publish_time_promo_days_since_published
files:
- train.csv
type: csv
|
PyTorch/LanguageModeling/BERT/triton/dist4l/runner | runner | prepare_datasets | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env bash
mkdir -p datasets/data/squad/v1.1
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json -O datasets/data/squad/v1.1/train-v1.1.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -O datasets/data/squad/v1.1/dev-v1.1.json
wget https://worksheets.codalab.org/rest/bundles/0xbcd57bee090b421c982906709c8c27e1/contents/blob/ -O datasets/data/squad/v1.1/evaluate-v1.1.py
|
TensorFlow/Detection/SSD/models/research/slim/nets | nets | mobilenet_v1_eval | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Validate mobilenet_v1 with options for quantization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from datasets import dataset_factory
from nets import mobilenet_v1
from preprocessing import preprocessing_factory
slim = tf.contrib.slim
flags = tf.app.flags
flags.DEFINE_string('master', '', 'Session master')
flags.DEFINE_integer('batch_size', 250, 'Batch size')
flags.DEFINE_integer('num_classes', 1001, 'Number of classes to distinguish')
flags.DEFINE_integer('num_examples', 50000, 'Number of examples to evaluate')
flags.DEFINE_integer('image_size', 224, 'Input image resolution')
flags.DEFINE_float('depth_multiplier', 1.0, 'Depth multiplier for mobilenet')
flags.DEFINE_bool('quantize', False, 'Evaluate a quantization-aware (fake-quant) graph')
flags.DEFINE_string('checkpoint_dir', '', 'The directory for checkpoints')
flags.DEFINE_string('eval_dir', '', 'Directory for writing eval event logs')
flags.DEFINE_string('dataset_dir', '', 'Location of dataset')
FLAGS = flags.FLAGS
def imagenet_input(is_training):
"""Data reader for imagenet.
Reads in imagenet data and performs pre-processing on the images.
Args:
is_training: bool specifying if train or validation dataset is needed.
Returns:
A batch of images and labels.
"""
if is_training:
dataset = dataset_factory.get_dataset('imagenet', 'train',
FLAGS.dataset_dir)
else:
dataset = dataset_factory.get_dataset('imagenet', 'validation',
FLAGS.dataset_dir)
provider = slim.dataset_data_provider.DatasetDataProvider(
dataset,
shuffle=is_training,
common_queue_capacity=2 * FLAGS.batch_size,
common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
'mobilenet_v1', is_training=is_training)
image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
images, labels = tf.train.batch(
tensors=[image, label],
batch_size=FLAGS.batch_size,
num_threads=4,
capacity=5 * FLAGS.batch_size)
return images, labels
def metrics(logits, labels):
"""Specify the metrics for eval.
Args:
logits: Logits output from the graph.
labels: Ground truth labels for inputs.
Returns:
Eval Op for the graph.
"""
labels = tf.squeeze(labels)
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'Accuracy': tf.metrics.accuracy(tf.argmax(logits, 1), labels),
'Recall_5': tf.metrics.recall_at_k(labels, logits, 5),
})
  for name, value in names_to_values.items():  # .iteritems() is Python 2 only
slim.summaries.add_scalar_summary(
value, name, prefix='eval', print_summary=True)
return names_to_updates.values()
def build_model():
"""Build the mobilenet_v1 model for evaluation.
Returns:
g: graph with rewrites after insertion of quantization ops and batch norm
folding.
eval_ops: eval ops for inference.
variables_to_restore: List of variables to restore from checkpoint.
"""
g = tf.Graph()
with g.as_default():
inputs, labels = imagenet_input(is_training=False)
scope = mobilenet_v1.mobilenet_v1_arg_scope(
is_training=False, weight_decay=0.0)
with slim.arg_scope(scope):
logits, _ = mobilenet_v1.mobilenet_v1(
inputs,
is_training=False,
depth_multiplier=FLAGS.depth_multiplier,
num_classes=FLAGS.num_classes)
if FLAGS.quantize:
tf.contrib.quantize.create_eval_graph()
eval_ops = metrics(logits, labels)
return g, eval_ops
def eval_model():
"""Evaluates mobilenet_v1."""
g, eval_ops = build_model()
with g.as_default():
num_batches = math.ceil(FLAGS.num_examples / float(FLAGS.batch_size))
slim.evaluation.evaluate_once(
FLAGS.master,
FLAGS.checkpoint_dir,
logdir=FLAGS.eval_dir,
num_evals=num_batches,
eval_op=eval_ops)
def main(unused_arg):
eval_model()
if __name__ == '__main__':
tf.app.run(main)
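# Example invocation (hypothetical paths; the flags are the ones defined above):
#   python mobilenet_v1_eval.py --dataset_dir=/data/imagenet \
#     --checkpoint_dir=/ckpts/mobilenet_v1 --eval_dir=/tmp/mobilenet_eval --quantize=False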
|
TensorFlow/Recommendation/WideAndDeep/preproc | preproc | preproc4 | #!/usr/bin/env python
# coding: utf-8
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import datetime
import numpy as np
import pandas as pd
import pyspark.sql.functions as F
import tensorflow as tf
import trainer
from pyspark import TaskContext
from pyspark.context import SparkContext, SparkConf
from pyspark.sql.functions import col, udf
from pyspark.sql.session import SparkSession
from pyspark.sql.types import ArrayType, DoubleType
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import metadata_io
from trainer.features import LABEL_COLUMN, DISPLAY_ID_COLUMN, IS_LEAK_COLUMN, DISPLAY_ID_AND_IS_LEAK_ENCODED_COLUMN, \
CATEGORICAL_COLUMNS, DOC_CATEGORICAL_MULTIVALUED_COLUMNS, BOOL_COLUMNS, INT_COLUMNS, FLOAT_COLUMNS, \
FLOAT_COLUMNS_LOG_BIN_TRANSFORM, FLOAT_COLUMNS_SIMPLE_BIN_TRANSFORM
evaluation = True
evaluation_verbose = False
OUTPUT_BUCKET_FOLDER = "/outbrain/preprocessed/"
DATA_BUCKET_FOLDER = "/outbrain/orig/"
SPARK_TEMP_FOLDER = "/outbrain/spark-temp/"
LOCAL_DATA_TFRECORDS_DIR = "/outbrain/tfrecords"
TEST_SET_MODE = False
TENSORFLOW_HADOOP = "preproc/data/tensorflow-hadoop-1.5.0.jar"
conf = SparkConf().setMaster('local[*]').set('spark.executor.memory', '40g').set('spark.driver.memory', '200g').set(
"spark.local.dir", SPARK_TEMP_FOLDER)
conf.set("spark.jars", TENSORFLOW_HADOOP)
conf.set("spark.sql.files.maxPartitionBytes", 805306368)
sc = SparkContext(conf=conf)
spark = SparkSession(sc)
parser = argparse.ArgumentParser()
parser.add_argument(
'--prebatch_size',
help='Prebatch size in created tfrecords',
type=int,
default=4096)
parser.add_argument(
'--submission',
action='store_true',
default=False
)
args = parser.parse_args()
batch_size = args.prebatch_size
# # Feature Vector export
bool_feature_names = ['event_weekend',
'user_has_already_viewed_doc']
int_feature_names = ['user_views',
'ad_views',
'doc_views',
'doc_event_days_since_published',
'doc_event_hour',
'doc_ad_days_since_published',
]
float_feature_names = [
'pop_ad_id',
'pop_ad_id_conf',
'pop_ad_id_conf_multipl',
'pop_document_id',
'pop_document_id_conf',
'pop_document_id_conf_multipl',
'pop_publisher_id',
'pop_publisher_id_conf',
'pop_publisher_id_conf_multipl',
'pop_advertiser_id',
'pop_advertiser_id_conf',
'pop_advertiser_id_conf_multipl',
'pop_campain_id',
'pop_campain_id_conf',
'pop_campain_id_conf_multipl',
'pop_doc_event_doc_ad',
'pop_doc_event_doc_ad_conf',
'pop_doc_event_doc_ad_conf_multipl',
'pop_source_id',
'pop_source_id_conf',
'pop_source_id_conf_multipl',
'pop_source_id_country',
'pop_source_id_country_conf',
'pop_source_id_country_conf_multipl',
'pop_entity_id',
'pop_entity_id_conf',
'pop_entity_id_conf_multipl',
'pop_entity_id_country',
'pop_entity_id_country_conf',
'pop_entity_id_country_conf_multipl',
'pop_topic_id',
'pop_topic_id_conf',
'pop_topic_id_conf_multipl',
'pop_topic_id_country',
'pop_topic_id_country_conf',
'pop_topic_id_country_conf_multipl',
'pop_category_id',
'pop_category_id_conf',
'pop_category_id_conf_multipl',
'pop_category_id_country',
'pop_category_id_country_conf',
'pop_category_id_country_conf_multipl',
'user_doc_ad_sim_categories',
'user_doc_ad_sim_categories_conf',
'user_doc_ad_sim_categories_conf_multipl',
'user_doc_ad_sim_topics',
'user_doc_ad_sim_topics_conf',
'user_doc_ad_sim_topics_conf_multipl',
'user_doc_ad_sim_entities',
'user_doc_ad_sim_entities_conf',
'user_doc_ad_sim_entities_conf_multipl',
'doc_event_doc_ad_sim_categories',
'doc_event_doc_ad_sim_categories_conf',
'doc_event_doc_ad_sim_categories_conf_multipl',
'doc_event_doc_ad_sim_topics',
'doc_event_doc_ad_sim_topics_conf',
'doc_event_doc_ad_sim_topics_conf_multipl',
'doc_event_doc_ad_sim_entities',
'doc_event_doc_ad_sim_entities_conf',
'doc_event_doc_ad_sim_entities_conf_multipl'
]
# ### Configuring feature vector
category_feature_names_integral = ['ad_advertiser',
'doc_ad_category_id_1',
'doc_ad_category_id_2',
'doc_ad_category_id_3',
'doc_ad_topic_id_1',
'doc_ad_topic_id_2',
'doc_ad_topic_id_3',
'doc_ad_entity_id_1',
'doc_ad_entity_id_2',
'doc_ad_entity_id_3',
'doc_ad_entity_id_4',
'doc_ad_entity_id_5',
'doc_ad_entity_id_6',
'doc_ad_publisher_id',
'doc_ad_source_id',
'doc_event_category_id_1',
'doc_event_category_id_2',
'doc_event_category_id_3',
'doc_event_topic_id_1',
'doc_event_topic_id_2',
'doc_event_topic_id_3',
'doc_event_entity_id_1',
'doc_event_entity_id_2',
'doc_event_entity_id_3',
'doc_event_entity_id_4',
'doc_event_entity_id_5',
'doc_event_entity_id_6',
'doc_event_publisher_id',
'doc_event_source_id',
'event_country',
'event_country_state',
'event_geo_location',
'event_hour',
'event_platform',
'traffic_source']
feature_vector_labels_integral = bool_feature_names \
+ int_feature_names \
+ float_feature_names \
+ category_feature_names_integral
if args.submission:
train_feature_vector_gcs_folder_name = 'train_feature_vectors_integral'
else:
train_feature_vector_gcs_folder_name = 'train_feature_vectors_integral_eval'
# ## Exporting integral feature vectors to CSV
train_feature_vectors_exported_df = spark.read.parquet(OUTPUT_BUCKET_FOLDER + train_feature_vector_gcs_folder_name)
train_feature_vectors_exported_df.take(3)
integral_headers = ['label', 'display_id', 'ad_id', 'doc_id', 'doc_event_id',
'is_leak'] + feature_vector_labels_integral
CSV_ORDERED_COLUMNS = ['label', 'display_id', 'ad_id', 'doc_id', 'doc_event_id', 'is_leak', 'event_weekend',
'user_has_already_viewed_doc', 'user_views', 'ad_views', 'doc_views',
'doc_event_days_since_published', 'doc_event_hour', 'doc_ad_days_since_published',
'pop_ad_id', 'pop_ad_id_conf',
'pop_ad_id_conf_multipl', 'pop_document_id', 'pop_document_id_conf',
'pop_document_id_conf_multipl', 'pop_publisher_id', 'pop_publisher_id_conf',
'pop_publisher_id_conf_multipl', 'pop_advertiser_id', 'pop_advertiser_id_conf',
'pop_advertiser_id_conf_multipl', 'pop_campain_id', 'pop_campain_id_conf',
'pop_campain_id_conf_multipl', 'pop_doc_event_doc_ad', 'pop_doc_event_doc_ad_conf',
'pop_doc_event_doc_ad_conf_multipl', 'pop_source_id', 'pop_source_id_conf',
'pop_source_id_conf_multipl', 'pop_source_id_country', 'pop_source_id_country_conf',
'pop_source_id_country_conf_multipl', 'pop_entity_id', 'pop_entity_id_conf',
'pop_entity_id_conf_multipl', 'pop_entity_id_country', 'pop_entity_id_country_conf',
'pop_entity_id_country_conf_multipl', 'pop_topic_id', 'pop_topic_id_conf',
'pop_topic_id_conf_multipl', 'pop_topic_id_country', 'pop_topic_id_country_conf',
'pop_topic_id_country_conf_multipl', 'pop_category_id', 'pop_category_id_conf',
'pop_category_id_conf_multipl', 'pop_category_id_country', 'pop_category_id_country_conf',
'pop_category_id_country_conf_multipl', 'user_doc_ad_sim_categories',
'user_doc_ad_sim_categories_conf', 'user_doc_ad_sim_categories_conf_multipl',
'user_doc_ad_sim_topics', 'user_doc_ad_sim_topics_conf', 'user_doc_ad_sim_topics_conf_multipl',
'user_doc_ad_sim_entities', 'user_doc_ad_sim_entities_conf',
'user_doc_ad_sim_entities_conf_multipl',
'doc_event_doc_ad_sim_categories', 'doc_event_doc_ad_sim_categories_conf',
'doc_event_doc_ad_sim_categories_conf_multipl', 'doc_event_doc_ad_sim_topics',
'doc_event_doc_ad_sim_topics_conf', 'doc_event_doc_ad_sim_topics_conf_multipl',
'doc_event_doc_ad_sim_entities', 'doc_event_doc_ad_sim_entities_conf',
'doc_event_doc_ad_sim_entities_conf_multipl', 'ad_advertiser', 'doc_ad_category_id_1',
'doc_ad_category_id_2', 'doc_ad_category_id_3', 'doc_ad_topic_id_1', 'doc_ad_topic_id_2',
'doc_ad_topic_id_3', 'doc_ad_entity_id_1', 'doc_ad_entity_id_2', 'doc_ad_entity_id_3',
'doc_ad_entity_id_4', 'doc_ad_entity_id_5', 'doc_ad_entity_id_6', 'doc_ad_publisher_id',
'doc_ad_source_id', 'doc_event_category_id_1', 'doc_event_category_id_2',
'doc_event_category_id_3',
'doc_event_topic_id_1', 'doc_event_topic_id_2', 'doc_event_topic_id_3', 'doc_event_entity_id_1',
'doc_event_entity_id_2', 'doc_event_entity_id_3', 'doc_event_entity_id_4',
'doc_event_entity_id_5',
'doc_event_entity_id_6', 'doc_event_publisher_id', 'doc_event_source_id', 'event_country',
'event_country_state', 'event_geo_location', 'event_hour', 'event_platform', 'traffic_source']
FEAT_CSV_ORDERED_COLUMNS = ['event_weekend',
'user_has_already_viewed_doc', 'user_views', 'ad_views', 'doc_views',
'doc_event_days_since_published', 'doc_event_hour', 'doc_ad_days_since_published',
'pop_ad_id', 'pop_ad_id_conf',
'pop_ad_id_conf_multipl', 'pop_document_id', 'pop_document_id_conf',
'pop_document_id_conf_multipl', 'pop_publisher_id', 'pop_publisher_id_conf',
'pop_publisher_id_conf_multipl', 'pop_advertiser_id', 'pop_advertiser_id_conf',
'pop_advertiser_id_conf_multipl', 'pop_campain_id', 'pop_campain_id_conf',
'pop_campain_id_conf_multipl', 'pop_doc_event_doc_ad', 'pop_doc_event_doc_ad_conf',
'pop_doc_event_doc_ad_conf_multipl', 'pop_source_id', 'pop_source_id_conf',
'pop_source_id_conf_multipl', 'pop_source_id_country', 'pop_source_id_country_conf',
'pop_source_id_country_conf_multipl', 'pop_entity_id', 'pop_entity_id_conf',
'pop_entity_id_conf_multipl', 'pop_entity_id_country', 'pop_entity_id_country_conf',
'pop_entity_id_country_conf_multipl', 'pop_topic_id', 'pop_topic_id_conf',
'pop_topic_id_conf_multipl', 'pop_topic_id_country', 'pop_topic_id_country_conf',
'pop_topic_id_country_conf_multipl', 'pop_category_id', 'pop_category_id_conf',
'pop_category_id_conf_multipl', 'pop_category_id_country', 'pop_category_id_country_conf',
'pop_category_id_country_conf_multipl', 'user_doc_ad_sim_categories',
'user_doc_ad_sim_categories_conf', 'user_doc_ad_sim_categories_conf_multipl',
'user_doc_ad_sim_topics', 'user_doc_ad_sim_topics_conf',
'user_doc_ad_sim_topics_conf_multipl',
'user_doc_ad_sim_entities', 'user_doc_ad_sim_entities_conf',
'user_doc_ad_sim_entities_conf_multipl',
'doc_event_doc_ad_sim_categories', 'doc_event_doc_ad_sim_categories_conf',
'doc_event_doc_ad_sim_categories_conf_multipl', 'doc_event_doc_ad_sim_topics',
'doc_event_doc_ad_sim_topics_conf', 'doc_event_doc_ad_sim_topics_conf_multipl',
'doc_event_doc_ad_sim_entities', 'doc_event_doc_ad_sim_entities_conf',
'doc_event_doc_ad_sim_entities_conf_multipl', 'ad_advertiser', 'doc_ad_category_id_1',
'doc_ad_category_id_2', 'doc_ad_category_id_3', 'doc_ad_topic_id_1', 'doc_ad_topic_id_2',
'doc_ad_topic_id_3', 'doc_ad_entity_id_1', 'doc_ad_entity_id_2', 'doc_ad_entity_id_3',
'doc_ad_entity_id_4', 'doc_ad_entity_id_5', 'doc_ad_entity_id_6', 'doc_ad_publisher_id',
'doc_ad_source_id', 'doc_event_category_id_1', 'doc_event_category_id_2',
'doc_event_category_id_3',
'doc_event_topic_id_1', 'doc_event_topic_id_2', 'doc_event_topic_id_3',
'doc_event_entity_id_1',
'doc_event_entity_id_2', 'doc_event_entity_id_3', 'doc_event_entity_id_4',
'doc_event_entity_id_5',
'doc_event_entity_id_6', 'doc_event_publisher_id', 'doc_event_source_id', 'event_country',
'event_country_state', 'event_geo_location', 'event_hour', 'event_platform',
'traffic_source']
def to_array(col):
def to_array_(v):
return v.toArray().tolist()
# Important: asNondeterministic requires Spark 2.3 or later
# It can be safely removed i.e.
# return udf(to_array_, ArrayType(DoubleType()))(col)
# but at the cost of decreased performance
return udf(to_array_, ArrayType(DoubleType())).asNondeterministic()(col)
CONVERT_TO_INT = ['doc_ad_category_id_1',
'doc_ad_category_id_2', 'doc_ad_category_id_3', 'doc_ad_topic_id_1', 'doc_ad_topic_id_2',
'doc_ad_topic_id_3', 'doc_ad_entity_id_1', 'doc_ad_entity_id_2', 'doc_ad_entity_id_3',
'doc_ad_entity_id_4', 'doc_ad_entity_id_5', 'doc_ad_entity_id_6',
'doc_ad_source_id', 'doc_event_category_id_1', 'doc_event_category_id_2', 'doc_event_category_id_3',
'doc_event_topic_id_1', 'doc_event_topic_id_2', 'doc_event_topic_id_3', 'doc_event_entity_id_1',
'doc_event_entity_id_2', 'doc_event_entity_id_3', 'doc_event_entity_id_4', 'doc_event_entity_id_5',
'doc_event_entity_id_6']
def format_number(element, name):
if name in BOOL_COLUMNS + CATEGORICAL_COLUMNS:
return element.cast("int")
elif name in CONVERT_TO_INT:
return element.cast("int")
else:
return element
def to_array_with_none(col):
def to_array_with_none_(v):
tmp = np.full((v.size,), fill_value=None, dtype=np.float64)
tmp[v.indices] = v.values
return tmp.tolist()
# Important: asNondeterministic requires Spark 2.3 or later
# It can be safely removed i.e.
# return udf(to_array_, ArrayType(DoubleType()))(col)
# but at the cost of decreased performance
return udf(to_array_with_none_, ArrayType(DoubleType())).asNondeterministic()(col)
@udf
def count_value(x):
from collections import Counter
tmp = Counter(x).most_common(2)
if not tmp or np.isnan(tmp[0][0]):
return 0
return float(tmp[0][0])
def replace_with_most_frequent(most_value):
return udf(lambda x: most_value if not x or np.isnan(x) else x)
train_feature_vectors_integral_csv_rdd_df = train_feature_vectors_exported_df.select('label', 'display_id', 'ad_id',
'document_id', 'document_id_event',
'feature_vector').withColumn(
'is_leak', F.lit(-1)).withColumn("featvec", to_array("feature_vector")).select(
['label'] + ['display_id'] + ['ad_id'] + ['document_id'] + ['document_id_event'] + ['is_leak'] + [
format_number(element, FEAT_CSV_ORDERED_COLUMNS[index]).alias(FEAT_CSV_ORDERED_COLUMNS[index]) for
index, element in enumerate([col("featvec")[i] for i in range(len(feature_vector_labels_integral))])]).replace(
float('nan'), 0)
if args.submission:
test_validation_feature_vector_gcs_folder_name = 'test_feature_vectors_integral'
else:
test_validation_feature_vector_gcs_folder_name = 'validation_feature_vectors_integral'
# ## Exporting integral feature vectors
test_validation_feature_vectors_exported_df = spark.read.parquet(
OUTPUT_BUCKET_FOLDER + test_validation_feature_vector_gcs_folder_name)
test_validation_feature_vectors_exported_df.take(3)
test_validation_feature_vectors_integral_csv_rdd_df = test_validation_feature_vectors_exported_df.select(
'label', 'display_id', 'ad_id', 'document_id', 'document_id_event',
'is_leak', 'feature_vector').withColumn("featvec", to_array("feature_vector")).select(
['label'] + ['display_id'] + ['ad_id'] + ['document_id'] + ['document_id_event'] + ['is_leak'] + [
format_number(element, FEAT_CSV_ORDERED_COLUMNS[index]).alias(FEAT_CSV_ORDERED_COLUMNS[index]) for
index, element in enumerate([col("featvec")[i] for i in range(len(feature_vector_labels_integral))])]).replace(
float('nan'), 0)
def make_spec(output_dir, batch_size=None):
fixed_shape = [batch_size, 1] if batch_size is not None else []
spec = {}
spec[LABEL_COLUMN] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
spec[DISPLAY_ID_COLUMN] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
spec[IS_LEAK_COLUMN] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
spec[DISPLAY_ID_AND_IS_LEAK_ENCODED_COLUMN] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64,
default_value=None)
for name in BOOL_COLUMNS:
spec[name] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
for name in FLOAT_COLUMNS_LOG_BIN_TRANSFORM + FLOAT_COLUMNS_SIMPLE_BIN_TRANSFORM:
spec[name] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.float32, default_value=None)
for name in FLOAT_COLUMNS_SIMPLE_BIN_TRANSFORM:
spec[name + '_binned'] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
for name in FLOAT_COLUMNS_LOG_BIN_TRANSFORM:
spec[name + '_binned'] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
spec[name + '_log_01scaled'] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.float32, default_value=None)
for name in INT_COLUMNS:
spec[name + '_log_int'] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
spec[name + '_log_01scaled'] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.float32, default_value=None)
for name in BOOL_COLUMNS + CATEGORICAL_COLUMNS:
spec[name] = tf.FixedLenFeature(shape=fixed_shape, dtype=tf.int64, default_value=None)
for multi_category in DOC_CATEGORICAL_MULTIVALUED_COLUMNS:
shape = fixed_shape[:-1] + [len(DOC_CATEGORICAL_MULTIVALUED_COLUMNS[multi_category])]
spec[multi_category] = tf.FixedLenFeature(shape=shape, dtype=tf.int64)
metadata = dataset_metadata.DatasetMetadata(dataset_schema.from_feature_spec(spec))
metadata_io.write_metadata(metadata, output_dir)
# write out tfrecords meta
make_spec(LOCAL_DATA_TFRECORDS_DIR + '/transformed_metadata', batch_size=batch_size)
def log2_1p(x):
return np.log1p(x) / np.log(2.0)
# calculate min and max stats for the given dataframes all in one go
def compute_min_max_logs(df):
print(str(datetime.datetime.now()) + '\tComputing min and max')
min_logs = {}
max_logs = {}
float_expr = []
for name in trainer.features.FLOAT_COLUMNS_LOG_BIN_TRANSFORM + trainer.features.INT_COLUMNS:
float_expr.append(F.min(name))
float_expr.append(F.max(name))
floatDf = all_df.agg(*float_expr).collect()
for name in trainer.features.FLOAT_COLUMNS_LOG_BIN_TRANSFORM:
minAgg = floatDf[0]["min(" + name + ")"]
maxAgg = floatDf[0]["max(" + name + ")"]
min_logs[name + '_log_01scaled'] = log2_1p(minAgg * 1000)
max_logs[name + '_log_01scaled'] = log2_1p(maxAgg * 1000)
for name in trainer.features.INT_COLUMNS:
minAgg = floatDf[0]["min(" + name + ")"]
maxAgg = floatDf[0]["max(" + name + ")"]
min_logs[name + '_log_01scaled'] = log2_1p(minAgg)
max_logs[name + '_log_01scaled'] = log2_1p(maxAgg)
return min_logs, max_logs
all_df = test_validation_feature_vectors_integral_csv_rdd_df.union(train_feature_vectors_integral_csv_rdd_df)
min_logs, max_logs = compute_min_max_logs(all_df)
if args.submission:
train_output_string = '/sub_train'
eval_output_string = '/test'
else:
train_output_string = '/train'
eval_output_string = '/eval'
path = LOCAL_DATA_TFRECORDS_DIR
def create_tf_example_spark(df, min_logs, max_logs):
result = {}
result[LABEL_COLUMN] = tf.train.Feature(int64_list=tf.train.Int64List(value=df[LABEL_COLUMN].to_list()))
result[DISPLAY_ID_COLUMN] = tf.train.Feature(int64_list=tf.train.Int64List(value=df[DISPLAY_ID_COLUMN].to_list()))
result[IS_LEAK_COLUMN] = tf.train.Feature(int64_list=tf.train.Int64List(value=df[IS_LEAK_COLUMN].to_list()))
encoded_value = df[DISPLAY_ID_COLUMN].multiply(10).add(df[IS_LEAK_COLUMN].clip(lower=0)).to_list()
result[DISPLAY_ID_AND_IS_LEAK_ENCODED_COLUMN] = tf.train.Feature(int64_list=tf.train.Int64List(value=encoded_value))
for name in FLOAT_COLUMNS:
value = df[name].to_list()
result[name] = tf.train.Feature(float_list=tf.train.FloatList(value=value))
for name in FLOAT_COLUMNS_SIMPLE_BIN_TRANSFORM:
value = df[name].multiply(10).astype('int64').to_list()
result[name + '_binned'] = tf.train.Feature(int64_list=tf.train.Int64List(value=value))
for name in FLOAT_COLUMNS_LOG_BIN_TRANSFORM:
value_prelim = df[name].multiply(1000).apply(np.log1p).multiply(1. / np.log(2.0))
value = value_prelim.astype('int64').to_list()
result[name + '_binned'] = tf.train.Feature(int64_list=tf.train.Int64List(value=value))
nn = name + '_log_01scaled'
value = value_prelim.add(-min_logs[nn]).multiply(1. / (max_logs[nn] - min_logs[nn])).to_list()
result[nn] = tf.train.Feature(float_list=tf.train.FloatList(value=value))
for name in INT_COLUMNS:
value_prelim = df[name].apply(np.log1p).multiply(1. / np.log(2.0))
value = value_prelim.astype('int64').to_list()
result[name + '_log_int'] = tf.train.Feature(int64_list=tf.train.Int64List(value=value))
nn = name + '_log_01scaled'
value = value_prelim.add(-min_logs[nn]).multiply(1. / (max_logs[nn] - min_logs[nn])).to_list()
result[nn] = tf.train.Feature(float_list=tf.train.FloatList(value=value))
for name in BOOL_COLUMNS + CATEGORICAL_COLUMNS:
value = df[name].fillna(0).astype('int64').to_list()
result[name] = tf.train.Feature(int64_list=tf.train.Int64List(value=value))
for multi_category in DOC_CATEGORICAL_MULTIVALUED_COLUMNS:
values = []
for category in DOC_CATEGORICAL_MULTIVALUED_COLUMNS[multi_category]:
values = values + [df[category].to_numpy()]
# need to transpose the series so they will be parsed correctly by the FixedLenFeature
# we can pass in a single series here; they'll be reshaped to [batch_size, num_values]
# when parsed from the TFRecord
value = np.stack(values, axis=1).flatten().tolist()
result[multi_category] = tf.train.Feature(int64_list=tf.train.Int64List(value=value))
tf_example = tf.train.Example(features=tf.train.Features(feature=result))
return tf_example
def _transform_to_tfrecords(rdds):
csv = pd.DataFrame(list(rdds), columns=CSV_ORDERED_COLUMNS)
num_rows = len(csv.index)
examples = []
for start_ind in range(0, num_rows, batch_size if batch_size is not None else 1): # for each batch
if start_ind + batch_size - 1 > num_rows: # if we'd run out of rows
csv_slice = csv.iloc[start_ind:]
# drop the remainder
print("last Example has: ", len(csv_slice))
examples.append((create_tf_example_spark(csv_slice, min_logs, max_logs), len(csv_slice)))
return examples
else:
csv_slice = csv.iloc[start_ind:start_ind + (batch_size if batch_size is not None else 1)]
examples.append((create_tf_example_spark(csv_slice, min_logs, max_logs), batch_size))
return examples
max_partition_num = 30
def _transform_to_slices(rdds):
taskcontext = TaskContext.get()
partitionid = taskcontext.partitionId()
csv = pd.DataFrame(list(rdds), columns=CSV_ORDERED_COLUMNS)
num_rows = len(csv.index)
print("working with partition: ", partitionid, max_partition_num, num_rows)
examples = []
for start_ind in range(0, num_rows, batch_size if batch_size is not None else 1): # for each batch
if start_ind + batch_size - 1 > num_rows: # if we'd run out of rows
csv_slice = csv.iloc[start_ind:]
print("last Example has: ", len(csv_slice), partitionid)
examples.append((csv_slice, len(csv_slice)))
return examples
else:
csv_slice = csv.iloc[start_ind:start_ind + (batch_size if batch_size is not None else 1)]
examples.append((csv_slice, len(csv_slice)))
return examples
def _transform_to_tfrecords_from_slices(rdds):
examples = []
for slice in rdds:
if len(slice[0]) != batch_size:
print("slice size is not correct, dropping: ", len(slice[0]))
else:
examples.append(
(bytearray((create_tf_example_spark(slice[0], min_logs, max_logs)).SerializeToString()), None))
return examples
def _transform_to_tfrecords_from_reslice(rdds):
examples = []
all_dataframes = pd.DataFrame([])
for slice in rdds:
all_dataframes = all_dataframes.append(slice[0])
num_rows = len(all_dataframes.index)
examples = []
for start_ind in range(0, num_rows, batch_size if batch_size is not None else 1): # for each batch
if start_ind + batch_size - 1 > num_rows: # if we'd run out of rows
csv_slice = all_dataframes.iloc[start_ind:]
if TEST_SET_MODE:
remain_len = batch_size - len(csv_slice)
(m, n) = divmod(remain_len, len(csv_slice))
print("remainder: ", len(csv_slice), remain_len, m, n)
if m:
for i in range(m):
csv_slice = csv_slice.append(csv_slice)
csv_slice = csv_slice.append(csv_slice.iloc[:n])
print("after fill remainder: ", len(csv_slice))
examples.append(
(bytearray((create_tf_example_spark(csv_slice, min_logs, max_logs)).SerializeToString()), None))
return examples
# drop the remainder
print("dropping remainder: ", len(csv_slice))
return examples
else:
csv_slice = all_dataframes.iloc[start_ind:start_ind + (batch_size if batch_size is not None else 1)]
examples.append(
(bytearray((create_tf_example_spark(csv_slice, min_logs, max_logs)).SerializeToString()), None))
return examples
TEST_SET_MODE = False
train_features = train_feature_vectors_integral_csv_rdd_df.coalesce(30).rdd.mapPartitions(_transform_to_slices)
cached_train_features = train_features.cache()
cached_train_features.count()
train_full = cached_train_features.filter(lambda x: x[1] == batch_size)
# split out slices where we don't have a full batch so that we can reslice them and only drop a minimal number of rows
train_not_full = cached_train_features.filter(lambda x: x[1] < batch_size)
train_examples_full = train_full.mapPartitions(_transform_to_tfrecords_from_slices)
train_left = train_not_full.coalesce(1).mapPartitions(_transform_to_tfrecords_from_reslice)
all_train = train_examples_full.union(train_left)
TEST_SET_MODE = True
valid_features = test_validation_feature_vectors_integral_csv_rdd_df.coalesce(30).rdd.mapPartitions(
_transform_to_slices)
cached_valid_features = valid_features.cache()
cached_valid_features.count()
valid_full = cached_valid_features.filter(lambda x: x[1] == batch_size)
valid_not_full = cached_valid_features.filter(lambda x: x[1] < batch_size)
valid_examples_full = valid_full.mapPartitions(_transform_to_tfrecords_from_slices)
valid_left = valid_not_full.coalesce(1).mapPartitions(_transform_to_tfrecords_from_reslice)
all_valid = valid_examples_full.union(valid_left)
all_train.saveAsNewAPIHadoopFile(LOCAL_DATA_TFRECORDS_DIR + train_output_string,
"org.tensorflow.hadoop.io.TFRecordFileOutputFormat",
keyClass="org.apache.hadoop.io.BytesWritable",
valueClass="org.apache.hadoop.io.NullWritable")
all_valid.saveAsNewAPIHadoopFile(LOCAL_DATA_TFRECORDS_DIR + eval_output_string,
"org.tensorflow.hadoop.io.TFRecordFileOutputFormat",
keyClass="org.apache.hadoop.io.BytesWritable",
valueClass="org.apache.hadoop.io.NullWritable")
spark.stop()
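# Example invocation (illustrative; input and output locations are the hard-coded
# folders defined near the top of this script):
#   spark-submit preproc/preproc4.py --prebatch_size 4096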
|
PyTorch/SpeechRecognition/Jasper/triton/scripts/docker | docker | build_triton_client | #!/bin/bash
# Pull the Triton (formerly TRTIS) client SDK image, initialize the repository submodules, and build the Jasper Triton client image
SCRIPT_DIR=$(cd $(dirname $0); pwd)
PROJECT_DIR=${SCRIPT_DIR}/../../../
docker pull nvcr.io/nvidia/tritonserver:20.10-py3-clientsdk
git submodule update --init --recursive
docker build . --rm -f ${PROJECT_DIR}/triton/Dockerfile -t jasper:triton
|
TensorFlow/Segmentation/VNet/model | model | layers | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
def normalization_layer(inputs, name, mode):
if name == 'batchnorm':
return tf.layers.batch_normalization(inputs=inputs,
axis=-1,
training=(mode == tf.estimator.ModeKeys.TRAIN),
trainable=True,
virtual_batch_size=None)
elif name == 'none':
return inputs
else:
raise ValueError('Invalid normalization layer')
def activation_layer(x, activation):
if activation == 'relu':
return tf.nn.relu(x)
elif activation == 'none':
return x
else:
raise ValueError("Unkown activation {}".format(activation))
def convolution_layer(inputs, filters, kernel_size, stride, normalization, activation, mode):
x = tf.layers.conv3d(inputs=inputs,
filters=filters,
kernel_size=kernel_size,
strides=stride,
activation=None,
padding='same',
data_format='channels_last',
use_bias=True,
kernel_initializer=tf.glorot_uniform_initializer(),
bias_initializer=tf.zeros_initializer(),
bias_regularizer=None)
x = normalization_layer(x, normalization, mode)
return activation_layer(x, activation)
def downsample_layer(inputs, pooling, normalization, activation, mode):
if pooling == 'conv_pool':
return convolution_layer(inputs=inputs,
filters=inputs.get_shape()[-1] * 2,
kernel_size=2,
stride=2,
normalization=normalization,
activation=activation,
mode=mode)
else:
raise ValueError('Invalid downsampling method: {}'.format(pooling))
def upsample_layer(inputs, filters, upsampling, normalization, activation, mode):
if upsampling == 'transposed_conv':
x = tf.layers.conv3d_transpose(inputs=inputs,
filters=filters,
kernel_size=2,
strides=2,
activation=None,
padding='same',
data_format='channels_last',
use_bias=True,
kernel_initializer=tf.glorot_uniform_initializer(),
bias_initializer=tf.zeros_initializer(),
bias_regularizer=None)
x = normalization_layer(x, normalization, mode)
return activation_layer(x, activation)
else:
raise ValueError('Unsupported upsampling: {}'.format(upsampling))
def residual_block(input_0, input_1, kernel_size, depth, normalization, activation, mode):
with tf.name_scope('residual_block'):
x = input_0
if input_1 is not None:
x = tf.concat([input_0, input_1], axis=-1)
inputs = x
n_input_channels = inputs.get_shape()[-1]
for i in range(depth):
x = convolution_layer(inputs=x,
filters=n_input_channels,
kernel_size=kernel_size,
stride=1,
normalization=normalization,
activation=activation,
mode=mode)
return x + inputs
def input_block(inputs, filters, kernel_size, normalization, activation, mode):
with tf.name_scope('conversion_block'):
x = inputs
return convolution_layer(inputs=inputs,
filters=filters,
kernel_size=kernel_size,
stride=1,
normalization=normalization,
activation=activation,
mode=mode) + x
def downsample_block(inputs, depth, kernel_size, pooling, normalization, activation, mode):
with tf.name_scope('downsample_block'):
x = downsample_layer(inputs,
pooling=pooling,
normalization=normalization,
activation=activation,
mode=mode)
return residual_block(input_0=x,
input_1=None,
depth=depth,
kernel_size=kernel_size,
normalization=normalization,
activation=activation,
mode=mode)
def upsample_block(inputs, residual_inputs, depth, kernel_size, upsampling, normalization, activation, mode):
with tf.name_scope('upsample_block'):
x = upsample_layer(inputs,
filters=residual_inputs.get_shape()[-1],
upsampling=upsampling,
normalization=normalization,
activation=activation,
mode=mode)
return residual_block(input_0=x,
input_1=residual_inputs,
depth=depth,
kernel_size=kernel_size,
normalization=normalization,
activation=activation,
mode=mode)
def output_block(inputs, residual_inputs, n_classes, kernel_size, upsampling, normalization, activation, mode):
with tf.name_scope('output_block'):
x = upsample_layer(inputs,
filters=residual_inputs.get_shape()[-1],
upsampling=upsampling,
normalization=normalization,
activation=activation,
mode=mode)
return convolution_layer(inputs=x,
filters=n_classes,
kernel_size=kernel_size,
stride=1,
mode=mode,
activation='none',
normalization='none')
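if __name__ == '__main__':
    # Minimal shape sketch (illustrative only): apply a single convolution block to
    # a dummy 3D volume. The tensor size and hyperparameters below are assumptions
    # chosen for demonstration, not VNet defaults.
    dummy = tf.placeholder(tf.float32, [1, 32, 32, 32, 1])
    out = convolution_layer(inputs=dummy,
                            filters=16,
                            kernel_size=3,
                            stride=1,
                            normalization='batchnorm',
                            activation='relu',
                            mode=tf.estimator.ModeKeys.PREDICT)
    print(out.shape)  # expected: (1, 32, 32, 32, 16)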
|
TensorFlow2/Recommendation/WideAndDeep/triton | triton | requirements | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
model_navigator[tf] @ git+https://github.com/triton-inference-server/[email protected]#egg=model_navigator
natsort>=7.0.0
networkx==2.5
numpy
onnx>=1.8.0,<1.9.0
onnxruntime-gpu==1.8.1
pycuda>=2019.1.2
PyYAML>=5.2
tabulate>=0.8.7
tf2onnx>=1.9.0,<1.10.0
tqdm>=4.44.1
|
TensorFlow2/Detection/Efficientdet/scripts/docker | docker | interactive | #!/bin/bash
docker run --runtime=nvidia \
-v $BACKBONE_CKPT:/workspace/checkpoints/efficientnet-b0-joc \
-v $CKPT:/workspace/checkpoints/efficientdet-tf2 \
-v ${DATA:-/mnt/nvdl/datasets/coco_master/coco2017_tfrecords}:/workspace/coco \
--rm --name=${name:-interactive} \
--shm-size=30g --ulimit memlock=-1 --ulimit stack=67108864 \
--ipc=host -p 0.0.0.0:${PORT:-6007}:${PORT:-6007} -t -i \
${DOCKER:-effdet_tf2:latest} bash |
TensorFlow/LanguageModeling/BERT/utils | utils | dllogger_class | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dllogger import Logger, StdOutBackend, JSONStreamBackend, Verbosity
import numpy
class dllogger_class():
def format_step(self, step):
if isinstance(step, str):
return step
elif isinstance(step, int):
return "Iteration: {} ".format(step)
elif len(step) > 0:
return "Iteration: {} ".format(step[0])
else:
return ""
def __init__(self, log_path="bert_dllog.json"):
self.logger = Logger([
StdOutBackend(Verbosity.DEFAULT, step_format=self.format_step),
JSONStreamBackend(Verbosity.VERBOSE, log_path),
])
self.logger.metadata("mlm_loss", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "TRAIN"})
self.logger.metadata("nsp_loss", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "TRAIN"})
self.logger.metadata("avg_loss_step", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "TRAIN"})
self.logger.metadata("total_loss", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "TRAIN"})
self.logger.metadata("loss", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "TRAIN"})
self.logger.metadata("f1", {"unit": None, "format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "VAL"})
self.logger.metadata("precision", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "VAL"})
self.logger.metadata("recall", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "VAL"})
self.logger.metadata("mcc", {"format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "VAL"})
self.logger.metadata("exact_match", {"unit": None, "format": ":.4f", "GOAL": "MINIMIZE", "STAGE": "VAL"})
self.logger.metadata(
"throughput_train",
{"unit": "sequences/s", "format": ":.3f", "GOAL": "MAXIMIZE", "STAGE": "TRAIN"},
)
self.logger.metadata(
"throughput_inf",
{"unit": "sequences/s", "format": ":.3f", "GOAL": "MAXIMIZE", "STAGE": "VAL"},
)
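if __name__ == "__main__":
    # Usage sketch: the log path and metric values below are illustrative
    # assumptions, not values emitted by the actual training scripts.
    dllogging = dllogger_class(log_path="/tmp/bert_dllog.json")
    dllogging.logger.log(step="PARAMETER", data={"batch_size": 8}, verbosity=Verbosity.DEFAULT)
    dllogging.logger.log(step=(0,), data={"throughput_train": 123.4}, verbosity=Verbosity.DEFAULT)
    dllogging.logger.flush()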
|
TensorFlow/Classification/ConvNets/triton | triton | dataloader | import logging
from pathlib import Path
import numpy as np
from PIL import Image
from rn50_model import HEIGHT, WIDTH
LOGGER = logging.getLogger(__name__)
def get_dataloader_fn(
*, data_dir: str, batch_size: int = 1, width: int = WIDTH, height: int = HEIGHT, images_num: int = None
):
image_extensions = [".gif", ".png", ".jpeg", ".jpg"]
image_paths = sorted([p for p in Path(data_dir).rglob("*") if p.suffix.lower() in image_extensions])
if images_num is not None:
image_paths = image_paths[:images_num]
LOGGER.info(
f"Creating PIL dataloader on data_dir={data_dir} #images={len(image_paths)} "
f"image_size=({width}, {height}) batch_size={batch_size}"
)
def _dataloader_fn():
batch = []
for image_path in image_paths:
img = Image.open(image_path.as_posix()).convert('RGB')
img = img.resize((width, height))
img = np.array(img).astype(np.float32)
true_class = np.array([int(image_path.parent.name)])
assert tuple(img.shape) == (height, width, 3)
img = img[np.newaxis, ...]
batch.append((img, image_path.as_posix(), true_class))
if len(batch) >= batch_size:
ids = [image_path for _, image_path, *_ in batch]
x = {
"input": np.concatenate([img for img, *_ in batch]),
}
y_real = {"classes": np.concatenate([class_ for *_, class_ in batch])}
batch = []
yield ids, x, y_real
return _dataloader_fn
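if __name__ == "__main__":
    # Usage sketch (directory layout, batch size and image count are assumptions):
    # images are expected under per-class subdirectories whose names are integer
    # class ids, as implied by `int(image_path.parent.name)` above.
    logging.basicConfig(level=logging.INFO)
    dataloader_fn = get_dataloader_fn(data_dir="./data/val", batch_size=2, images_num=4)
    for ids, x, y_real in dataloader_fn():
        print(ids, x["input"].shape, y_real["classes"].shape)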
|
PyTorch/LanguageModeling/BART/scripts/docker | docker | launch | #!/bin/bash
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
CMD=${1:-/bin/bash}
NV_VISIBLE_DEVICES=${2:-"all"}
docker run --gpus $NV_VISIBLE_DEVICES -it --rm --ipc=host \
-v ${PWD}:/workspace/bart bart_pyt $CMD
|
TensorFlow/Detection/SSD/models/research/object_detection/utils | utils | np_box_ops | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Operations for [N, 4] numpy arrays representing bounding boxes.
Example box operations that are supported:
* Areas: compute bounding box areas
* IOU: pairwise intersection-over-union scores
"""
import numpy as np
def area(boxes):
"""Computes area of boxes.
Args:
boxes: Numpy array with shape [N, 4] holding N boxes
Returns:
    a numpy array with shape [N] representing box areas
"""
return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
def intersection(boxes1, boxes2):
"""Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
    a numpy array with shape [N, M] representing pairwise intersection areas
"""
[y_min1, x_min1, y_max1, x_max1] = np.split(boxes1, 4, axis=1)
[y_min2, x_min2, y_max2, x_max2] = np.split(boxes2, 4, axis=1)
all_pairs_min_ymax = np.minimum(y_max1, np.transpose(y_max2))
all_pairs_max_ymin = np.maximum(y_min1, np.transpose(y_min2))
intersect_heights = np.maximum(
np.zeros(all_pairs_max_ymin.shape),
all_pairs_min_ymax - all_pairs_max_ymin)
all_pairs_min_xmax = np.minimum(x_max1, np.transpose(x_max2))
all_pairs_max_xmin = np.maximum(x_min1, np.transpose(x_min2))
intersect_widths = np.maximum(
np.zeros(all_pairs_max_xmin.shape),
all_pairs_min_xmax - all_pairs_max_xmin)
return intersect_heights * intersect_widths
def iou(boxes1, boxes2):
"""Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
    boxes2: a numpy array with shape [M, 4] holding M boxes.
Returns:
a numpy array with shape [N, M] representing pairwise iou scores.
"""
intersect = intersection(boxes1, boxes2)
area1 = area(boxes1)
area2 = area(boxes2)
union = np.expand_dims(area1, axis=1) + np.expand_dims(
area2, axis=0) - intersect
return intersect / union
def ioa(boxes1, boxes2):
"""Computes pairwise intersection-over-area between box collections.
Intersection-over-area (ioa) between two boxes box1 and box2 is defined as
their intersection area over box2's area. Note that ioa is not symmetric,
that is, IOA(box1, box2) != IOA(box2, box1).
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
    boxes2: a numpy array with shape [M, 4] holding M boxes.
Returns:
a numpy array with shape [N, M] representing pairwise ioa scores.
"""
intersect = intersection(boxes1, boxes2)
areas = np.expand_dims(area(boxes2), axis=0)
return intersect / areas
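if __name__ == '__main__':
  # Tiny self-check with made-up boxes in [y_min, x_min, y_max, x_max] order.
  b1 = np.array([[0.0, 0.0, 2.0, 2.0]])
  b2 = np.array([[1.0, 1.0, 3.0, 3.0], [0.0, 0.0, 2.0, 2.0]])
  print(area(b1))     # -> [4.]
  print(iou(b1, b2))  # -> approximately [[0.1428571, 1.0]]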
|
PaddlePaddle/LanguageModeling/BERT | BERT | squad_dataset | # Copyright (c) 2022 NVIDIA Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import json
import paddle
from tokenizer import _is_whitespace
def create_squad_data_holder():
input_ids = paddle.static.data(
name="input_ids", shape=[-1, -1], dtype="int64")
segment_ids = paddle.static.data(
name="segment_ids", shape=[-1, -1], dtype="int64")
start_positions = paddle.static.data(
name="start_positions", shape=[-1, 1], dtype="int64")
end_positions = paddle.static.data(
name="end_positions", shape=[-1, 1], dtype="int64")
unique_id = paddle.static.data(
name="unique_id", shape=[-1, 1], dtype="int64")
return input_ids, segment_ids, start_positions, end_positions, unique_id
class SquadExample:
"""
    A single training/test example for question answering on SQuAD.
For examples without an answer, the start and end position are -1.
"""
def __init__(self,
qas_id,
question_text,
doc_tokens,
orig_answer_text=None,
start_position=None,
end_position=None,
is_impossible=False):
self.qas_id = qas_id
self.question_text = question_text
self.doc_tokens = doc_tokens
self.orig_answer_text = orig_answer_text
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
class InputFeatures:
"""A single set of features of data."""
def __init__(self,
unique_id,
example_index,
doc_span_index,
tokens,
token_to_orig_map,
token_is_max_context,
input_ids,
input_mask,
segment_ids,
start_position=None,
end_position=None,
is_impossible=None):
self.unique_id = unique_id
self.example_index = example_index
self.doc_span_index = doc_span_index
self.tokens = tokens
self.token_to_orig_map = token_to_orig_map
self.token_is_max_context = token_is_max_context
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
class SQuAD(paddle.io.Dataset):
def __init__(self,
tokenizer,
mode='train',
version_2_with_negative=False,
path=None,
doc_stride=128,
max_query_length=64,
max_seq_length=512):
self.version_2_with_negative = version_2_with_negative
self.path = path
self.tokenizer = tokenizer
self.doc_stride = doc_stride
self.max_query_length = max_query_length
self.max_seq_length = max_seq_length
self._transform_func = None
if mode == 'train':
self.is_training = True
else:
self.is_training = False
self._read()
self.features = self.convert_examples_to_features(
self.examples,
tokenizer=self.tokenizer,
doc_stride=self.doc_stride,
max_query_length=self.max_query_length,
max_seq_length=self.max_seq_length)
def convert_examples_to_features(self, examples, tokenizer, max_seq_length,
doc_stride, max_query_length):
"""Loads a data file into a list of `InputBatch`s."""
unique_id = 1000000000
features = []
for (example_index, example) in enumerate(examples):
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
query_tokens = query_tokens[0:max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
tok_start_position = None
tok_end_position = None
if self.is_training and example.is_impossible:
tok_start_position = -1
tok_end_position = -1
if self.is_training and not example.is_impossible:
tok_start_position = orig_to_tok_index[example.start_position]
if example.end_position < len(example.doc_tokens) - 1:
tok_end_position = orig_to_tok_index[example.end_position +
1] - 1
else:
tok_end_position = len(all_doc_tokens) - 1
(tok_start_position,
tok_end_position) = self._improve_answer_span(
all_doc_tokens, tok_start_position, tok_end_position,
tokenizer, example.orig_answer_text)
# The -3 accounts for [CLS], [SEP] and [SEP]
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
# We can have documents that are longer than the maximum sequence length.
# To deal with this we do a sliding window approach, where we take chunks
# of the up to our max length with a stride of `doc_stride`.
_DocSpan = collections.namedtuple( # pylint: disable=invalid-name
"DocSpan", ["start", "length"])
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(_DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, doc_stride)
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in query_tokens:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[
split_token_index]
is_max_context = self._check_is_max_context(
doc_spans, doc_span_index, split_token_index)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = input_ids + [
tokenizer.vocab[tokenizer.pad_token]
for _ in range(self.max_seq_length - len(input_ids))
]
segment_ids = segment_ids + [
tokenizer.vocab[tokenizer.pad_token]
for _ in range(self.max_seq_length - len(segment_ids))
]
input_mask = [1] * len(input_ids)
start_position = None
end_position = None
if self.is_training and not example.is_impossible:
# For training, if our document chunk does not contain an annotation
# we throw it out, since there is nothing to predict.
doc_start = doc_span.start
doc_end = doc_span.start + doc_span.length - 1
out_of_span = False
if not (tok_start_position >= doc_start and
tok_end_position <= doc_end):
out_of_span = True
if out_of_span:
start_position = 0
end_position = 0
else:
doc_offset = len(query_tokens) + 2
start_position = tok_start_position - doc_start + doc_offset
end_position = tok_end_position - doc_start + doc_offset
if self.is_training and example.is_impossible:
start_position = 0
end_position = 0
features.append(
InputFeatures(
unique_id=unique_id,
example_index=example_index,
doc_span_index=doc_span_index,
tokens=tokens,
token_to_orig_map=token_to_orig_map,
token_is_max_context=token_is_max_context,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
start_position=start_position,
end_position=end_position,
is_impossible=example.is_impossible))
unique_id += 1
return features
def _improve_answer_span(self, doc_tokens, input_start, input_end,
tokenizer, orig_answer_text):
"""Returns tokenized answer spans that better match the annotated answer."""
# The SQuAD annotations are character based. We first project them to
# whitespace-tokenized words. But then after WordPiece tokenization, we can
# often find a "better match". For example:
#
# Question: What year was John Smith born?
# Context: The leader was John Smith (1895-1943).
# Answer: 1895
#
# The original whitespace-tokenized answer will be "(1895-1943).". However
# after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
# the exact answer, 1895.
#
# However, this is not always possible. Consider the following:
#
        # Question: What country is the top exporter of electronics?
        # Context: The Japanese electronics industry is the largest in the world.
# Answer: Japan
#
# In this case, the annotator chose "Japan" as a character sub-span of
# the word "Japanese". Since our WordPiece tokenizer does not split
# "Japanese", we just use "Japanese" as the annotation. This is fairly rare
# in SQuAD, but does happen.
tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
for new_start in range(input_start, input_end + 1):
for new_end in range(input_end, new_start - 1, -1):
text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
if text_span == tok_answer_text:
return (new_start, new_end)
return (input_start, input_end)
def _check_is_max_context(self, doc_spans, cur_span_index, position):
"""Check if this is the 'max context' doc span for the token."""
# Because of the sliding window approach taken to scoring documents, a single
# token can appear in multiple documents. E.g.
# Doc: the man went to the store and bought a gallon of milk
# Span A: the man went to the
# Span B: to the store and bought
# Span C: and bought a gallon of
# ...
#
# Now the word 'bought' will have two scores from spans B and C. We only
# want to consider the score with "maximum context", which we define as
# the *minimum* of its left and right context (the *sum* of left and
# right context will always be the same, of course).
#
# In the example the maximum context for 'bought' would be span C since
# it has 1 left context and 3 right context, while span B has 4 left context
# and 0 right context.
best_score = None
best_span_index = None
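# Score every span that contains the token by min(left context, right context), with a small bonus for
# longer spans as a tie-breaker, and keep the best-scoring span.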
for (span_index, doc_span) in enumerate(doc_spans):
end = doc_span.start + doc_span.length - 1
if position < doc_span.start:
continue
if position > end:
continue
num_left_context = position - doc_span.start
num_right_context = end - position
score = min(num_left_context,
num_right_context) + 0.01 * doc_span.length
if best_score is None or score > best_score:
best_score = score
best_span_index = span_index
return cur_span_index == best_span_index
def _read(self):
with open(self.path, "r", encoding="utf8") as reader:
input_data = json.load(reader)["data"]
examples = []
for entry in input_data:
for paragraph in entry["paragraphs"]:
paragraph_text = paragraph["context"]
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if _is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
for qa in paragraph["qas"]:
qas_id = qa["id"]
question_text = qa["question"]
start_position = None
end_position = None
orig_answer_text = None
is_impossible = False
if self.is_training:
if self.version_2_with_negative:
is_impossible = qa["is_impossible"]
if (len(qa["answers"]) != 1) and (not is_impossible):
raise ValueError(
"For training, each question should have exactly 1 answer."
)
if not is_impossible:
answer = qa["answers"][0]
orig_answer_text = answer["text"]
answer_offset = answer["answer_start"]
answer_length = len(orig_answer_text)
start_position = char_to_word_offset[answer_offset]
try:
end_position = char_to_word_offset[
answer_offset + answer_length - 1]
except:
continue
else:
start_position = -1
end_position = -1
orig_answer_text = ""
else:
if self.version_2_with_negative:
is_impossible = qa["is_impossible"]
orig_answer_text = []
if not is_impossible and 'answers' in qa.keys():
answers = qa["answers"]
for answer in answers:
orig_answer_text.append(answer["text"])
else:
start_position = -1
end_position = -1
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position,
is_impossible=is_impossible)
examples.append(example)
self.examples = examples
def __len__(self):
return len(self.features)
def __getitem__(self, idx):
feature = self.features[idx]
if self.is_training:
return feature.input_ids, feature.segment_ids, feature.unique_id, feature.start_position, feature.end_position
else:
return feature.input_ids, feature.segment_ids, feature.unique_id
|
TensorFlow2/Recommendation/WideAndDeep/triton/runner/maintainer/docker/containers | containers | __init__ | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .triton_server_container import TritonServerContainer # noqa: F401
|
TensorFlow2/Classification/ConvNets/model/blocks | blocks | mb_conv_block | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from typing import Any, Dict, Optional, Text, Tuple
from model.layers import get_activation
from model.blocks import conv2d_block
__all__ = ['mb_conv_block']
def mb_conv_block(inputs: tf.Tensor,
block: dict,
config: dict,
prefix: Text = None):
"""Mobile Inverted Residual Bottleneck.
Args:
inputs: the Keras input to the block
block: BlockConfig, arguments to create a Block
config: ModelConfig, a set of model parameters
prefix: prefix for naming all layers
Returns:
the output of the block
"""
use_se = config.mparams.use_se if 'use_se' in config.mparams else block['se_ratio'] is not None
activation = get_activation(config.mparams.activation)
drop_connect_rate = config.mparams.drop_connect_rate
data_format = tf.keras.backend.image_data_format()
use_depthwise = block['conv_type'] != 'no_depthwise'
prefix = prefix or ''
filters = block['input_filters'] * block['expand_ratio']
x = inputs
if block['fused_conv']:
# If we use fused mbconv, skip expansion and use regular conv.
x = conv2d_block(x,
filters,
config,
kernel_size=block['kernel_size'],
strides=block['strides'],
activation=activation,
name=prefix + 'fused')
else:
if block['expand_ratio'] != 1:
# Expansion phase
kernel_size = (1, 1) if use_depthwise else (3, 3)
x = conv2d_block(x,
filters,
config,
kernel_size=kernel_size,
activation=activation,
name=prefix + 'expand')
# Depthwise Convolution
if use_depthwise:
x = conv2d_block(x,
conv_filters=None,
config=config,
kernel_size=block['kernel_size'],
strides=block['strides'],
activation=activation,
depthwise=True,
name=prefix + 'depthwise')
# Squeeze and Excitation phase
if use_se:
assert block['se_ratio'] is not None
assert 0 < block['se_ratio'] <= 1
num_reduced_filters = max(1, int(
block['input_filters'] * block['se_ratio']
))
if data_format == 'channels_first':
se_shape = (filters, 1, 1)
else:
se_shape = (1, 1, filters)
se = tf.keras.layers.GlobalAveragePooling2D(name=prefix + 'se_squeeze')(x)
se = tf.keras.layers.Reshape(se_shape, name=prefix + 'se_reshape')(se)
se = conv2d_block(se,
num_reduced_filters,
config,
use_bias=True,
use_batch_norm=False,
activation=activation,
name=prefix + 'se_reduce')
se = conv2d_block(se,
filters,
config,
use_bias=True,
use_batch_norm=False,
activation='sigmoid',
name=prefix + 'se_expand')
x = tf.keras.layers.multiply([x, se], name=prefix + 'se_excite')
# Output phase
x = conv2d_block(x,
block['output_filters'],
config,
activation=None,
name=prefix + 'project')
# Add identity so that quantization-aware training can insert quantization
# ops correctly.
x = tf.keras.layers.Activation(get_activation('identity'),
name=prefix + 'id')(x)
if (block['id_skip']
and all(s == 1 for s in block['strides'])
and block['input_filters'] == block['output_filters']):
if drop_connect_rate and drop_connect_rate > 0:
# Apply dropconnect
# The only difference between dropout and dropconnect in TF is scaling by
# drop_connect_rate during training. See:
# https://github.com/keras-team/keras/pull/9898#issuecomment-380577612
x = tf.keras.layers.Dropout(drop_connect_rate,
noise_shape=(None, 1, 1, 1),
name=prefix + 'drop')(x)
x = tf.keras.layers.add([x, inputs], name=prefix + 'add')
return x |
PyTorch/Classification/ConvNets/resnext101-32x4d/training/AMP | AMP | DGXA100_resnext101-32x4d_AMP_250E | python ./multiproc.py --nproc_per_node 8 ./launch.py --model resnext101-32x4d --precision AMP --mode convergence --platform DGXA100 /imagenet --workspace ${1:-./} --raport-file raport.json
|
PyTorch/Classification/GPUNet/triton/deployment_toolkit/triton_performance_runner | triton_performance_runner | runner | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# method from PEP-366 to support relative import in executed modules
import logging
import pathlib
from typing import List, Optional
if __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ..core import EvaluationMode, MeasurementMode, OfflineMode, PerformanceTool
from .model_analyzer import ModelAnalyzerRunner
from .perf_analyzer import PerfAnalyzerRunner, PerfAnalyzerWarmupRunner
LOGGER = logging.getLogger("triton_performance_runner")
class TritonPerformanceRunner:
def __init__(
self,
server_url: str,
model_name: str,
input_data: str,
input_shapes: List[str],
batch_sizes: List[int],
concurrency: List[int],
measurement_mode: MeasurementMode,
measurement_interval: int,
measurement_request_count: int,
evaluation_mode: EvaluationMode,
offline_mode: OfflineMode,
output_shared_memory_size: int,
performance_tool: PerformanceTool,
model_repository: str,
result_path: pathlib.Path,
warmup: bool,
timeout: Optional[int],
verbose: bool,
):
self._warmup_runner = None
if warmup:
LOGGER.info("Running warmup before the main test")
self._warmup_runner = PerfAnalyzerWarmupRunner(
server_url=server_url,
model_name=model_name,
input_data=input_data,
input_shapes=input_shapes,
batch_sizes=batch_sizes,
concurrency=concurrency,
measurement_mode=measurement_mode,
measurement_interval=measurement_interval,
measurement_request_count=measurement_request_count,
evaluation_mode=evaluation_mode,
offline_mode=offline_mode,
output_shared_memory_size=output_shared_memory_size,
timeout=timeout,
)
if performance_tool == PerformanceTool.MODEL_ANALYZER:
LOGGER.info("Using Model Analyzer for performance evaluation")
self._runner = ModelAnalyzerRunner(
server_url=server_url,
model_name=model_name,
input_data=input_data,
input_shapes=input_shapes,
batch_sizes=batch_sizes,
concurrency=concurrency,
measurement_mode=measurement_mode,
measurement_interval=measurement_interval,
measurement_request_count=measurement_request_count,
evaluation_mode=evaluation_mode,
offline_mode=offline_mode,
output_shared_memory_size=output_shared_memory_size,
model_repository=model_repository,
result_path=result_path,
timeout=timeout,
verbose=verbose,
)
elif performance_tool == PerformanceTool.PERF_ANALYZER:
LOGGER.info("Using Perf Analyzer for performance evaluation")
self._runner = PerfAnalyzerRunner(
server_url=server_url,
model_name=model_name,
input_data=input_data,
input_shapes=input_shapes,
batch_sizes=batch_sizes,
measurement_mode=measurement_mode,
measurement_interval=measurement_interval,
measurement_request_count=measurement_request_count,
concurrency=concurrency,
evaluation_mode=evaluation_mode,
offline_mode=offline_mode,
output_shared_memory_size=output_shared_memory_size,
result_path=result_path,
timeout=timeout,
verbose=verbose,
)
else:
raise ValueError(f"Unsupported performance tool {performance_tool}")
def run(self):
if self._warmup_runner:
self._warmup_runner.run()
self._runner.run()
|
TensorFlow2/Recommendation/WideAndDeep/triton/runner | runner | triton | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
# method from PEP-366 to support relative import in executed modules
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .core import Framework, Paths
class Triton:
"""
Triton Inference Server helper class
"""
image = "nvcr.io/nvidia/tritonserver"
tag = "py3"
class LOAD_MODE:
"""
Loading mode available in Triton
"""
POLL = "poll"
EXPLICIT = "explicit"
@staticmethod
def container_image(container_version: str):
"""
Container image based on version
Args:
container_version: Version of container to be used
Returns:
Image name with tag
"""
return f"{Triton.image}:{container_version}-{Triton.tag}"
@staticmethod
def command(
framework: str,
repository_path: str,
strict_mode: bool = False,
poll_model: bool = False,
metrics: bool = False,
verbose: bool = False,
):
"""
Command to run Triton Inference Server inside container
Args:
framework: Framework used for model
repository_path: Path to model repository
strict_mode: Flag to use strict model config
poll_model: Use the polling model control mode instead of explicit model loading
metrics: Enable GPU metrics (disable for MIG)
verbose: Use verbose mode logging
Returns:
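Command string used to start Triton Inference Server with the requested options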
"""
triton_command = f"tritonserver --model-store={repository_path}"
if poll_model:
triton_command += " --model-control-mode=poll --repository-poll-secs 5"
else:
triton_command += " --model-control-mode=explicit"
if not strict_mode:
triton_command += " --strict-model-config=false"
if not metrics:
triton_command += " --allow-metrics=false --allow-gpu-metrics=false"
if verbose:
triton_command += " --log-verbose 1"
if framework in (Framework.TensorFlow1, Framework.TensorFlow2):
version = 1 if framework == Framework.TensorFlow1 else 2
triton_command += f" --backend-config=tensorflow,version={version}"
return triton_command
@staticmethod
def library_path(framework: str):
"""
Obtain custom library path for framework
Args:
framework: Framework used for model
Returns:
Path to additional libraries needed by framework
"""
paths = {
Framework.PyTorch.name: "/opt/tritonserver/backends/pytorch",
Framework.TensorFlow1.name: "/opt/tritonserver/backends/tensorflow1",
Framework.TensorFlow2.name: "/opt/tritonserver/backends/tensorflow2",
}
return paths[framework]
@staticmethod
def custom_library_path_remote() -> str:
"""
Path to custom library mounted in Triton container
Returns:
Path to shared library with custom operations
"""
return f"{Paths.LIBRARIES_PATH}/libcustomops.so"
@staticmethod
def custom_library_path_local(libs_dir: pathlib.Path) -> pathlib.Path:
"""
Path to custom library in local path
Args:
libs_dir: path to libraries directory
Returns:
Path to shared library with custom operations
"""
return libs_dir / "libcustomops.so"
|
TensorFlow/Classification/ConvNets/triton/scripts/docker | docker | triton_inference_server | #!/usr/bin/env bash
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:=all}
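# MODEL_REPOSITORY_PATH must point to a Triton model repository on the host; it is mounted into the container under the same path.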
docker run --rm -d \
-p 8000:8000 \
-p 8001:8001 \
-p 8002:8002 \
--runtime=nvidia \
-e NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES} \
-v ${MODEL_REPOSITORY_PATH}:${MODEL_REPOSITORY_PATH} \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
nvcr.io/nvidia/tritonserver:20.12-py3 tritonserver \
--model-store=${MODEL_REPOSITORY_PATH} \
--strict-model-config=false \
--exit-on-error=true \
--model-control-mode=explicit
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/model-config/tacotron2waveglow | tacotron2waveglow | mapping | # sequence-number symbol
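# Maps Tacotron2 input symbol IDs to text symbols: uppercase and lowercase letters share the same IDs
# (case-insensitive input), and entries prefixed with '@' are ARPAbet phoneme codes.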
0 _
1 -
2 !
3 '
4 (
5 )
6 ,
7 .
8 :
9 ;
10 ?
11
38 A
39 B
40 C
41 D
42 E
43 F
44 G
45 H
46 I
47 J
48 K
49 L
50 M
51 N
52 O
53 P
54 Q
55 R
56 S
57 T
58 U
59 V
60 W
61 X
62 Y
63 Z
38 a
39 b
40 c
41 d
42 e
43 f
44 g
45 h
46 i
47 j
48 k
49 l
50 m
51 n
52 o
53 p
54 q
55 r
56 s
57 t
58 u
59 v
60 w
61 x
62 y
63 z
64 @AA
65 @AA0
66 @AA1
67 @AA2
68 @AE
69 @AE0
70 @AE1
71 @AE2
72 @AH
73 @AH0
74 @AH1
75 @AH2
76 @AO
77 @AO0
78 @AO1
79 @AO2
80 @AW
81 @AW0
82 @AW1
83 @AW2
84 @AY
85 @AY0
86 @AY1
87 @AY2
88 @B
89 @CH
90 @D
91 @DH
92 @EH
93 @EH0
94 @EH1
95 @EH2
96 @ER
97 @ER0
98 @ER1
99 @ER2
100 @EY
101 @EY0
102 @EY1
103 @EY2
104 @F
105 @G
106 @HH
107 @IH
108 @IH0
109 @IH1
110 @IH2
111 @IY
112 @IY0
113 @IY1
114 @IY2
115 @JH
116 @K
117 @L
118 @M
119 @N
120 @NG
121 @OW
122 @OW0
123 @OW1
124 @OW2
125 @OY
126 @OY0
127 @OY1
128 @OY2
129 @P
130 @R
131 @S
132 @SH
133 @T
134 @TH
135 @UH
136 @UH0
137 @UH1
138 @UH2
139 @UW
140 @UW0
141 @UW1
142 @UW2
143 @V
144 @W
145 @Y
146 @Z
147 @ZH
|
TensorFlow/Segmentation/UNet_Industrial/datasets | datasets | core | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
import os
from abc import ABC, abstractmethod
import math
import tensorflow as tf
__all__ = ["BaseDataset"]
class BaseDataset(ABC):
authorized_normalization_methods = [None, "zero_centered", "zero_one"]
def __init__(self, data_dir):
self.data_dir = data_dir
if not os.path.exists(data_dir):
raise FileNotFoundError("The dataset directory `%s` does not exist." % data_dir)
@staticmethod
def _count_steps(iter_unit, num_samples, num_iter, global_batch_size):
if iter_unit not in ["batch", "epoch"]:
raise ValueError("Invalid `iter_unit` value: %s" % iter_unit)
if iter_unit == 'epoch':
num_steps = (num_samples // global_batch_size) * num_iter
num_epochs = num_iter
else:
num_steps = num_iter
num_epochs = math.ceil(num_steps / (num_samples // global_batch_size))
return num_steps, num_epochs
@abstractmethod
def dataset_name(self):
raise NotImplementedError
@abstractmethod
def get_dataset_runtime_specs(self, training, iter_unit, num_iter, global_batch_size):
# return filenames, num_samples, num_steps, num_epochs
raise NotImplementedError
@abstractmethod
def dataset_fn(
self,
batch_size,
training,
input_shape,
mask_shape,
num_threads,
use_gpu_prefetch,
normalize_data_method,
only_defective_images,
augment_data,
seed=None
):
if normalize_data_method not in BaseDataset.authorized_normalization_methods:
raise ValueError(
'Unknown `normalize_data_method`: %s - Authorized: %s' %
(normalize_data_method, BaseDataset.authorized_normalization_methods)
)
def synth_dataset_fn(
self,
batch_size,
training,
input_shape,
mask_shape,
num_threads,
use_gpu_prefetch,
normalize_data_method,
only_defective_images,
augment_data,
seed=None
):
if normalize_data_method not in BaseDataset.authorized_normalization_methods:
raise ValueError(
'Unknown `normalize_data_method`: %s - Authorized: %s' %
(normalize_data_method, BaseDataset.authorized_normalization_methods)
)
input_shape = [batch_size] + list(input_shape)
mask_shape = [batch_size] + list(mask_shape)
# Convert the inputs to a Dataset
if normalize_data_method is None:
mean_val = 127.5
elif normalize_data_method == "zero_centered":
mean_val = 0
else:
mean_val = 0.5
inputs = tf.truncated_normal(
input_shape, dtype=tf.float32, mean=mean_val, stddev=1, seed=seed, name='synth_inputs'
)
masks = tf.truncated_normal(mask_shape, dtype=tf.float32, mean=0.01, stddev=0.1, seed=seed, name='synth_masks')
labels = tf.random_uniform([batch_size], minval=0, maxval=1, dtype=tf.int32, name='synthetic_labels')
dataset = tf.data.Dataset.from_tensors(((inputs, masks), labels))
dataset = dataset.cache()
dataset = dataset.repeat()
dataset = dataset.prefetch(buffer_size=tf.contrib.data.AUTOTUNE)
if use_gpu_prefetch:
dataset = dataset.apply(tf.data.experimental.prefetch_to_device(device="/gpu:0", buffer_size=batch_size * 8))
return dataset
|
TensorFlow2/LanguageModeling/ELECTRA/scripts | scripts | benchmark_squad | #!/usr/bin/env bash
# Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
mode=${1:-"train"}
num_gpu=${2:-"8"}
batch_size=${3:-"16"}
infer_batch_size=${4:-"$batch_size"}
precision=${5:-"amp"}
SQUAD_VERSION=${6:-"1.1"}
squad_dir=${7:-"/workspace/electra/data/download/squad/v$SQUAD_VERSION"}
OUT_DIR=${8:-"results/"}
init_checkpoint=${9:-"None"}
cache_dir=${10:-"$squad_dir"}
bash scripts/run_squad.sh google/electra-base-discriminator 1 $batch_size $infer_batch_size 8e-4 $precision $num_gpu $RANDOM $SQUAD_VERSION $squad_dir $OUT_DIR $init_checkpoint $mode interactive $cache_dir 200
|
TensorFlow/Detection/SSD/models/research/object_detection/dataset_tools | dataset_tools | create_pascal_tf_record_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test for create_pascal_tf_record.py."""
import os
import numpy as np
import PIL.Image
import tensorflow as tf
from object_detection.dataset_tools import create_pascal_tf_record
class CreatePascalTFRecordTest(tf.test.TestCase):
def _assertProtoEqual(self, proto_field, expectation):
"""Helper function to assert if a proto field equals some value.
Args:
proto_field: The protobuf field to compare.
expectation: The expected value of the protobuf field.
"""
proto_list = [p for p in proto_field]
self.assertListEqual(proto_list, expectation)
def test_dict_to_tf_example(self):
image_file_name = 'tmp_image.jpg'
image_data = np.random.rand(256, 256, 3)
save_path = os.path.join(self.get_temp_dir(), image_file_name)
image = PIL.Image.fromarray(image_data, 'RGB')
image.save(save_path)
data = {
'folder': '',
'filename': image_file_name,
'size': {
'height': 256,
'width': 256,
},
'object': [
{
'difficult': 1,
'bndbox': {
'xmin': 64,
'ymin': 64,
'xmax': 192,
'ymax': 192,
},
'name': 'person',
'truncated': 0,
'pose': '',
},
],
}
label_map_dict = {
'background': 0,
'person': 1,
'notperson': 2,
}
example = create_pascal_tf_record.dict_to_tf_example(
data, self.get_temp_dir(), label_map_dict, image_subdirectory='')
self._assertProtoEqual(
example.features.feature['image/height'].int64_list.value, [256])
self._assertProtoEqual(
example.features.feature['image/width'].int64_list.value, [256])
self._assertProtoEqual(
example.features.feature['image/filename'].bytes_list.value,
[image_file_name])
self._assertProtoEqual(
example.features.feature['image/source_id'].bytes_list.value,
[image_file_name])
self._assertProtoEqual(
example.features.feature['image/format'].bytes_list.value, ['jpeg'])
self._assertProtoEqual(
example.features.feature['image/object/bbox/xmin'].float_list.value,
[0.25])
self._assertProtoEqual(
example.features.feature['image/object/bbox/ymin'].float_list.value,
[0.25])
self._assertProtoEqual(
example.features.feature['image/object/bbox/xmax'].float_list.value,
[0.75])
self._assertProtoEqual(
example.features.feature['image/object/bbox/ymax'].float_list.value,
[0.75])
self._assertProtoEqual(
example.features.feature['image/object/class/text'].bytes_list.value,
['person'])
self._assertProtoEqual(
example.features.feature['image/object/class/label'].int64_list.value,
[1])
self._assertProtoEqual(
example.features.feature['image/object/difficult'].int64_list.value,
[1])
self._assertProtoEqual(
example.features.feature['image/object/truncated'].int64_list.value,
[0])
self._assertProtoEqual(
example.features.feature['image/object/view'].bytes_list.value, [''])
if __name__ == '__main__':
tf.test.main()
|
TensorFlow/Detection/SSD/models/research/slim/nets | nets | inception_v4_test | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.inception_v4."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from nets import inception
class InceptionTest(tf.test.TestCase):
def testBuildLogits(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v4(inputs, num_classes)
auxlogits = end_points['AuxLogits']
predictions = end_points['Predictions']
self.assertTrue(auxlogits.op.name.startswith('InceptionV4/AuxLogits'))
self.assertListEqual(auxlogits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue(predictions.op.name.startswith(
'InceptionV4/Logits/Predictions'))
self.assertListEqual(predictions.get_shape().as_list(),
[batch_size, num_classes])
def testBuildPreLogitsNetwork(self):
batch_size = 5
height, width = 299, 299
num_classes = None
inputs = tf.random_uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v4(inputs, num_classes)
self.assertTrue(net.op.name.startswith('InceptionV4/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 1536])
self.assertFalse('Logits' in end_points)
self.assertFalse('Predictions' in end_points)
def testBuildWithoutAuxLogits(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, endpoints = inception.inception_v4(inputs, num_classes,
create_aux_logits=False)
self.assertFalse('AuxLogits' in endpoints)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testAllEndPointsShapes(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v4(inputs, num_classes)
endpoints_shapes = {'Conv2d_1a_3x3': [batch_size, 149, 149, 32],
'Conv2d_2a_3x3': [batch_size, 147, 147, 32],
'Conv2d_2b_3x3': [batch_size, 147, 147, 64],
'Mixed_3a': [batch_size, 73, 73, 160],
'Mixed_4a': [batch_size, 71, 71, 192],
'Mixed_5a': [batch_size, 35, 35, 384],
# 4 x Inception-A blocks
'Mixed_5b': [batch_size, 35, 35, 384],
'Mixed_5c': [batch_size, 35, 35, 384],
'Mixed_5d': [batch_size, 35, 35, 384],
'Mixed_5e': [batch_size, 35, 35, 384],
# Reduction-A block
'Mixed_6a': [batch_size, 17, 17, 1024],
# 7 x Inception-B blocks
'Mixed_6b': [batch_size, 17, 17, 1024],
'Mixed_6c': [batch_size, 17, 17, 1024],
'Mixed_6d': [batch_size, 17, 17, 1024],
'Mixed_6e': [batch_size, 17, 17, 1024],
'Mixed_6f': [batch_size, 17, 17, 1024],
'Mixed_6g': [batch_size, 17, 17, 1024],
'Mixed_6h': [batch_size, 17, 17, 1024],
# Reduction-B block
'Mixed_7a': [batch_size, 8, 8, 1536],
# 3 x Inception-C blocks
'Mixed_7b': [batch_size, 8, 8, 1536],
'Mixed_7c': [batch_size, 8, 8, 1536],
'Mixed_7d': [batch_size, 8, 8, 1536],
# Logits and predictions
'AuxLogits': [batch_size, num_classes],
'global_pool': [batch_size, 1, 1, 1536],
'PreLogitsFlatten': [batch_size, 1536],
'Logits': [batch_size, num_classes],
'Predictions': [batch_size, num_classes]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v4_base(inputs)
self.assertTrue(net.op.name.startswith(
'InceptionV4/Mixed_7d'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 8, 8, 1536])
expected_endpoints = [
'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3', 'Mixed_3a',
'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a',
'Mixed_7b', 'Mixed_7c', 'Mixed_7d']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
for name, op in end_points.items():
self.assertTrue(op.name.startswith('InceptionV4/' + name))
def testBuildOnlyUpToFinalEndpoint(self):
batch_size = 5
height, width = 299, 299
all_endpoints = [
'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3', 'Mixed_3a',
'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a',
'Mixed_7b', 'Mixed_7c', 'Mixed_7d']
for index, endpoint in enumerate(all_endpoints):
with tf.Graph().as_default():
inputs = tf.random_uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_v4_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV4/' + endpoint))
self.assertItemsEqual(all_endpoints[:index+1], end_points.keys())
def testVariablesSetDevice(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
# Force all Variables to reside on the device.
with tf.variable_scope('on_cpu'), tf.device('/cpu:0'):
inception.inception_v4(inputs, num_classes)
with tf.variable_scope('on_gpu'), tf.device('/gpu:0'):
inception.inception_v4(inputs, num_classes)
for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='on_cpu'):
self.assertDeviceEqual(v.device, '/cpu:0')
for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='on_gpu'):
self.assertDeviceEqual(v.device, '/gpu:0')
def testHalfSizeImages(self):
batch_size = 5
height, width = 150, 150
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v4(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7d']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 3, 3, 1536])
def testGlobalPool(self):
batch_size = 1
height, width = 350, 400
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v4(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7d']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 9, 11, 1536])
def testGlobalPoolUnknownImageShape(self):
batch_size = 1
height, width = 350, 400
num_classes = 1000
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, (batch_size, None, None, 3))
logits, end_points = inception.inception_v4(
inputs, num_classes, create_aux_logits=False)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7d']
images = tf.random_uniform((batch_size, height, width, 3))
sess.run(tf.global_variables_initializer())
logits_out, pre_pool_out = sess.run([logits, pre_pool],
{inputs: images.eval()})
self.assertTupleEqual(logits_out.shape, (batch_size, num_classes))
self.assertTupleEqual(pre_pool_out.shape, (batch_size, 9, 11, 1536))
def testUnknownBatchSize(self):
batch_size = 1
height, width = 299, 299
num_classes = 1000
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_v4(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random_uniform((batch_size, height, width, 3))
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEquals(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 299, 299
num_classes = 1000
with self.test_session() as sess:
eval_inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = inception.inception_v4(eval_inputs,
num_classes,
is_training=False)
predictions = tf.argmax(logits, 1)
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
with self.test_session() as sess:
train_inputs = tf.random_uniform((train_batch_size, height, width, 3))
inception.inception_v4(train_inputs, num_classes)
eval_inputs = tf.random_uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_v4(eval_inputs,
num_classes,
is_training=False,
reuse=True)
predictions = tf.argmax(logits, 1)
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (eval_batch_size,))
def testNoBatchNormScaleByDefault(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with tf.contrib.slim.arg_scope(inception.inception_v4_arg_scope()):
inception.inception_v4(inputs, num_classes, is_training=False)
self.assertEqual(tf.global_variables('.*/BatchNorm/gamma:0$'), [])
def testBatchNormScale(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with tf.contrib.slim.arg_scope(
inception.inception_v4_arg_scope(batch_norm_scale=True)):
inception.inception_v4(inputs, num_classes, is_training=False)
gamma_names = set(
v.op.name for v in tf.global_variables('.*/BatchNorm/gamma:0$'))
self.assertGreater(len(gamma_names), 0)
for v in tf.global_variables('.*/BatchNorm/moving_mean:0$'):
self.assertIn(v.op.name[:-len('moving_mean')] + 'gamma', gamma_names)
if __name__ == '__main__':
tf.test.main()
|
TensorFlow/Detection/SSD/models/research/object_detection/models | models | embedded_ssd_mobilenet_v1_feature_extractor | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Embedded-friendly SSDFeatureExtractor for MobilenetV1 features."""
import tensorflow as tf
from object_detection.meta_architectures import ssd_meta_arch
from object_detection.models import feature_map_generators
from object_detection.utils import context_manager
from object_detection.utils import ops
from nets import mobilenet_v1
slim = tf.contrib.slim
class EmbeddedSSDMobileNetV1FeatureExtractor(ssd_meta_arch.SSDFeatureExtractor):
"""Embedded-friendly SSD Feature Extractor using MobilenetV1 features.
This feature extractor is similar to SSD MobileNetV1 feature extractor, and
it fixes input resolution to be 256x256, reduces the number of feature maps
used for box prediction and ensures convolution kernel to be no larger
than input tensor in spatial dimensions.
This feature extractor requires support of the following ops if used in
embedded devices:
- Conv
- DepthwiseConv
- Relu6
All conv/depthwiseconv use SAME padding, and no additional spatial padding is
needed.
"""
def __init__(self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
reuse_weights=None,
use_explicit_padding=False,
use_depthwise=False,
override_base_feature_extractor_hyperparams=False):
"""MobileNetV1 Feature Extractor for Embedded-friendly SSD Models.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to. For EmbeddedSSD it must be set to 1.
conv_hyperparams_fn: A function to construct tf slim arg_scope for conv2d
and separable_conv2d ops in the layers that are added on top of the
base feature extractor.
reuse_weights: Whether to reuse variables. Default is None.
use_explicit_padding: Whether to use explicit padding when extracting
features. Default is False.
use_depthwise: Whether to use depthwise convolutions. Default is False.
override_base_feature_extractor_hyperparams: Whether to override
hyperparameters of the base feature extractor with the one from
`conv_hyperparams_fn`.
Raises:
ValueError: upon invalid `pad_to_multiple` values.
"""
if pad_to_multiple != 1:
raise ValueError('Embedded-specific SSD only supports `pad_to_multiple` '
'of 1.')
super(EmbeddedSSDMobileNetV1FeatureExtractor, self).__init__(
is_training, depth_multiplier, min_depth, pad_to_multiple,
conv_hyperparams_fn, reuse_weights, use_explicit_padding, use_depthwise,
override_base_feature_extractor_hyperparams)
def preprocess(self, resized_inputs):
"""SSD preprocessing.
Maps pixel values to the range [-1, 1].
Args:
resized_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
"""
return (2.0 / 255.0) * resized_inputs - 1.0
def extract_features(self, preprocessed_inputs):
"""Extract features from preprocessed inputs.
Args:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
feature_maps: a list of tensors where the ith tensor has shape
[batch, height_i, width_i, depth_i]
Raises:
ValueError: if image height or width are not 256 pixels.
"""
image_shape = preprocessed_inputs.get_shape()
image_shape.assert_has_rank(4)
image_height = image_shape[1].value
image_width = image_shape[2].value
if image_height is None or image_width is None:
shape_assert = tf.Assert(
tf.logical_and(tf.equal(tf.shape(preprocessed_inputs)[1], 256),
tf.equal(tf.shape(preprocessed_inputs)[2], 256)),
['image size must be 256 in both height and width.'])
with tf.control_dependencies([shape_assert]):
preprocessed_inputs = tf.identity(preprocessed_inputs)
elif image_height != 256 or image_width != 256:
raise ValueError('image size must be = 256 in both height and width;'
' image dim = %d,%d' % (image_height, image_width))
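# Two feature maps are taken directly from the MobileNetV1 base (the named 'from_layer' entries); the
# remaining three come from extra layers whose kernel sizes ([3, 3, 2]) are capped so they never exceed
# the spatial size of the shrinking feature maps.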
feature_map_layout = {
'from_layer': [
'Conv2d_11_pointwise', 'Conv2d_13_pointwise', '', '', ''
],
'layer_depth': [-1, -1, 512, 256, 256],
'conv_kernel_size': [-1, -1, 3, 3, 2],
'use_explicit_padding': self._use_explicit_padding,
'use_depthwise': self._use_depthwise,
}
with tf.variable_scope('MobilenetV1',
reuse=self._reuse_weights) as scope:
with slim.arg_scope(
mobilenet_v1.mobilenet_v1_arg_scope(is_training=None)):
with (slim.arg_scope(self._conv_hyperparams_fn())
if self._override_base_feature_extractor_hyperparams
else context_manager.IdentityContextManager()):
_, image_features = mobilenet_v1.mobilenet_v1_base(
ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
final_endpoint='Conv2d_13_pointwise',
min_depth=self._min_depth,
depth_multiplier=self._depth_multiplier,
use_explicit_padding=self._use_explicit_padding,
scope=scope)
with slim.arg_scope(self._conv_hyperparams_fn()):
feature_maps = feature_map_generators.multi_resolution_feature_maps(
feature_map_layout=feature_map_layout,
depth_multiplier=self._depth_multiplier,
min_depth=self._min_depth,
insert_1x1_conv=True,
image_features=image_features)
return feature_maps.values()
|
JAX/LanguageModeling/T5X | T5X | README | T5X is a framework for training, evaluation, and inference of sequence models (starting with language). It is based on [JAX](https://github.com/google/jax) and [Flax](https://github.com/google/flax). To learn more, see the [T5X Paper](https://arxiv.org/abs/2203.17189).
# T5X on GPUs
Please refer to [Rosetta T5X](https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/t5x), NVIDIA's project that enables seamless training of LLMs, CV models and multimodal models in JAX, for information about running models and experiments on GPUs in T5X.
|
PyTorch/Classification/ConvNets/triton | triton | run_offline_performance_test_on_triton | #!/usr/bin/env python3
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
For models with variable-sized inputs you must provide the --input-shape argument so that perf_analyzer knows
what shape tensors to use. For example, for a model that has an input called IMAGE that has shape [ 3, N, M ],
where N and M are variable-size dimensions, to tell perf_analyzer to send batch-size 4 requests of shape [ 3, 224, 224 ],
use `--shape IMAGE:3,224,224`.
"""
import argparse
import csv
import os
import sys
from pathlib import Path
from typing import Dict, List, Optional
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = Path(__file__).parent.name
from .deployment_toolkit.report import save_results, show_results, sort_results
from .deployment_toolkit.warmup import warmup
def calculate_average_latency(r):
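# Approximate end-to-end latency as the sum of the per-stage latency columns reported in the perf_client CSV output.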
avg_sum_fields = [
"Client Send",
"Network+Server Send/Recv",
"Server Queue",
"Server Compute",
"Server Compute Input",
"Server Compute Infer",
"Server Compute Output",
"Client Recv",
]
avg_latency = sum([int(r.get(f, 0)) for f in avg_sum_fields])
return avg_latency
def update_performance_data(results: List, batch_size: int, performance_partial_file: str):
row: Dict = {"batch_size": batch_size}
with open(performance_partial_file, "r") as csvfile:
reader = csv.DictReader(csvfile)
for r in reader:
avg_latency = calculate_average_latency(r)
row = {**row, **r, "avg latency": avg_latency}
results.append(row)
def _parse_batch_sizes(batch_sizes: str):
batches = batch_sizes.split(sep=",")
return list(map(lambda x: int(x.strip()), batches))
def offline_performance(
model_name: str,
batch_sizes: List[int],
result_path: str,
input_shapes: Optional[List[str]] = None,
profiling_data: str = "random",
triton_instances: int = 1,
server_url: str = "localhost",
measurement_window: int = 10000,
shared_memory: bool = False
):
print("\n")
print(f"==== Static batching analysis start ====")
print("\n")
input_shapes = " ".join(map(lambda shape: f" --shape {shape}", input_shapes)) if input_shapes else ""
results: List[Dict] = list()
for batch_size in batch_sizes:
print(f"Running performance tests for batch size: {batch_size}")
performance_partial_file = f"triton_performance_partial_{batch_size}.csv"
exec_args = f"""-max-threads {triton_instances} \
-m {model_name} \
-x 1 \
-c {triton_instances} \
-t {triton_instances} \
-p {measurement_window} \
-v \
-i http \
-u {server_url}:8000 \
-b {batch_size} \
-f {performance_partial_file} \
--input-data {profiling_data} {input_shapes}"""
if shared_memory:
exec_args += " --shared-memory=cuda"
result = os.system(f"perf_client {exec_args}")
if result != 0:
print(f"Failed running performance tests. Perf client failed with exit code {result}")
sys.exit(1)
update_performance_data(results, batch_size, performance_partial_file)
os.remove(performance_partial_file)
results = sort_results(results=results)
save_results(filename=result_path, data=results)
show_results(results=results)
print("Performance results for static batching stored in: {0}".format(result_path))
print("\n")
print(f"==== Analysis done ====")
print("\n")
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--model-name", type=str, required=True, help="Name of the model to test")
parser.add_argument(
"--input-data", type=str, required=False, default="random", help="Input data to perform profiling."
)
parser.add_argument(
"--input-shape",
action="append",
required=False,
help="Input data shape in form INPUT_NAME:<full_shape_without_batch_axis>.",
)
parser.add_argument("--batch-sizes", type=str, required=True, help="List of batch sizes to tests. Comma separated.")
parser.add_argument("--result-path", type=str, required=True, help="Path where result file is going to be stored.")
parser.add_argument("--triton-instances", type=int, default=1, help="Number of Triton Server instances")
parser.add_argument("--server-url", type=str, required=False, default="localhost", help="Url to Triton server")
parser.add_argument(
"--measurement-window", required=False, help="Time which perf_analyzer will wait for results", default=10000
)
parser.add_argument("--shared-memory", help="Use shared memory for communication with Triton", action="store_true",
default=False)
args = parser.parse_args()
warmup(
server_url=args.server_url,
model_name=args.model_name,
batch_sizes=_parse_batch_sizes(args.batch_sizes),
triton_instances=args.triton_instances,
profiling_data=args.input_data,
input_shapes=args.input_shape,
measurement_window=args.measurement_window,
shared_memory=args.shared_memory
)
offline_performance(
server_url=args.server_url,
model_name=args.model_name,
batch_sizes=_parse_batch_sizes(args.batch_sizes),
triton_instances=args.triton_instances,
profiling_data=args.input_data,
input_shapes=args.input_shape,
result_path=args.result_path,
measurement_window=args.measurement_window,
shared_memory=args.shared_memory
)
if __name__ == "__main__":
main()
|
Tools/DGLPyTorch/SyntheticGraphGeneration/scripts | scripts | get_datasets | # Note: Each user is responsible for checking the content of the datasets and the applicable licenses, and for determining whether they are suitable for the intended use
if [ ! "$(ls | grep -c ^scripts$)" -eq 1 ]; then
echo "Run this script from root directory. Usage: bash ./scripts/get_datasets.sh"
exit 1
fi
mkdir -p data
cd data || exit 1
# Lastfm
echo "Processing lastfm ..."
echo "@inproceedings{feather,
title={{Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models}},
author={Benedek Rozemberczki and Rik Sarkar},
year={2020},
pages = {1325–1334},
booktitle={Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
organization={ACM},
}"
if [ "$(ls | grep -c "^lasftm_asia$")" -ge 1 ]; then
echo "Lastfm directory already exists, skipping ..."
else
wget https://snap.stanford.edu/data/lastfm_asia.zip
unzip lastfm_asia.zip
rm lastfm_asia.zip
fi
# Twitch
echo "Processing Twitch ..."
echo "@misc{rozemberczki2019multiscale,
title={Multi-scale Attributed Node Embedding},
author={Benedek Rozemberczki and Carl Allen and Rik Sarkar},
year={2019},
eprint={1909.13021},
archivePrefix={arXiv},
primaryClass={cs.LG}
}"
if [ "$(ls | grep -c "^twitch$")" -ge 1 ]; then
echo "Twitch directory already exists, skipping ..."
else
mkdir -p twitch && cd twitch || exit 1
wget https://snap.stanford.edu/data/twitch_gamers.zip && unzip twitch_gamers.zip
rm twitch_gamers.zip
cd ..
fi
# Orkut
echo "Processing Orkut ..."
echo "@inproceedings{yang2012defining,
title={Defining and evaluating network communities based on ground-truth},
author={Yang, Jaewon and Leskovec, Jure},
booktitle={Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics},
pages={1--8},
year={2012}
}"
if [ "$(ls | grep -c "^orkut$")" -ge 1 ]; then
echo "Orkut directory already exists, skipping ..."
else
mkdir -p orkut && cd orkut || exit 1
wget https://snap.stanford.edu/data/bigdata/communities/com-orkut.ungraph.txt.gz && gzip -d com-orkut.ungraph.txt.gz
rm com-orkut.ungraph.txt.gz
cd ..
fi
# Tabformer
echo "Processing tabformer ..."
echo "@inproceedings{padhi2021tabular,
title={Tabular transformers for modeling multivariate time series},
author={Padhi, Inkit and Schiff, Yair and Melnyk, Igor and Rigotti, Mattia and Mroueh, Youssef and Dognin, Pierre and Ross, Jerret and Nair, Ravi and Altman, Erik},
booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={3565--3569},
year={2021},
organization={IEEE},
url={https://ieeexplore.ieee.org/document/9414142}
}"
if [ "$(ls | grep -c "^tabformer$")" -ge 1 ]; then
echo "Tabformer directory already exists, skipping ..."
else
if [ "$(ls | grep -c "^transactions.tgz$")" -eq 0 ]; then
echo "transactions.tgz not found, skipping ..."
echo "Download tabformer manually - https://github.com/IBM/TabFormer/tree/main/data/credit_card/ and store it as ./data/transactions.tgz"
else
mkdir -p tabformer && mv transactions.tgz tabformer && cd tabformer || exit 1
tar zxvf transactions.tgz
mv transactions.tgz ..
python ../../scripts/time_filter_tabformer.py ./card_transaction.v1.csv
rm card_transaction.v1.csv
cd ..
fi
fi
# IEEE
echo "Processing IEEE ..."
# kaggle competitions download -c ieee-fraud-detection
if [ "$(ls | grep -c "^ieee-fraud$")" -ge 1 ]; then
echo "IEEE directory already exists, skipping ..."
else
if [ "$(ls | grep -c "^ieee-fraud-detection.zip$")" -eq 0 ]; then
echo "ieee-fraud-detection.zip not found, skipping ..."
echo "Download IEEE manually from https://www.kaggle.com/competitions/ieee-fraud-detection/data and store it as ./data/ieee-fraud-detection.zip"
# kaggle competitions download -c ieee-fraud-detection // exemplary command to download
else
mkdir -p ieee-fraud && mv ieee-fraud-detection.zip ieee-fraud && cd ieee-fraud || exit 1
unzip ieee-fraud-detection.zip "*_transaction.csv"
mv ieee-fraud-detection.zip ..
python ../../scripts/ieee_fraud.py .
rm *_transaction.csv
cd ..
fi
fi
# Paysim
echo "Processing Paysim ..."
if [ "$(ls | grep -c "^paysim$")" -ge 1 ]; then
echo "Paysim directory already exists, skipping ..."
else
if [ "$(ls | grep -c "^paysim.zip$")" -eq 0 ]; then
echo "paysim.zip not found, skipping ..."
echo "Download paysim manually from https://www.kaggle.com/datasets/ealaxi/paysim1/download?datasetVersionNumber=2 and store it as ./data/paysim.zip"
#kaggle datasets download -d ealaxi/paysim1 #exemplary command to download
else
mkdir -p paysim && mv paysim.zip paysim && cd paysim || exit 1
unzip paysim.zip
mv paysim.zip ..
cd ..
fi
fi
# credit
echo "Processing credit ..."
if [ "$(ls | grep "^credit$")" -ge 1 ]; then
echo "credit directory already exists, skipping ..."
else
if [ "$(ls | grep -c "^credit.zip$")" -eq 0 ]; then
echo "credit.zip not found, skipping ..."
echo "Download credit manually from https://www.kaggle.com/datasets/kartik2112/fraud-detection/download?datasetVersionNumber=1 and store it as ./data/credit.zip"
# kaggle datasets download -d kartik2112/fraud-detection // exemplary command to download
else
mkdir -p credit && mv credit.zip credit && cd credit || exit 1
unzip credit.zip "fraudTrain.csv"
mv credit.zip ..
python ../../scripts/time_filter_credit.py ./fraudTrain.csv
rm "fraudTrain.csv"
cd ..
fi
fi
# CORA
echo "Processing CORA ..."
echo "@article{sen:aim08,
title = {Collective Classification in Network Data},
author = {Prithviraj Sen, Galileo Mark Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad},
journal = {AI Magazine},
year = {2008},
publisher = {AAAI},
pages = {93--106},
volume = {29},
number = {3},
}"
if [ "$(ls | grep -c "^cora$")" -ge 1 ]; then
echo "CORA directory already exists, skipping ..."
else
python -m syngen preprocess --source-path=./cora --dataset=cora --download
fi
# Rating
echo "Processing Rating ..."
if [ "$(ls | grep -c "^epinions$")" -ge 1 ]; then
echo "Rating file already exists, skipping ..."
else
python -m syngen preprocess --source-path=./epinions --dataset=epinions --download
fi
|
TensorFlow2/Segmentation/nnUNet/models | models | layers | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import nv_norms
import tensorflow as tf
import tensorflow_addons as tfa
convolutions = {
"Conv2d": tf.keras.layers.Conv2D,
"Conv3d": tf.keras.layers.Conv3D,
"ConvTranspose2d": tf.keras.layers.Conv2DTranspose,
"ConvTranspose3d": tf.keras.layers.Conv3DTranspose,
}
class KaimingNormal(tf.keras.initializers.VarianceScaling):
def __init__(self, negative_slope, seed=None):
super().__init__(
scale=2.0 / (1 + negative_slope**2), mode="fan_in", distribution="untruncated_normal", seed=seed
)
def get_config(self):
return {"seed": self.seed}
def get_norm(name):
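# The normalization layer is selected by substring match on the configured name; all variants operate on
# the channels-last axis (axis=-1).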
if "group" in name:
return tfa.layers.GroupNormalization(32, axis=-1, center=True, scale=True)
elif "batch" in name:
return tf.keras.layers.BatchNormalization(axis=-1, center=True, scale=True)
elif "atex_instance" in name:
return nv_norms.InstanceNormalization(axis=-1)
elif "instance" in name:
return tfa.layers.InstanceNormalization(axis=-1, center=True, scale=True)
elif "none" in name:
return tf.identity
else:
raise ValueError("Invalid normalization layer")
def extract_args(kwargs):
args = {}
if "input_shape" in kwargs:
args["input_shape"] = kwargs["input_shape"]
return args
def get_conv(filters, kernel_size, stride, dim, use_bias=False, **kwargs):
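# 'same'-padded N-D convolution with Kaiming/He initialization matched to the LeakyReLU negative slope
# used downstream in ConvLayer.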
conv = convolutions[f"Conv{dim}d"]
return conv(
filters=filters,
kernel_size=kernel_size,
strides=stride,
padding="same",
use_bias=use_bias,
kernel_initializer=KaimingNormal(kwargs["negative_slope"]),
data_format="channels_last",
**extract_args(kwargs),
)
def get_transp_conv(filters, kernel_size, stride, dim, **kwargs):
conv = convolutions[f"ConvTranspose{dim}d"]
return conv(
filters=filters,
kernel_size=kernel_size,
strides=stride,
padding="same",
use_bias=True,
data_format="channels_last",
**extract_args(kwargs),
)
class ConvLayer(tf.keras.layers.Layer):
def __init__(self, filters, kernel_size, stride, **kwargs):
super().__init__()
self.conv = get_conv(filters, kernel_size, stride, **kwargs)
self.norm = get_norm(kwargs["norm"])
self.lrelu = tf.keras.layers.LeakyReLU(alpha=kwargs["negative_slope"])
def call(self, data):
out = self.conv(data)
out = self.norm(out)
out = self.lrelu(out)
return out
class ConvBlock(tf.keras.layers.Layer):
def __init__(self, filters, kernel_size, stride, **kwargs):
super().__init__()
self.conv1 = ConvLayer(filters, kernel_size, stride, **kwargs)
kwargs.pop("input_shape", None)
self.conv2 = ConvLayer(filters, kernel_size, 1, **kwargs)
def call(self, input_data):
out = self.conv1(input_data)
out = self.conv2(out)
return out
class UpsampleBlock(tf.keras.layers.Layer):
def __init__(self, filters, kernel_size, stride, **kwargs):
super().__init__()
self.transp_conv = get_transp_conv(filters, stride, stride, **kwargs)
self.conv_block = ConvBlock(filters, kernel_size, 1, **kwargs)
def call(self, input_data, skip_data):
out = self.transp_conv(input_data)
out = tf.concat((out, skip_data), axis=-1)
out = self.conv_block(out)
return out
class OutputBlock(tf.keras.layers.Layer):
def __init__(self, filters, dim, negative_slope):
super().__init__()
self.conv = get_conv(
filters,
kernel_size=1,
stride=1,
dim=dim,
use_bias=True,
negative_slope=negative_slope,
)
def call(self, data):
return self.conv(data)
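# Illustrative composition sketch (shapes and arguments below are assumptions, not part of this module):
#
#   x = tf.random.uniform((1, 64, 64, 64, 1))                      # channels-last 3D input
#   common = dict(dim=3, norm="instance", negative_slope=0.01)
#   enc0 = ConvBlock(filters=16, kernel_size=3, stride=1, **common)
#   enc1 = ConvBlock(filters=32, kernel_size=3, stride=2, **common)
#   dec = UpsampleBlock(filters=16, kernel_size=3, stride=2, **common)
#   head = OutputBlock(filters=3, dim=3, negative_slope=0.01)
#   f0 = enc0(x)                 # (1, 64, 64, 64, 16)
#   f1 = enc1(f0)                # (1, 32, 32, 32, 32)
#   logits = head(dec(f1, f0))   # upsample f1, concat with f0, conv block, then 1x1x1 conv -> (1, 64, 64, 64, 3)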
|
PyTorch/Classification/GPUNet/triton/runner/maintainer/docker | docker | maintainer | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
from typing import Any, Dict, List, Optional, Union
import docker
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ...logger import LOGGER
from ..maintainer import Maintainer
from .container import DockerContainer
from .containers import TritonServerContainer
class DockerMaintainer(Maintainer):
def triton_container(
self, command: str, image: str, devices: List, volumes: Dict, environment: Dict, log_file: Union[pathlib.Path, str]
) -> DockerContainer:
"""
Return triton container
Args:
command: Triton Server command that has to be executed
image: Container image
            devices: List of device ids which have to be available in the container
volumes: Volumes mapping
environment: Environment variables set in container
            log_file: File path where server logs have to be saved
Returns:
DockerContainer object
"""
return TritonServerContainer(
name="triton-server",
command=command,
image=image,
devices=devices,
volumes=volumes,
environment=environment,
log_file=log_file,
)
def build_image(
self,
*,
image_file_path: pathlib.Path,
image_name: str,
workdir_path: Optional[pathlib.Path] = None,
build_args: Optional[Dict[str, Any]] = None,
) -> None:
workdir_path = workdir_path or image_file_path.parent
build_args = build_args or {}
LOGGER.info(f"Building {image_name} docker image.")
LOGGER.debug(f" Using workdir: {workdir_path}")
LOGGER.debug(f" Dockerfile: {image_file_path}")
LOGGER.debug(f" Build args: {build_args}")
build_logs = list()
try:
docker_client = docker.from_env()
_, build_logs = docker_client.images.build(
path=workdir_path.resolve().as_posix(),
dockerfile=image_file_path.resolve().as_posix(),
tag=image_name,
buildargs=build_args,
network_mode="host",
rm=True,
)
except docker.errors.BuildError as e:
build_logs = e.build_log
raise e
finally:
for chunk in build_logs:
log = chunk.get("stream")
if log:
LOGGER.debug(log.rstrip())
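# Illustrative usage sketch (image name, paths, and device/volume values below are assumptions,
# not taken from the runner configuration):
#
#   maintainer = DockerMaintainer()
#   maintainer.build_image(
#       image_file_path=pathlib.Path("Dockerfile.triton"),
#       image_name="model-triton:latest",
#   )
#   container = maintainer.triton_container(
#       command="tritonserver --model-repository=/models",
#       image="model-triton:latest",
#       devices=[0],
#       volumes={"/tmp/model_repo": {"bind": "/models", "mode": "ro"}},
#       environment={},
#       log_file=pathlib.Path("triton.log"),
#   )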
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/synthesizer | synthesizer | configuration_graph_synthesizer | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import json
import os
import shutil
import warnings
from typing import Optional, Literal
import pandas as pd
from syngen.configuration import SynGenDatasetFeatureSpec, SynGenConfiguration
from syngen.generator.tabular import tabular_generators_classes
from syngen.graph_aligner import aligner_classes
from syngen.generator.graph import get_structural_generator_class
from syngen.generator.tabular.utils import tabular_chunk_sample_generation
from syngen.utils.io_utils import (
dump_generated_graph,
load_graph,
load_dataframe,
merge_dataframe_files, dump_dataframe,
)
from syngen.utils.types import DataFrameType, MetaData, DataSourceInputType
from syngen.utils.utils import CustomTimer, dynamic_import, get_object_path, to_ndarray, df_to_pandas, ensure_path
logger = logging.getLogger(__name__)
log = logger
warnings.filterwarnings('ignore')
class ConfigurationGraphSynthesizer(object):
"""A configuration graph synthesizer. Supports generating graph datasets based on the provided configuration. This synthesizer requires a dataset to be fit on
prior to generating graphs of similar properties.
Args:
configuration (SynGenConfiguration): configuration to be used during generation
        timer_path (str): path to the file where the generation process timings will be saved
num_workers (int): number of workers to speed up generation.
save_path (str): path to the directory where the results will be saved
        gpu (bool): flag to use the GPU graph generator (default: True); if set to False, the CPU will be used.
verbose (bool): print intermediate results (default: False)
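        Example (illustrative sketch; assumes ``cfg`` is an already-built SynGenConfiguration):
            synthesizer = ConfigurationGraphSynthesizer(cfg, save_path='./generated', gpu=False)
            synthesizer.fit()
            output_config = synthesizer.generate(return_data=False)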
"""
def __init__(
self,
configuration: SynGenConfiguration,
timer_path: Optional[str] = None,
num_workers: int = 1,
save_path: str = './',
gpu: bool = True,
verbose: bool = False,
**kwargs,
):
self.configuration = configuration
self.num_workers = num_workers
self.verbose = verbose
self.timer = CustomTimer(timer_path, verbose=self.verbose)
self.gpu = gpu
self.save_path = save_path
if not os.path.exists(self.save_path):
os.makedirs(self.save_path)
self.structure_generators = None
self.tabular_generators = None
self.aligners = None
def _fit_tabular_generators(self, tab_gen_configs, feature_info_list,
part: Literal[MetaData.NODES, MetaData.EDGES],
features_to_return=()):
tabular_generators = []
feature_info_dict = {feature[MetaData.NAME]: feature for feature in feature_info_list}
feature_data_cache = {}
for tab_gen_cfg in tab_gen_configs:
gen_info = {'feature_file': tab_gen_cfg.get('feature_file')}
tab_gen_class = tabular_generators_classes[tab_gen_cfg[MetaData.TYPE]]
tab_gen_cfg[MetaData.PARAMS]['gpu'] = tab_gen_cfg[MetaData.PARAMS].get('gpu', self.gpu)
tab_gen_cfg[MetaData.PARAMS]['verbose'] = tab_gen_cfg[MetaData.PARAMS].get('verbose', self.verbose)
perform_fit = True
enforce_fit = tab_gen_cfg.get('perform_fit', False)
generator_dump_path = tab_gen_cfg.get(MetaData.DUMP_PATH, None)
if generator_dump_path and os.path.exists(generator_dump_path) and not enforce_fit:
tab_gen = tab_gen_class.load(generator_dump_path)
perform_fit = False
else:
tab_gen = tab_gen_class(**tab_gen_cfg[MetaData.PARAMS])
if tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.TYPE] == DataSourceInputType.RANDOM:
if perform_fit:
tab_gen.fit(columns=tab_gen_cfg[MetaData.FEATURES_LIST])
if generator_dump_path and perform_fit:
tab_gen.save(generator_dump_path)
tabular_generators.append((tab_gen, gen_info))
continue
categorical_features = []
data_source_feature_info_list = None
if not perform_fit:
pass
elif tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.TYPE] == DataSourceInputType.DATASET:
data_source_path = tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.PATH]
elif tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.TYPE] == DataSourceInputType.CONFIGURATION:
cfg = SynGenDatasetFeatureSpec.instantiate_from_preprocessed(
tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.PATH])
data_source_info = cfg.get_info(part, tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.NAME])
data_source_feature_info_list = data_source_info[MetaData.FEATURES]
data_source_path = os.path.join(tab_gen_cfg[MetaData.DATA_SOURCE][MetaData.PATH],
data_source_info[MetaData.FEATURES_PATH])
else:
raise ValueError("unsupported data_source type")
for feature_name in tab_gen_cfg[MetaData.FEATURES_LIST]:
if feature_info_dict[feature_name][MetaData.FEATURE_TYPE] == MetaData.CATEGORICAL:
categorical_features.append(feature_name)
if not perform_fit and len(features_to_return) == 0:
pass
elif data_source_path in feature_data_cache:
data = feature_data_cache[data_source_path]
else:
# FORCE_CPU_MEM_TRANSFER
data = load_dataframe(data_source_path, feature_info=data_source_feature_info_list)
feature_data_cache[data_source_path] = data
if perform_fit:
tab_gen.fit(data,
categorical_columns=categorical_features,
columns=tab_gen_cfg[MetaData.FEATURES_LIST],
verbose=self.verbose)
if generator_dump_path and perform_fit:
tab_gen.save(ensure_path(generator_dump_path))
tabular_generators.append((tab_gen, gen_info))
if features_to_return:
return_dataframe = pd.DataFrame()
for _, cache_data in feature_data_cache.items():
columns_intersect = list(set(features_to_return) & set(cache_data.columns))
return_dataframe[columns_intersect] = cache_data[columns_intersect]
del feature_data_cache
return_categorical_features = []
for feature_name in features_to_return:
if feature_info_dict[feature_name][MetaData.FEATURE_TYPE] == MetaData.CATEGORICAL:
return_categorical_features.append(feature_name)
return tabular_generators, (return_dataframe, return_categorical_features)
del feature_data_cache
return tabular_generators
def _fit_structural_generator(self, edge_type, return_graph=False):
structure_gen_cfg = edge_type[MetaData.STRUCTURE_GENERATOR]
is_bipartite = edge_type[MetaData.SRC_NODE_TYPE] != edge_type[MetaData.DST_NODE_TYPE]
is_directed = edge_type[MetaData.DIRECTED]
data_source_cfg = structure_gen_cfg[MetaData.DATA_SOURCE]
is_random = data_source_cfg[MetaData.TYPE] == DataSourceInputType.RANDOM
generator_class = get_structural_generator_class(
structure_gen_cfg[MetaData.TYPE],
is_bipartite=is_bipartite,
is_random=is_random,
)
gen_info = dict(is_bipartite=is_bipartite,
is_directed=is_directed,
num_edges=edge_type[MetaData.COUNT],
noise=structure_gen_cfg[MetaData.PARAMS].get('noise', 0.5))
structure_gen_cfg[MetaData.PARAMS]['gpu'] = structure_gen_cfg[MetaData.PARAMS].get('gpu', self.gpu)
structure_gen_cfg[MetaData.PARAMS]['verbose'] = structure_gen_cfg[MetaData.PARAMS].get('verbose', self.verbose)
perform_fit = True
enforce_fit = structure_gen_cfg.get('perform_fit', False)
generator_dump_path = structure_gen_cfg.get(MetaData.DUMP_PATH, None)
if generator_dump_path and os.path.exists(generator_dump_path) and not enforce_fit:
generator = generator_class.load(generator_dump_path)
generator.gpu = structure_gen_cfg[MetaData.PARAMS]['gpu']
generator.verbose = structure_gen_cfg[MetaData.PARAMS]['verbose']
perform_fit = False
else:
generator = generator_class(
**structure_gen_cfg[MetaData.PARAMS]
)
if not perform_fit and not return_graph:
pass
elif data_source_cfg[MetaData.TYPE] == DataSourceInputType.RANDOM:
graph = None
elif data_source_cfg[MetaData.TYPE] == DataSourceInputType.CONFIGURATION:
cfg = SynGenDatasetFeatureSpec.instantiate_from_preprocessed(data_source_cfg[MetaData.PATH])
data_source_edge_info = cfg.get_edge_info(data_source_cfg[MetaData.NAME])
graph_src_set = cfg.get_node_info(data_source_edge_info[MetaData.SRC_NODE_TYPE])[MetaData.COUNT]
graph_path = os.path.join(data_source_cfg[MetaData.PATH], data_source_edge_info[MetaData.STRUCTURE_PATH])
graph = load_graph(graph_path)
else:
raise ValueError("unsupported data_source type")
if is_bipartite:
gen_info['is_directed'] = False
gen_info['num_nodes_src_set'] = self.configuration.get_node_info(
edge_type[MetaData.SRC_NODE_TYPE])[MetaData.COUNT]
gen_info['num_nodes_dst_set'] = self.configuration.get_node_info(
edge_type[MetaData.DST_NODE_TYPE])[MetaData.COUNT]
if perform_fit:
generator.fit(graph, src_set=None, dst_set=None,
is_directed=False, transform_graph=False)
else:
gen_info['num_nodes'] = self.configuration.get_node_info(edge_type[MetaData.SRC_NODE_TYPE])[MetaData.COUNT]
gen_info['has_self_loop'] = structure_gen_cfg[MetaData.PARAMS].get('has_self_loop', False)
if perform_fit:
generator.fit(graph, is_directed=is_directed)
if generator_dump_path and perform_fit:
generator.save(generator_dump_path)
if return_graph:
return (generator, gen_info), graph, graph_src_set
return generator, gen_info
def _fit_aligners(self, aligner_cfgs, graphs_to_process, features_to_align):
aligners = []
for aligner_cfg in aligner_cfgs:
aligner_class = aligner_classes[aligner_cfg[MetaData.TYPE]]
aligner_graphs = {graph_name: graphs_to_process[graph_name] for graph_name in aligner_cfg[MetaData.GRAPHS]}
aligner_node_features = {feature_name: features_to_align[MetaData.NODES][feature_name]
for feature_name in aligner_cfg[MetaData.NODES]}
aligner_edge_features = {feature_name: features_to_align[MetaData.EDGES][feature_name]
for feature_name in aligner_cfg[MetaData.EDGES]}
aligner = aligner_class(**aligner_cfg[MetaData.PARAMS])
aligner.fit(aligner_graphs, aligner_node_features, aligner_edge_features)
aligners.append((
aligner,
{
graph_name: {
MetaData.SRC_NODE_TYPE: graph_info[MetaData.SRC_NODE_TYPE],
MetaData.DST_NODE_TYPE: graph_info[MetaData.DST_NODE_TYPE]
}
for graph_name, graph_info in aligner_graphs.items()
}
))
del features_to_align
del graphs_to_process
return aligners
def fit(
self,
):
"""Fit the synthesizer on graph.
"""
self.structure_generators = {}
self.tabular_generators = {MetaData.NODES: {}, MetaData.EDGES: {}}
self.aligners = []
graphs_to_process = {}
features_to_align = {MetaData.NODES: {}, MetaData.EDGES: {}}
if MetaData.ALIGNERS in self.configuration:
for aligner_cfg in self.configuration[MetaData.ALIGNERS]:
for graph_name in aligner_cfg[MetaData.GRAPHS]:
graphs_to_process[graph_name] = None
for part in [MetaData.NODES, MetaData.EDGES]:
if aligner_cfg[part]:
for part_name, feature_names in aligner_cfg[part].items():
if part_name not in features_to_align[part]:
features_to_align[part][part_name] = {
MetaData.FEATURES_LIST: set(),
}
features_to_align[part][part_name][MetaData.FEATURES_LIST] |= set(feature_names)
self.timer.start_counter('fit')
self.timer.start_counter('fit_nodes')
for node_type in self.configuration[MetaData.NODES]:
node_name = node_type[MetaData.NAME]
if MetaData.TABULAR_GENERATORS in node_type:
self.timer.start_counter(f'fit_node_{node_name}')
if node_name in features_to_align[MetaData.NODES]:
self.tabular_generators[MetaData.NODES][node_name], (features_data, cat_cols) = \
self._fit_tabular_generators(
node_type[MetaData.TABULAR_GENERATORS], node_type[MetaData.FEATURES], MetaData.NODES,
features_to_return=list(features_to_align[MetaData.NODES][node_name][MetaData.FEATURES_LIST])
)
features_to_align[MetaData.NODES][node_name][MetaData.FEATURES_DATA] = features_data
features_to_align[MetaData.NODES][node_name][MetaData.CATEGORICAL_COLUMNS] = cat_cols
else:
self.tabular_generators[MetaData.NODES][node_name] = self._fit_tabular_generators(
node_type[MetaData.TABULAR_GENERATORS], node_type[MetaData.FEATURES], MetaData.NODES
)
self.timer.end_counter(f'fit_node_{node_name}',
f'NODE {node_name} FIT TOOK')
self.timer.end_counter('fit_nodes', 'FIT NODES TOOK')
self.timer.start_counter('fit_edges')
for edge_type in self.configuration[MetaData.EDGES]:
edge_name = edge_type[MetaData.NAME]
if MetaData.STRUCTURE_GENERATOR in edge_type:
self.timer.start_counter(f'fit_edges_struct_{edge_name}')
if edge_name in graphs_to_process:
graphs_to_process[edge_name] = {
MetaData.SRC_NODE_TYPE: edge_type[MetaData.SRC_NODE_TYPE],
MetaData.DST_NODE_TYPE: edge_type[MetaData.DST_NODE_TYPE],
}
self.structure_generators[edge_name], \
graphs_to_process[edge_name][MetaData.STRUCTURE_DATA], \
graphs_to_process[edge_name]['src_size'] = self._fit_structural_generator(edge_type, return_graph=True)
else:
self.structure_generators[edge_name] = self._fit_structural_generator(edge_type)
self.timer.end_counter(f'fit_edges_struct_{edge_name}',
f'EDGE {edge_name} STRUCTURAL FIT TOOK')
if MetaData.TABULAR_GENERATORS in edge_type:
self.timer.start_counter(f'fit_edges_tabular_{edge_name}')
if edge_name in features_to_align[MetaData.EDGES]:
self.tabular_generators[MetaData.EDGES][edge_name], (features_data, cat_cols) = \
self._fit_tabular_generators(
edge_type[MetaData.TABULAR_GENERATORS], edge_type[MetaData.FEATURES], MetaData.EDGES,
features_to_return=list(features_to_align[MetaData.EDGES][edge_name][MetaData.FEATURES_LIST])
)
features_to_align[MetaData.EDGES][edge_name][MetaData.FEATURES_DATA] = features_data
features_to_align[MetaData.EDGES][edge_name][MetaData.CATEGORICAL_COLUMNS] = cat_cols
else:
self.tabular_generators[MetaData.EDGES][edge_name] = self._fit_tabular_generators(
edge_type[MetaData.TABULAR_GENERATORS], edge_type[MetaData.FEATURES], MetaData.EDGES
)
self.timer.end_counter(f'fit_edges_tabular_{edge_name}',
f'EDGE {edge_name} TABULAR FIT TOOK')
if MetaData.ALIGNERS in self.configuration:
self.aligners = self._fit_aligners(self.configuration[MetaData.ALIGNERS],
graphs_to_process,
features_to_align)
self.timer.end_counter('fit_edges', 'FIT EDGES TOOK')
self.timer.end_counter('fit', 'FIT TOOK')
def _generate_tabular_data(self, tabular_generators, num_samples, features_path, name):
merge_data = features_path.endswith('.csv') or features_path.endswith('.parquet')
if self.aligners:
assert merge_data
generated_dfs = []
for tab_gen_id, (tab_gen, gen_info) in enumerate(tabular_generators):
use_memmap = False
if merge_data:
save_path = os.path.join(self.save_path, 'temp_tab_gen_dir')
fname = f"{name}_{tab_gen_id}" if len(tabular_generators) > 1 else name
else:
save_path = os.path.join(self.save_path, features_path)
fname = 'chunk'
os.makedirs(save_path, exist_ok=True)
if gen_info['feature_file'] and gen_info['feature_file'].endswith('.npy') and tab_gen.supports_memmap:
use_memmap = True
fname = gen_info['feature_file']
feature_files = tabular_chunk_sample_generation(
tab_gen,
n_samples=num_samples,
save_path=save_path,
fname=fname,
num_workers=self.num_workers,
use_memmap=use_memmap,
verbose=self.verbose
)
if merge_data:
generated_df = merge_dataframe_files(feature_files, format='parquet')
generated_dfs.append(generated_df)
shutil.rmtree(save_path)
if merge_data:
generated_dfs = pd.concat(generated_dfs, axis=1)
dump_dataframe(generated_dfs, os.path.join(self.save_path, features_path), format=None)
gc.collect()
def generate(
self,
return_data=False,
**kwargs,
):
""" Generates graph
Args:
            return_data (bool): if True, loads the generated data into the output configuration
"""
node_type_to_node_counts = {node_type[MetaData.NAME]: node_type[MetaData.COUNT]
for node_type in self.configuration[MetaData.NODES]}
edge_type_to_edge_info = {edge_type[MetaData.NAME]: edge_type
for edge_type in self.configuration[MetaData.EDGES]}
output_config = self.configuration.copy()
edge_type_name_to_idx = {edge_info[MetaData.NAME]: idx
for idx, edge_info in enumerate(output_config[MetaData.EDGES])}
node_type_name_to_idx = {node_info[MetaData.NAME]: idx
for idx, node_info in enumerate(output_config[MetaData.NODES])}
self.timer.start_counter("gen_s")
for edge_type_name, (structure_generator, gen_info) in self.structure_generators.items():
self.timer.start_counter(f'gen_edges_struct_{edge_type_name}')
edge_info = edge_type_to_edge_info[edge_type_name]
generated_graph_path = ensure_path(os.path.join(self.save_path, edge_info[MetaData.STRUCTURE_PATH]))
merge_data = generated_graph_path.endswith('.csv') or \
generated_graph_path.endswith('.parquet')
use_memmap = generated_graph_path.endswith('.npy')
if not merge_data and not use_memmap:
os.makedirs(generated_graph_path, exist_ok=True)
if gen_info['is_bipartite']:
num_nodes_src_set = node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]] \
if node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]] > -1 \
else gen_info['num_nodes_src_set']
num_nodes_dst_set = node_type_to_node_counts[edge_info[MetaData.DST_NODE_TYPE]] \
if node_type_to_node_counts[edge_info[MetaData.DST_NODE_TYPE]] > -1 \
else gen_info['num_nodes_dst_set']
graph, src_nodes, dst_nodes = structure_generator.generate(
num_edges_dst_src=gen_info['num_edges'],
num_edges_src_dst=gen_info['num_edges'],
num_nodes_src_set=num_nodes_src_set,
num_nodes_dst_set=num_nodes_dst_set,
is_directed=gen_info['is_directed'],
noise=gen_info.get('noise', 0.5),
return_node_ids=True,
apply_edge_mirroring=False,
transform_graph=False,
save_path=None if merge_data else generated_graph_path,
)
node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]] = max(
node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]],
src_nodes.max() + 1
)
node_type_to_node_counts[edge_info[MetaData.DST_NODE_TYPE]] = max(
node_type_to_node_counts[edge_info[MetaData.DST_NODE_TYPE]],
dst_nodes.max() + 1
)
else:
num_nodes = node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]] \
if node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]] > -1 \
else gen_info['num_nodes']
graph, node_ids = structure_generator.generate(
num_nodes=num_nodes,
num_edges=gen_info['num_edges'],
is_directed=gen_info['is_directed'],
has_self_loop=gen_info.get('has_self_loop', False),
noise=gen_info.get('noise', 0.5),
return_node_ids=True,
save_path=None if merge_data else generated_graph_path
)
node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]] = max(
node_type_to_node_counts[edge_info[MetaData.SRC_NODE_TYPE]],
node_ids.max() + 1
)
if merge_data or not self.gpu:
dump_generated_graph(generated_graph_path, graph)
output_config[MetaData.EDGES][edge_type_name_to_idx[edge_type_name]][MetaData.COUNT] = \
len(graph) if merge_data or use_memmap else int(graph)
del graph
gc.collect()
self.timer.end_counter(f'gen_edges_struct_{edge_type_name}',
f'EDGE {edge_type_name} STRUCT GEN TOOK')
self.timer.end_counter("gen_s", "GEN STRUCT TOOK")
for node_type_name, counts in node_type_to_node_counts.items():
output_config[MetaData.NODES][node_type_name_to_idx[node_type_name]][MetaData.COUNT] = int(counts)
self.timer.start_counter("gen_t_nodes")
for node_type_name, tabular_generators in self.tabular_generators[MetaData.NODES].items():
num_nodes = node_type_to_node_counts[node_type_name]
features_path = output_config[MetaData.NODES][node_type_name_to_idx[node_type_name]][MetaData.FEATURES_PATH]
self._generate_tabular_data(tabular_generators, num_nodes, features_path, node_type_name)
self.timer.end_counter("gen_t_nodes", "GEN TABULAR NODE FEATURES TOOK")
self.timer.start_counter("gen_t_edges")
for edge_type_name, tabular_generators in self.tabular_generators[MetaData.EDGES].items():
num_edges = output_config[MetaData.EDGES][edge_type_name_to_idx[edge_type_name]][MetaData.COUNT]
features_path = output_config[MetaData.EDGES][edge_type_name_to_idx[edge_type_name]][MetaData.FEATURES_PATH]
self._generate_tabular_data(tabular_generators, num_edges, features_path, edge_type_name)
self.timer.end_counter("gen_t_edges", "GEN TABULAR EDGE FEATURES TOOK")
self.timer.start_counter("gen_alignment")
if self.aligners:
for aligner, graphs_info in self.aligners:
graphs_data = {}
for graph_name, graph_info in graphs_info.items():
graphs_data[graph_name] = graph_info.copy()
if graph_info[MetaData.SRC_NODE_TYPE] != graph_info[MetaData.DST_NODE_TYPE]:
graphs_data[graph_name]['src_size'] = \
output_config[MetaData.NODES][node_type_name_to_idx[graph_info[MetaData.SRC_NODE_TYPE]]][
MetaData.COUNT]
graphs_data[graph_name][MetaData.STRUCTURE_DATA] = load_graph(os.path.join(
self.save_path,
output_config[MetaData.EDGES][edge_type_name_to_idx[graph_name]][MetaData.STRUCTURE_PATH]
))
node_features_data = {
node_name: load_dataframe(os.path.join(
self.save_path,
output_config[MetaData.NODES][node_type_name_to_idx[node_name]][MetaData.FEATURES_PATH]),
feature_info=output_config[MetaData.NODES][node_type_name_to_idx[node_name]][MetaData.FEATURES]
)
for node_name in aligner.features_to_correlate_node
}
edge_features_data = {
edge_name: load_dataframe(os.path.join(
self.save_path,
output_config[MetaData.EDGES][edge_type_name_to_idx[edge_name]][MetaData.FEATURES_PATH]),
feature_info=output_config[MetaData.EDGES][edge_type_name_to_idx[edge_name]][MetaData.FEATURES]
)
for edge_name in aligner.features_to_correlate_edge
}
aligned_data = aligner.align(
graphs_data,
node_features_data,
edge_features_data,
)
for node_name, tab_data in aligned_data[MetaData.NODES].items():
dump_dataframe(tab_data, os.path.join(
self.save_path,
output_config[MetaData.NODES][node_type_name_to_idx[node_name]][MetaData.FEATURES_PATH]
), format=None
)
for edge_name, tab_data in aligned_data[MetaData.EDGES].items():
dump_dataframe(tab_data, os.path.join(
self.save_path,
output_config[MetaData.EDGES][edge_type_name_to_idx[edge_name]][MetaData.FEATURES_PATH]
), format=None
)
self.timer.end_counter("gen_alignment", "GEN ALIGNMENT TAKE")
with open(os.path.join(self.save_path, 'graph_metadata.json'), 'w') as f:
json.dump(output_config, f, indent=4)
output_config[MetaData.PATH] = self.save_path
if return_data:
for node_info in output_config[MetaData.NODES]:
if node_info[MetaData.FEATURES_PATH]:
node_info[MetaData.FEATURES_DATA] = load_dataframe(os.path.join(
self.save_path, node_info[MetaData.FEATURES_PATH]
))
for edge_info in output_config[MetaData.EDGES]:
if edge_info[MetaData.FEATURES_PATH]:
edge_info[MetaData.FEATURES_DATA] = load_dataframe(os.path.join(
self.save_path, edge_info[MetaData.FEATURES_PATH]
))
if edge_info[MetaData.STRUCTURE_PATH]:
edge_info[MetaData.STRUCTURE_DATA] = load_graph(os.path.join(
self.save_path, edge_info[MetaData.STRUCTURE_PATH],
))
return output_config
return output_config
def save(self, path):
""" saves the synthesizer to disk
Args:
path (str): The path to save the synthesizer to
"""
meta_data = {
"configuration": self.configuration.copy(),
"timer_path": self.timer.path,
"num_workers": self.num_workers,
"save_path": self.save_path,
"gpu": self.gpu,
"verbose": self.verbose,
}
if not os.path.exists(path):
os.makedirs(path)
if self.structure_generators:
meta_data['struct_gens'] = {}
for edge_name, (struct_gen, gen_info) in self.structure_generators.items():
struct_gen.save(os.path.join(path, f'struct_gen_{edge_name}'))
meta_data['struct_gens'][edge_name] = {
'gen_info': gen_info,
'object_path': get_object_path(struct_gen)
}
if self.tabular_generators:
meta_data['tab_gens'] = {}
for part, part_gens in self.tabular_generators.items():
meta_data['tab_gens'][part] = {}
for part_name, tab_gens in part_gens.items():
meta_data['tab_gens'][part][part_name] = []
for idx, (tab_gen, gen_info) in enumerate(tab_gens):
tab_gen.save(os.path.join(path, f'tab_gen_{part}_{part_name}_{idx}'))
meta_data['tab_gens'][part][part_name].append({
'gen_info': gen_info,
'object_path': get_object_path(tab_gen)
})
if self.aligners:
meta_data['aligners'] = []
for idx, (aligner, graphs_info) in enumerate(self.aligners):
aligner.save(os.path.join(path, f'aligner_{idx}'))
meta_data['aligners'].append(
{
'object_path': get_object_path(aligner),
'graphs_info': graphs_info,
}
)
with open(os.path.join(path, "synthesizer_metadata.json"), "w") as fp:
json.dump(meta_data, fp, indent=4)
@classmethod
def load(cls, path):
""" load up a saved synthesizer object from disk.
Args:
path (str): The path to load the synthesizer from
"""
with open(os.path.join(path, "synthesizer_metadata.json"), 'r') as f:
meta_data = json.load(f)
struct_gens = meta_data.pop('struct_gens', {})
tab_gens = meta_data.pop('tab_gens', {})
aligners = meta_data.pop('aligners', {})
instance = cls(**meta_data)
if struct_gens:
instance.structure_generators = {
edge_name: (
dynamic_import(data['object_path']).load(
os.path.join(path, f'struct_gen_{edge_name}')
),
data['gen_info'],
)
for edge_name, data in struct_gens.items()
}
if tab_gens:
instance.tabular_generators = {
part: {
part_name: [
(
dynamic_import(data['object_path']).load(
os.path.join(path, f'tab_gen_{part}_{part_name}_{idx}')
),
data['gen_info'],
)
for idx, data in enumerate(part_gens)
]
for part_name, part_gens in part_data.items()
}
for part, part_data in tab_gens.items()
}
if aligners:
instance.aligners = [
(
dynamic_import(data['object_path']).load(
os.path.join(path, f'aligner_{idx}')
),
data['graphs_info'],
)
for idx, data in enumerate(aligners)
]
return instance
|
PyTorch/LanguageModeling/BERT/lamb_amp_opt/csrc | csrc | multi_tensor_lamb_out | #include <ATen/ATen.h>
#include <ATen/AccumulateType.h>
#include <ATen/cuda/CUDAContext.h>
#include <ATen/cuda/Exceptions.h>
// Another possibility:
// #include <torch/all.h>
#include <assert.h>
#include "type_shim.h"
#include "multi_tensor_apply.cuh"
#define BLOCK_SIZE 512
#define ILP 4
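// BLOCK_SIZE is the thread-block size handed to multi_tensor_apply below; ILP is the number of
// elements each thread processes per iteration (4-wide vectorized loads/stores in the aligned path).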
std::tuple<at::Tensor, at::Tensor> multi_tensor_l2norm_cuda(
int chunk_size,
at::Tensor noop_flag,
std::vector<std::vector<at::Tensor>> tensor_lists,
at::optional<bool> per_tensor_python,
at::Tensor found_inf,
at::Tensor inv_scale);
template<typename T>
__device__ __forceinline__ bool is_aligned(T* p){
return ((uint64_t)p) % (ILP*sizeof(T)) == 0;
}
template<typename T1, typename T2>
__device__ __forceinline__ void load_store_with_cast(T1* dst, T2* src, int dst_offset, int src_offset) {
for (size_t i = 0; i < ILP; ++i) {
dst[dst_offset + i] = static_cast<T1>(src[src_offset + i]);
}
}
template<typename T>
__device__ __forceinline__ void load_store(T* dst, T* src, int dst_offset, int src_offset){
typedef typename std::aligned_storage<ILP*sizeof(T), ILP*alignof(T)>::type LT;
((LT*)dst)[dst_offset] = ((LT*)src)[src_offset];
}
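// load_store moves ILP contiguous elements as a single aligned, vector-width transaction (via
// aligned_storage); load_store_with_cast is the element-wise fallback used when source and
// destination element types differ and a static_cast is needed.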
typedef enum{
MOMENT_MODE_0 =0, // L2 regularization mode
MOMENT_MODE_1 =1 // Decoupled weight decay mode
} adamMode_t;
using MATH_T = float;
template<typename grad_t, typename param_t>
struct LAMBStage1Functor
{
__device__ __forceinline__ void operator()(
int chunk_size,
volatile int* noop_gmem,
TensorListMetadata<4>& tl,
const float beta1,
const float beta2,
const float beta3,
const float beta1_correction,
const float beta2_correction,
const float epsilon,
adamMode_t mode,
const float decay,
const float* global_grad_norm,
const float max_global_grad_norm,
const float* found_inf,
const float* inv_scale)
{
if (*found_inf) {
return;
}
int tensor_loc = tl.block_to_tensor[blockIdx.x];
int chunk_idx = tl.block_to_chunk[blockIdx.x];
int n = tl.sizes[tensor_loc];
float clipped_global_grad_norm = (*global_grad_norm) > max_global_grad_norm ? (*global_grad_norm) / max_global_grad_norm : 1.0f;
grad_t* g = (grad_t*)tl.addresses[0][tensor_loc];
g += chunk_idx*chunk_size;
param_t* p = (param_t*)tl.addresses[1][tensor_loc];
p += chunk_idx*chunk_size;
param_t* m = (param_t*)tl.addresses[2][tensor_loc];
m += chunk_idx*chunk_size;
param_t* v = (param_t*)tl.addresses[3][tensor_loc];
v += chunk_idx*chunk_size;
n -= chunk_idx*chunk_size;
MATH_T r_g[ILP];
MATH_T r_p[ILP];
MATH_T r_m[ILP];
MATH_T r_v[ILP];
// to make things simple, we put aligned case in a different code path
if(n % ILP == 0 &&
chunk_size % ILP == 0 &&
is_aligned(g) &&
is_aligned(p) &&
is_aligned(m) &&
is_aligned(v))
{
grad_t l_g[ILP];
param_t l_p[ILP];
param_t l_m[ILP];
param_t l_v[ILP];
for(int i_start = threadIdx.x; i_start*ILP < n && i_start*ILP < chunk_size; i_start += blockDim.x)
{
// load
load_store(l_g, g, 0, i_start);
if (decay != 0)
load_store(l_p, p, 0, i_start);
load_store(l_m, m, 0, i_start);
load_store(l_v, v, 0, i_start);
// unpack
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
r_g[ii] = l_g[ii] * (*inv_scale);
if (decay == 0) {
r_p[ii] = MATH_T(0);
}
else {
r_p[ii] = l_p[ii];
}
r_m[ii] = l_m[ii];
r_v[ii] = l_v[ii];
}
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
if (mode == MOMENT_MODE_0) {
MATH_T scaled_grad = r_g[ii] / clipped_global_grad_norm;
// L2 on scaled grad
scaled_grad = scaled_grad + decay*r_p[ii];
r_m[ii] = r_m[ii] * beta1 + beta3 * scaled_grad;
r_v[ii] = r_v[ii] * beta2 + (1-beta2) * scaled_grad * scaled_grad;
MATH_T next_m_unbiased = r_m[ii] / beta1_correction;
MATH_T next_v_unbiased = r_v[ii] / beta2_correction;
MATH_T denom = sqrtf(next_v_unbiased) + epsilon;
r_p[ii] = next_m_unbiased / denom;
}
else {
MATH_T scaled_grad = r_g[ii] / clipped_global_grad_norm;
r_m[ii] = r_m[ii] * beta1 + beta3 * scaled_grad;
r_v[ii] = r_v[ii] * beta2 + (1-beta2) * scaled_grad * scaled_grad;
MATH_T next_m_unbiased = r_m[ii] / beta1_correction;
MATH_T next_v_unbiased = r_v[ii] / beta2_correction;
MATH_T denom = sqrtf(next_v_unbiased) + epsilon;
r_p[ii] = (next_m_unbiased/denom) + (decay*r_p[ii]);
}
}
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
l_p[ii] = r_p[ii];
l_m[ii] = r_m[ii];
l_v[ii] = r_v[ii];
}
// store
load_store_with_cast<grad_t, MATH_T>(g, l_p, i_start, 0);
load_store(m, l_m, i_start, 0);
load_store(v, l_v, i_start, 0);
}
}
else
{
// see note in multi_tensor_scale_kernel.cu
for(int i_start = 0;
i_start < n && i_start < chunk_size;
i_start += blockDim.x*ILP)
{
MATH_T r_g[ILP];
MATH_T r_p[ILP];
MATH_T r_m[ILP];
MATH_T r_v[ILP];
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
int i = i_start + threadIdx.x + ii*blockDim.x;
if(i < n && i < chunk_size)
{
r_g[ii] = g[i];
// special ?optimization? for lamb stage 1
if (decay == 0) {
r_p[ii] = MATH_T(0);
}
else {
r_p[ii] = p[i];
}
r_m[ii] = m[i];
r_v[ii] = v[i];
} else {
r_g[ii] = MATH_T(0);
r_p[ii] = MATH_T(0);
r_m[ii] = MATH_T(0);
r_v[ii] = MATH_T(0);
}
}
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
if (mode == MOMENT_MODE_0) {
MATH_T scaled_grad = r_g[ii] / clipped_global_grad_norm;
// L2 on scaled grad
scaled_grad = scaled_grad + decay*r_p[ii];
r_m[ii] = r_m[ii] * beta1 + beta3 * scaled_grad;
r_v[ii] = r_v[ii] * beta2 + (1-beta2) * scaled_grad * scaled_grad;
MATH_T next_m_unbiased = r_m[ii] / beta1_correction;
MATH_T next_v_unbiased = r_v[ii] / beta2_correction;
MATH_T denom = sqrtf(next_v_unbiased) + epsilon;
r_p[ii] = next_m_unbiased / denom;
}
else {
MATH_T scaled_grad = r_g[ii] / clipped_global_grad_norm;
r_m[ii] = r_m[ii] * beta1 + beta3 * scaled_grad;
r_v[ii] = r_v[ii] * beta2 + (1-beta2) * scaled_grad * scaled_grad;
MATH_T next_m_unbiased = r_m[ii] / beta1_correction;
MATH_T next_v_unbiased = r_v[ii] / beta2_correction;
MATH_T denom = sqrtf(next_v_unbiased) + epsilon;
r_p[ii] = (next_m_unbiased/denom) + (decay*r_p[ii]);
}
}
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
int i = i_start + threadIdx.x + ii*blockDim.x;
if(i < n && i < chunk_size)
{
g[i] = r_p[ii];
m[i] = r_m[ii];
v[i] = r_v[ii];
}
}
}
}
}
};
// Step 2 reads in 'update' value and per-tensor param_norm and update_norm.
// It computes new parameter value.
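// Sketch of the update implemented by the two stages (derived from the code below):
//   stage 1 (result stored into g):  update = m_hat / (sqrt(v_hat) + eps)  [+ decay * p, mode-dependent]
//   trust ratio (stage 2):           ratio  = lr * ||p|| / ||update||      (when decay != 0 or use_nvlamb)
//   stage 2:                         p_new  = p - ratio * update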
template<typename T, typename master_param_t>
struct LAMBStage2Functor
{
__device__ __forceinline__ void operator()(
int chunk_size,
volatile int* noop_gmem,
TensorListMetadata<3>& tl,
const float* per_tensor_param_norm,
const float* per_tensor_update_norm,
const float learning_rate,
const float decay,
bool use_nvlamb,
float* found_inf,
float* inv_scale)
{
// I'd like this kernel to propagate infs/nans.
// if(*noop_gmem == 1)
// return;
if (*found_inf) {
return;
}
int tensor_loc = tl.block_to_tensor[blockIdx.x];
int tensor_num = tl.start_tensor_this_launch + tensor_loc;
int chunk_idx = tl.block_to_chunk[blockIdx.x];
int n = tl.sizes[tensor_loc];
MATH_T ratio = learning_rate;
// nvlamb: apply adaptive learning rate to all parameters
// otherwise, only apply to those with non-zero weight decay
if (use_nvlamb || (decay != 0.0))
{
float param_norm = per_tensor_param_norm[tensor_num];
float update_norm = per_tensor_update_norm[tensor_num];
ratio = (update_norm != 0.0f && param_norm != 0.0f) ? learning_rate * (param_norm / update_norm) : learning_rate;
}
T* update = (T*)tl.addresses[0][tensor_loc];
update += chunk_idx*chunk_size;
master_param_t* master_p = (master_param_t*)tl.addresses[1][tensor_loc];
master_p += chunk_idx*chunk_size;
T* p = (T*)tl.addresses[2][tensor_loc];
p += chunk_idx*chunk_size;
n -= chunk_idx*chunk_size;
// to make things simple, we put aligned case in a different code path
if(n % ILP == 0 &&
chunk_size % ILP == 0 &&
is_aligned(p) &&
is_aligned(update))
{
T r_p[ILP];
T r_update[ILP];
master_param_t r_master_p[ILP];
for(int i_start = threadIdx.x; i_start*ILP < n && i_start*ILP < chunk_size; i_start += blockDim.x)
{
// load
load_store(r_p, p, 0, i_start);
load_store(r_update, update, 0, i_start);
load_store(r_master_p, master_p, 0, i_start);
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
r_master_p[ii] = static_cast<MATH_T>(r_p[ii]) - (ratio * static_cast<MATH_T>(r_update[ii]));
r_p[ii] = static_cast<T>(r_master_p[ii]);
}
load_store(p, r_p, i_start, 0);
load_store(master_p, r_master_p, i_start, 0);
}
}
else
{
for(int i_start = 0;
i_start < n && i_start < chunk_size;
i_start += blockDim.x*ILP)
{
MATH_T r_p[ILP];
MATH_T r_update[ILP];
MATH_T r_master_p[ILP];
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
int i = i_start + threadIdx.x + ii*blockDim.x;
if(i < n && i < chunk_size)
{
r_p[ii] = p[i];
r_update[ii] = update[i];
r_master_p[ii] = master_p[i];
}
}
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
r_master_p[ii] = r_master_p[ii] - (ratio * r_update[ii]);
r_p[ii] = r_master_p[ii];
}
#pragma unroll
for(int ii = 0; ii < ILP; ii++)
{
int i = i_start + threadIdx.x + ii*blockDim.x;
if(i < n && i < chunk_size)
{
master_p[i] = r_master_p[ii];
p[i] = r_p[ii];
}
}
}
}
}
};
void multi_tensor_lamb_out_cuda(
int chunk_size,
at::Tensor noop_flag,
std::vector<std::vector<at::Tensor>> tensor_lists,
const float lr,
const float beta1,
const float beta2,
const float epsilon,
const int step,
const int bias_correction,
const float weight_decay,
const int grad_averaging,
const int mode,
at::Tensor global_grad_norm,
const float max_grad_norm,
at::optional<bool> use_nvlamb_python,
at::Tensor found_inf,
at::Tensor inv_scale)
{
assert(tensor_lists.size() == 5);
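  // Expected tensor_lists layout, as consumed by the two stages below:
  //   [0] grads (overwritten in place with the LAMB update), [1] fp32 master params,
  //   [2] exp_avg (m), [3] exp_avg_sq (v), [4] model params updated in stage 2.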
using namespace at;
  // Master weights and 32-bit momentum (potentially changing) are not handled here,
  // so we assume all tensors are of the same type.
bool use_nvlamb = use_nvlamb_python.has_value() ? use_nvlamb_python.value() : false;
// Handle bias correction mode
float bias_correction1 = 1.0f, bias_correction2 = 1.0f;
if (bias_correction == 1) {
bias_correction1 = 1 - std::pow(beta1, step);
bias_correction2 = 1 - std::pow(beta2, step);
}
// Handle grad averaging mode
float beta3 = 1.0f;
if (grad_averaging == 1) beta3 = 1 - beta1;
std::vector<std::vector<at::Tensor>> stage1_tensor_lists{
tensor_lists[0],
tensor_lists[1],
tensor_lists[2],
tensor_lists[3],
};
std::vector<std::vector<at::Tensor>> grad_list(tensor_lists.begin(), tensor_lists.begin()+1);
std::vector<std::vector<at::Tensor>> param_list(tensor_lists.begin()+1, tensor_lists.begin()+2);
// Compute per tensor param norm
auto param_norm_tuple = multi_tensor_l2norm_cuda(chunk_size, noop_flag, param_list, true, found_inf, inv_scale);
  // We now modify grad in-place to store the update before computing its norm.
  // Generally this is not an issue, since grad is routinely modified inside the step() method.
  // We could also grab a list of empty tensors to avoid this, but that would cost extra space/CPU code.
DISPATCH_FLOAT_AND_HALF(tensor_lists[0][0].scalar_type(), 0, "lamb_stage_1",
multi_tensor_apply<4>(
BLOCK_SIZE,
chunk_size,
noop_flag,
stage1_tensor_lists,
LAMBStage1Functor<scalar_t_0, float>(),
beta1,
beta2,
beta3, // 1-beta1 or 1 depends on averaging mode
bias_correction1,
bias_correction2,
epsilon,
(adamMode_t) mode,
weight_decay,
global_grad_norm.data_ptr<float>(),
max_grad_norm,
found_inf.data_ptr<float>(),
inv_scale.data_ptr<float>()); )
// Compute update norms
auto update_norm_tuple = multi_tensor_l2norm_cuda(chunk_size, noop_flag, grad_list, true, found_inf, inv_scale);
std::vector<std::vector<at::Tensor>> grad_param_list{ tensor_lists[0], tensor_lists[1], tensor_lists[4] };
DISPATCH_FLOAT_AND_HALF(tensor_lists[0][0].scalar_type(), 0, "lamb_stage_2",
multi_tensor_apply<3>(
BLOCK_SIZE,
chunk_size,
noop_flag,
grad_param_list,
LAMBStage2Functor<scalar_t_0, float>(),
std::get<1>(param_norm_tuple).data_ptr<float>(),
std::get<1>(update_norm_tuple).data_ptr<float>(),
lr,
weight_decay,
use_nvlamb,
found_inf.data_ptr<float>(),
inv_scale.data_ptr<float>()); )
AT_CUDA_CHECK(cudaGetLastError());
}
|
TensorFlow/Classification/ConvNets/dataprep | dataprep | build_imagewoof_data | #!/usr/bin/python
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Converts ImageNet data to TFRecords file format with Example protos.
The raw ImageNet data set is expected to reside in JPEG files located in the
following directory structure.
data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
...
where 'n01440764' is the unique synset label associated with
these images.
The training data set consists of 1000 sub-directories (i.e. labels)
each containing 1200 JPEG images for a total of 1.2M JPEG images.
The evaluation data set consists of 1000 sub-directories (i.e. labels)
each containing 50 JPEG images for a total of 50K JPEG images.
This TensorFlow script converts the training and evaluation data into
a sharded data set consisting of 1024 and 128 TFRecord files, respectively.
train_directory/train-00000-of-01024
train_directory/train-00001-of-01024
...
train_directory/train-01023-of-01024
and
validation_directory/validation-00000-of-00128
validation_directory/validation-00001-of-00128
...
validation_directory/validation-00127-of-00128
Each validation TFRecord file contains ~390 records. Each training TFRecord
file contains ~1250 records. Each record within the TFRecord file is a
serialized Example proto. The Example proto contains the following fields:
image/encoded: string containing JPEG encoded image in RGB colorspace
image/height: integer, image height in pixels
image/width: integer, image width in pixels
image/colorspace: string, specifying the colorspace, always 'RGB'
image/channels: integer, specifying the number of channels, always 3
image/format: string, specifying the format, always 'JPEG'
image/filename: string containing the basename of the image file
e.g. 'n01440764_10026.JPEG' or 'ILSVRC2012_val_00000293.JPEG'
image/class/label: integer specifying the index in a classification layer.
The label ranges from [1, 1000] where 0 is not used.
image/class/synset: string specifying the unique ID of the label,
e.g. 'n01440764'
image/class/text: string specifying the human-readable version of the label
e.g. 'red fox, Vulpes vulpes'
image/object/bbox/xmin: list of integers specifying the 0+ human annotated
bounding boxes
image/object/bbox/xmax: list of integers specifying the 0+ human annotated
bounding boxes
image/object/bbox/ymin: list of integers specifying the 0+ human annotated
bounding boxes
image/object/bbox/ymax: list of integers specifying the 0+ human annotated
bounding boxes
image/object/bbox/label: integer specifying the index in a classification
layer. The label ranges from [1, 1000] where 0 is not used. Note this is
always identical to the image label.
Note that the length of xmin is identical to the length of xmax, ymin and ymax
for each example.
Running this script using 16 threads may take around 2.5 hours on an HP Z420.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import os
import random
import sys
import threading
import numpy as np
import six
import tensorflow as tf
tf.app.flags.DEFINE_string('train_directory', '/tmp/',
'Training data directory')
tf.app.flags.DEFINE_string('validation_directory', '/tmp/',
'Validation data directory')
tf.app.flags.DEFINE_string('output_directory', '/tmp/',
'Output data directory')
tf.app.flags.DEFINE_integer('train_shards', 1024,
'Number of shards in training TFRecord files.')
tf.app.flags.DEFINE_integer('validation_shards', 128,
'Number of shards in validation TFRecord files.')
tf.app.flags.DEFINE_integer('num_threads', 8,
'Number of threads to preprocess the images.')
# The labels file contains the list of valid labels.
# Assumes that the file contains entries as such:
# n01440764
# n01443537
# n01484850
# where each line corresponds to a label expressed as a synset. We map
# each synset contained in the file to an integer (based on the alphabetical
# ordering). See below for details.
tf.app.flags.DEFINE_string('labels_file',
'imagenet_lsvrc_2015_synsets.txt',
'Labels file')
# This file contains the mapping from synset to human-readable label.
# Assumes each line of the file looks like:
#
# n02119247 black fox
# n02119359 silver fox
# n02119477 red fox, Vulpes fulva
#
# where each line corresponds to a unique mapping. Note that each line is
# formatted as <synset>\t<human readable label>.
tf.app.flags.DEFINE_string('imagenet_metadata_file',
'imagenet_metadata.txt',
'ImageNet metadata file')
FLAGS = tf.app.flags.FLAGS
def _int64_feature(value):
"""Wrapper for inserting int64 features into Example proto."""
if not isinstance(value, list):
value = [value]
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def _float_feature(value):
"""Wrapper for inserting float features into Example proto."""
if not isinstance(value, list):
value = [value]
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def _bytes_feature(value):
"""Wrapper for inserting bytes features into Example proto."""
if six.PY3 and isinstance(value, six.text_type):
value = six.binary_type(value, encoding='utf-8')
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _convert_to_example(filename, image_buffer, label, synset, human, bbox,
height, width):
"""Build an Example proto for an example.
Args:
filename: string, path to an image file, e.g., '/path/to/example.JPG'
image_buffer: string, JPEG encoding of RGB image
label: integer, identifier for the ground truth for the network
synset: string, unique WordNet ID specifying the label, e.g., 'n02323233'
human: string, human-readable label, e.g., 'red fox, Vulpes vulpes'
bbox: list of bounding boxes; each box is a list of integers
specifying [xmin, ymin, xmax, ymax]. All boxes are assumed to belong to
the same label as the image label.
height: integer, image height in pixels
width: integer, image width in pixels
Returns:
Example proto
"""
xmin = []
ymin = []
xmax = []
ymax = []
for b in bbox:
assert len(b) == 4
# pylint: disable=expression-not-assigned
[l.append(point) for l, point in zip([xmin, ymin, xmax, ymax], b)]
# pylint: enable=expression-not-assigned
colorspace = 'RGB'
channels = 3
image_format = 'JPEG'
example = tf.train.Example(features=tf.train.Features(feature={
'image/height': _int64_feature(height),
'image/width': _int64_feature(width),
'image/colorspace': _bytes_feature(colorspace),
'image/channels': _int64_feature(channels),
'image/class/label': _int64_feature(label),
'image/class/synset': _bytes_feature(synset),
'image/class/text': _bytes_feature(human),
'image/object/bbox/xmin': _float_feature(xmin),
'image/object/bbox/xmax': _float_feature(xmax),
'image/object/bbox/ymin': _float_feature(ymin),
'image/object/bbox/ymax': _float_feature(ymax),
'image/object/bbox/label': _int64_feature([label] * len(xmin)),
'image/format': _bytes_feature(image_format),
'image/filename': _bytes_feature(os.path.basename(filename)),
'image/encoded': _bytes_feature(image_buffer)}))
return example
class ImageCoder(object):
"""Helper class that provides TensorFlow image coding utilities."""
def __init__(self):
# Create a single Session to run all image coding calls.
self._sess = tf.Session()
# Initializes function that converts PNG to JPEG data.
self._png_data = tf.placeholder(dtype=tf.string)
image = tf.image.decode_png(self._png_data, channels=3)
self._png_to_jpeg = tf.image.encode_jpeg(image, format='rgb', quality=100)
# Initializes function that converts CMYK JPEG data to RGB JPEG data.
self._cmyk_data = tf.placeholder(dtype=tf.string)
image = tf.image.decode_jpeg(self._cmyk_data, channels=0)
self._cmyk_to_rgb = tf.image.encode_jpeg(image, format='rgb', quality=100)
# Initializes function that decodes RGB JPEG data.
self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)
def png_to_jpeg(self, image_data):
return self._sess.run(self._png_to_jpeg,
feed_dict={self._png_data: image_data})
def cmyk_to_rgb(self, image_data):
return self._sess.run(self._cmyk_to_rgb,
feed_dict={self._cmyk_data: image_data})
def decode_jpeg(self, image_data):
image = self._sess.run(self._decode_jpeg,
feed_dict={self._decode_jpeg_data: image_data})
assert len(image.shape) == 3
assert image.shape[2] == 3
return image
def _is_png(filename):
"""Determine if a file contains a PNG format image.
Args:
filename: string, path of the image file.
Returns:
boolean indicating if the image is a PNG.
"""
# File list from:
# https://groups.google.com/forum/embed/?place=forum/torch7#!topic/torch7/fOSTXHIESSU
return 'n02105855_2933.JPEG' in filename
def _is_cmyk(filename):
"""Determine if file contains a CMYK JPEG format image.
Args:
filename: string, path of the image file.
Returns:
boolean indicating if the image is a JPEG encoded with CMYK color space.
"""
# File list from:
# https://github.com/cytsai/ilsvrc-cmyk-image-list
blacklist = ['n01739381_1309.JPEG', 'n02077923_14822.JPEG',
'n02447366_23489.JPEG', 'n02492035_15739.JPEG',
'n02747177_10752.JPEG', 'n03018349_4028.JPEG',
'n03062245_4620.JPEG', 'n03347037_9675.JPEG',
'n03467068_12171.JPEG', 'n03529860_11437.JPEG',
'n03544143_17228.JPEG', 'n03633091_5218.JPEG',
'n03710637_5125.JPEG', 'n03961711_5286.JPEG',
'n04033995_2932.JPEG', 'n04258138_17003.JPEG',
'n04264628_27969.JPEG', 'n04336792_7448.JPEG',
'n04371774_5854.JPEG', 'n04596742_4225.JPEG',
'n07583066_647.JPEG', 'n13037406_4650.JPEG']
return filename.split('/')[-1] in blacklist
def _process_image(filename, coder):
"""Process a single image file.
Args:
filename: string, path to an image file e.g., '/path/to/example.JPG'.
coder: instance of ImageCoder to provide TensorFlow image coding utils.
Returns:
image_buffer: string, JPEG encoding of RGB image.
height: integer, image height in pixels.
width: integer, image width in pixels.
"""
# Read the image file.
with tf.gfile.FastGFile(filename, 'rb') as f:
image_data = f.read()
# Clean the dirty data.
if _is_png(filename):
# 1 image is a PNG.
print('Converting PNG to JPEG for %s' % filename)
image_data = coder.png_to_jpeg(image_data)
elif _is_cmyk(filename):
# 22 JPEG images are in CMYK colorspace.
print('Converting CMYK to RGB for %s' % filename)
image_data = coder.cmyk_to_rgb(image_data)
# Decode the RGB JPEG.
image = coder.decode_jpeg(image_data)
# Check that image converted to RGB
assert len(image.shape) == 3
height = image.shape[0]
width = image.shape[1]
assert image.shape[2] == 3
return image_data, height, width
def _process_image_files_batch(coder, thread_index, ranges, name, filenames,
synsets, labels, humans, bboxes, num_shards):
"""Processes and saves list of images as TFRecord in 1 thread.
Args:
coder: instance of ImageCoder to provide TensorFlow image coding utils.
    thread_index: integer, unique batch index to run; it lies within [0, len(ranges)).
    ranges: list of pairs of integers specifying the range of each batch to
      analyze in parallel.
name: string, unique identifier specifying the data set
filenames: list of strings; each string is a path to an image file
synsets: list of strings; each string is a unique WordNet ID
labels: list of integer; each integer identifies the ground truth
humans: list of strings; each string is a human-readable label
    bboxes: list of bounding boxes for each image. Note that each entry in this
      list might contain 0 or more entries, corresponding to the number of
      bounding box annotations for the image.
num_shards: integer number of shards for this data set.
"""
# Each thread produces N shards where N = int(num_shards / num_threads).
# For instance, if num_shards = 128, and the num_threads = 2, then the first
# thread would produce shards [0, 64).
num_threads = len(ranges)
assert not num_shards % num_threads
num_shards_per_batch = int(num_shards / num_threads)
shard_ranges = np.linspace(ranges[thread_index][0],
ranges[thread_index][1],
num_shards_per_batch + 1).astype(int)
num_files_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
counter = 0
for s in range(num_shards_per_batch):
# Generate a sharded version of the file name, e.g. 'train-00002-of-00010'
shard = thread_index * num_shards_per_batch + s
output_filename = '%s-%.5d-of-%.5d' % (name, shard, num_shards)
output_file = os.path.join(FLAGS.output_directory, output_filename)
writer = tf.python_io.TFRecordWriter(output_file)
shard_counter = 0
files_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
for i in files_in_shard:
filename = filenames[i]
label = labels[i]
synset = synsets[i]
human = humans[i]
#bbox = bboxes[i]
image_buffer, height, width = _process_image(filename, coder)
example = _convert_to_example(filename, image_buffer, label,
synset, human, [[0, 0, 1, 1]],
height, width)
writer.write(example.SerializeToString())
shard_counter += 1
counter += 1
if not counter % 1000:
print('%s [thread %d]: Processed %d of %d images in thread batch.' %
(datetime.now(), thread_index, counter, num_files_in_thread))
sys.stdout.flush()
writer.close()
print('%s [thread %d]: Wrote %d images to %s' %
(datetime.now(), thread_index, shard_counter, output_file))
sys.stdout.flush()
shard_counter = 0
print('%s [thread %d]: Wrote %d images to %d shards.' %
(datetime.now(), thread_index, counter, num_files_in_thread))
sys.stdout.flush()
def _process_image_files(name, filenames, synsets, labels, humans,
bboxes, num_shards):
"""Process and save list of images as TFRecord of Example protos.
Args:
name: string, unique identifier specifying the data set
filenames: list of strings; each string is a path to an image file
synsets: list of strings; each string is a unique WordNet ID
labels: list of integer; each integer identifies the ground truth
humans: list of strings; each string is a human-readable label
    bboxes: list of bounding boxes for each image. Note that each entry in this
      list might contain 0 or more entries, corresponding to the number of
      bounding box annotations for the image.
num_shards: integer number of shards for this data set.
"""
assert len(filenames) == len(synsets)
assert len(filenames) == len(labels)
assert len(filenames) == len(humans)
#assert len(filenames) == len(bboxes)
# Break all images into batches with a [ranges[i][0], ranges[i][1]].
spacing = np.linspace(0, len(filenames), FLAGS.num_threads + 1).astype(np.int)
ranges = []
threads = []
for i in range(len(spacing) - 1):
ranges.append([spacing[i], spacing[i + 1]])
# Launch a thread for each batch.
print('Launching %d threads for spacings: %s' % (FLAGS.num_threads, ranges))
sys.stdout.flush()
# Create a mechanism for monitoring when all threads are finished.
coord = tf.train.Coordinator()
# Create a generic TensorFlow-based utility for converting all image codings.
coder = ImageCoder()
threads = []
for thread_index in range(len(ranges)):
args = (coder, thread_index, ranges, name, filenames,
synsets, labels, humans, bboxes, num_shards)
t = threading.Thread(target=_process_image_files_batch, args=args)
t.start()
threads.append(t)
# Wait for all the threads to terminate.
coord.join(threads)
print('%s: Finished writing all %d images in data set.' %
(datetime.now(), len(filenames)))
sys.stdout.flush()
def _find_image_files(data_dir, labels_file):
"""Build a list of all images files and labels in the data set.
Args:
data_dir: string, path to the root directory of images.
Assumes that the ImageNet data set resides in JPEG files located in
the following directory structure.
data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
where 'n01440764' is the unique synset label associated with these images.
labels_file: string, path to the labels file.
The list of valid labels are held in this file. Assumes that the file
contains entries as such:
n01440764
n01443537
n01484850
where each line corresponds to a label expressed as a synset. We map
each synset contained in the file to an integer (based on the alphabetical
ordering) starting with the integer 1 corresponding to the synset
contained in the first line.
The reason we start the integer labels at 1 is to reserve label 0 as an
unused background class.
Returns:
filenames: list of strings; each string is a path to an image file.
synsets: list of strings; each string is a unique WordNet ID.
labels: list of integer; each integer identifies the ground truth.
"""
print('Determining list of input files and labels from %s.' % data_dir)
challenge_synsets = [l.strip() for l in
tf.gfile.FastGFile(labels_file, 'r').readlines()]
labels = []
filenames = []
synsets = []
# Leave label index 0 empty as a background class.
label_index = 1
# Construct the list of JPEG files and labels.
for synset in challenge_synsets:
jpeg_file_path = '%s/%s/*.JPEG' % (data_dir, synset)
matching_files = tf.gfile.Glob(jpeg_file_path)
labels.extend([label_index] * len(matching_files))
synsets.extend([synset] * len(matching_files))
filenames.extend(matching_files)
if not label_index % 100:
print('Finished finding files in %d of %d classes.' % (
label_index, len(challenge_synsets)))
label_index += 1
# Shuffle the ordering of all image files in order to guarantee
# random ordering of the images with respect to label in the
# saved TFRecord files. Make the randomization repeatable.
shuffled_index = list(range(len(filenames)))
random.seed(12345)
random.shuffle(shuffled_index)
filenames = [filenames[i] for i in shuffled_index]
synsets = [synsets[i] for i in shuffled_index]
labels = [labels[i] for i in shuffled_index]
print('Found %d JPEG files across %d labels inside %s.' %
(len(filenames), len(challenge_synsets), data_dir))
return filenames, synsets, labels
def _find_human_readable_labels(synsets, synset_to_human):
"""Build a list of human-readable labels.
Args:
synsets: list of strings; each string is a unique WordNet ID.
synset_to_human: dict of synset to human labels, e.g.,
'n02119022' --> 'red fox, Vulpes vulpes'
Returns:
List of human-readable strings corresponding to each synset.
"""
humans = []
for s in synsets:
assert s in synset_to_human, ('Failed to find: %s' % s)
humans.append(synset_to_human[s])
return humans
def _process_dataset(name, directory, num_shards, synset_to_human,
image_to_bboxes):
"""Process a complete data set and save it as a TFRecord.
Args:
name: string, unique identifier specifying the data set.
directory: string, root path to the data set.
num_shards: integer number of shards for this data set.
synset_to_human: dict of synset to human labels, e.g.,
'n02119022' --> 'red fox, Vulpes vulpes'
image_to_bboxes: dictionary mapping image file names to a list of
bounding boxes. This list contains 0+ bounding boxes.
"""
filenames, synsets, labels = _find_image_files(directory, FLAGS.labels_file)
humans = _find_human_readable_labels(synsets, synset_to_human)
#bboxes = _find_image_bounding_boxes(filenames, image_to_bboxes)
bboxes = []
_process_image_files(name, filenames, synsets, labels,
humans, bboxes, num_shards)
def _build_synset_lookup(imagenet_metadata_file):
"""Build lookup for synset to human-readable label.
Args:
imagenet_metadata_file: string, path to file containing mapping from
synset to human-readable label.
Assumes each line of the file looks like:
n02119247 black fox
n02119359 silver fox
n02119477 red fox, Vulpes fulva
where each line corresponds to a unique mapping. Note that each line is
formatted as <synset>\t<human readable label>.
Returns:
Dictionary of synset to human labels, such as:
'n02119022' --> 'red fox, Vulpes vulpes'
"""
lines = tf.gfile.FastGFile(imagenet_metadata_file, 'r').readlines()
synset_to_human = {}
for l in lines:
if l:
parts = l.strip().split('\t')
assert len(parts) == 2
synset = parts[0]
human = parts[1]
synset_to_human[synset] = human
return synset_to_human
def main(unused_argv):
assert not FLAGS.train_shards % FLAGS.num_threads, (
'Please make the FLAGS.num_threads commensurate with FLAGS.train_shards')
assert not FLAGS.validation_shards % FLAGS.num_threads, (
'Please make the FLAGS.num_threads commensurate with '
'FLAGS.validation_shards')
print('Saving results to %s' % FLAGS.output_directory)
# Build a map from synset to human-readable label.
synset_to_human = _build_synset_lookup(FLAGS.imagenet_metadata_file)
# Run it!
_process_dataset('validation', FLAGS.validation_directory,
FLAGS.validation_shards, synset_to_human, None)
_process_dataset('train', FLAGS.train_directory, FLAGS.train_shards,
synset_to_human, None)
if __name__ == '__main__':
tf.app.run()
|
PyTorch/Detection/Efficientdet | Efficientdet | README | # EfficientDet For PyTorch
This repository provides a script and recipe to train and infer on EfficientDet to achieve state-of-the-art accuracy and is tested and maintained by NVIDIA.
## Table Of Contents
* [Model overview](#model-overview)
* [Model Architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
* [Mixed precision training](#mixed-precision-training)
* [Enabling mixed precision](#enabling-mixed-precision)
* [Enabling TF32](#enabling-tf32)
* [Setup](#setup)
* [Requirements](#requirements)
* [Quick start guide](#quick-start-guide)
* [Advanced](#advanced)
* [Command-line arguments](#command-line-arguments)
* [Getting the data](#getting-the-data)
* [Dataset guidelines](#dataset-guidelines)
* [Training process](#training-process)
* [Performance](#performance)
* [Benchmarking](#benchmarking)
* [Training performance benchmark](#training-performance-benchmark)
* [Inference performance benchmark](#inference-performance-benchmark)
* [Results](#results)
* [Training accuracy results](#training-accuracy-results)
* [Training accuracy: NVIDIA DGX A100 (8x A100 80GB)](#training-accuracy-nvidia-dgx-a100-8x-a100-80gb)
* [Training accuracy: NVIDIA DGX-1 (8x V100 32GB)](#training-accuracy-nvidia-dgx-1-8x-v100-32gb)
* [Training accuracy: NVIDIA DGX-1 (32x V100 32GB)](#training-accuracy-nvidia-dgx-1-32x-v100-32gb)
* [Training loss curves](#training-loss-curves)
* [Training stability test](#training-stability-test)
* [Training performance results](#training-performance-results)
* [Training performance: NVIDIA DGX A100 (8x A100 80GB)](#training-performance-nvidia-dgx-a100-8x-a100-80gb)
* [Training performance: NVIDIA DGX-1 (8x V100 32GB)](#training-performance-nvidia-dgx-1-8x-v100-32gb)
* [Training performance: NVIDIA DGX-2 (16x V100 32GB)](#training-performance-nvidia-dgx-2-16x-v100-32gb)
* [Inference performance results](#inference-performance-results)
* [Inference performance: NVIDIA DGX A100 (1x A100 80GB)](#inference-performance-nvidia-dgx-a100-1x-a100-80gb)
* [Inference performance: NVIDIA DGX-1 (1x V100 32GB)](#inference-performance-nvidia-dgx-1-1x-v100-32gb)
* [Inference performance: NVIDIA DGX-2 (1x V100 32GB)](#inference-performance-nvidia-dgx-1-1x-v100-16gb)
* [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
EfficientDet is a convolution-based neural network for the task of object detection. This model is based on [EfficientDet: Scalable and Efficient Object Detection](https://arxiv.org/abs/1911.09070). NVIDIA's implementation of EfficientDet PyTorch is an optimized version of [TensorFlow Model Garden](https://github.com/tensorflow/models/tree/master/research/object_detection) implementation, leveraging mixed precision arithmetic on NVIDIA Volta, NVIDIA Turing, and the NVIDIA Ampere GPU architectures for faster training times while maintaining target accuracy.
The repository also contains scripts to launch training, benchmarking, and inference routines in a Docker container interactively.
The major differences between the official implementation of the paper and our version of EfficientDet are as follows:
- Mixed precision support with [PyTorch AMP](https://github.com/NVIDIA/apex).
- Multi-node training support.
- Custom fused CUDA kernels for faster computations.
- Lightweight logging using [dllogger](https://github.com/NVIDIA/dllogger)
- PyTorch multi-tensor ops for faster computation.
These techniques/optimizations improve model performance and reduce training time by a factor of 1.3x, allowing you to perform more efficient object detection with no additional effort.
Other publicly available implementations of EfficientDet include:
- [Yet-Another-EfficientDet-Pytorch](https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch)
- [rwightman](https://github.com/rwightman/efficientdet-pytorch)
### Model architecture
EfficientDet is a one-stage detector with the following architecture components:
- ImageNet-pretrained EfficientNet backbone
- Weighted bi-directional feature pyramid network (BiFPN)
- Bounding box and classification heads
- A compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time
### Default Configuration
The default configuration of this model can be found at `train.py`. The default hyper-parameters are as follows:
- General:
- Base Global Learning Rate set to 0.01
- Epochs set to 300
- Local train batch size - 32
- Local test batch size - 32
- Backbone:
- Backend network set to EfficientNet-B0
This repository implements multi-GPU training to support larger batches, along with mixed precision support. This implementation also includes the following optimizations:
- Custom CUDA kernels for Focal Loss and NMS.
- Custom optimized implementation of EMA (an exponential moving average of the model weights; a minimal sketch follows below).
The source files can be found under `effdet/csrc`.
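The exponential moving average (EMA) of model weights mentioned above keeps a shadow copy of the parameters that is blended with the live weights after every optimizer step. The snippet below is a minimal, illustrative sketch of the technique, not the repository's optimized implementation; the decay value mirrors the documented `--model-ema-decay` default.
```
import copy
import torch

class ModelEmaSketch:
    """Shadow copy of a model whose weights follow an exponential moving average."""
    def __init__(self, model, decay=0.9998):
        self.ema = copy.deepcopy(model).eval()      # shadow copy, never trained directly
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # ema = decay * ema + (1 - decay) * current (buffers omitted for brevity)
        for ema_p, p in zip(self.ema.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```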
### Feature support matrix
The model supports the following features.
| **Feature** | **EfficientDet** |
|:---------:|:----------:|
|PyTorch native AMP|Yes|
|PyTorch native DDP|Yes|
|Custom Fused CUDA kernels|Yes|
#### Features
[PyTorch native AMP](https://pytorch.org/docs/stable/amp.html) is part of PyTorch, which provides convenience methods for mixed precision.
[DDP](https://pytorch.org/tutorials/beginner/dist_overview.html) stands for DistributedDataParallel and is used for multi-GPU training.
### Mixed precision training
Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of [tensor cores](https://developer.nvidia.com/tensor-cores) in NVIDIA Volta, and following with both the NVIDIA Turing and NVIDIA Ampere Architectures, significant training speedups are observed by switching to mixed precision—up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
For information about:
- How to train using mixed precision, refer to the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) documentation.
- Techniques used for mixed precision training, refer to the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
- NVIDIA Apex tools for mixed precision training, refer to the [NVIDIA Apex: Tools for Easy Mixed-Precision Training in PyTorch](https://devblogs.nvidia.com/apex-pytorch-easy-mixed-precision-training/) blog.
#### Enabling mixed precision
In this repository, mixed precision training is enabled by the [PyTorch native AMP](https://pytorch.org/docs/stable/amp.html) library. PyTorch has an automatic mixed precision module that allows mixed precision to be enabled with minimal code changes.
Automatic mixed precision can be enabled with the following code changes:
```
# Create gradient scaler
scaler = torch.cuda.amp.GradScaler(enabled=args.amp)
# Wrap the forward pass and loss in torch.cuda.amp.autocast
with torch.cuda.amp.autocast(enabled=args.amp):
output = model(input, target)
loss = output['loss']
```
Here, `args.amp` is the flag that turns AMP on or off. All shell scripts expose an `--amp` argument to enable mixed precision training.
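For completeness, the backward pass uses the same gradient scaler. The snippet below is an illustrative sketch of one full training step under AMP, continuing the `scaler` and `loss` objects from the snippet above; `optimizer` is a placeholder for whichever optimizer is configured, and this is not the exact code in `train.py`.
```
# Illustrative AMP training step (sketch, not the exact code in train.py)
optimizer.zero_grad()
scaler.scale(loss).backward()   # scale the loss before backpropagation
scaler.step(optimizer)          # unscales gradients, then calls optimizer.step()
scaler.update()                 # adjusts the scale factor for the next iteration
```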
#### Enabling TF32
TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math, also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on NVIDIA Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models that require a high dynamic range for weights or activations.
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
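If needed, TF32 can also be toggled explicitly from PyTorch. The switches below are standard PyTorch flags shown only for illustration; they are not required by the scripts in this repository.
```
import torch

# TF32 is enabled by default on NVIDIA Ampere GPUs; set these to False to force FP32 math.
torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True         # TF32 for cuDNN convolutions
```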
## Setup
The following sections list the requirements in order to start training the EfficientDet model.
### Requirements
This repository contains `Dockerfile` which extends the PyTorch NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- [PyTorch 21.06-py3 NGC container](https://ngc.nvidia.com/registry/nvidia-pytorch)
- Supported GPUs:
- [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
- [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/)
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, refer to the
following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning
Documentation:
- [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
- [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/dgx/user-guide/index.html#accessing_registry)
- [Running PyTorch](https://docs.nvidia.com/deeplearning/dgx/pytorch-release-notes/running.html#running)
For those unable to use the [Pytorch](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) NGC container, to set up the required environment or create your own container, refer to the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html).
## Quick Start Guide
To train your model using mixed or TF32 precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the EfficientDet on the COCO 2017 dataset. For the specifics concerning training and inference, refer to the [Advanced](#advanced) section.
### 1. Clone the repository.
```
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples/PyTorch/Detection/EfficientDet
```
### 2. Download and preprocess the dataset.
This repository provides scripts to download and extract the COCO 2017 dataset. Data will be downloaded to the current working directory on the host and extracted to a user-defined directory.
To download, verify, and extract the COCO dataset, use the following scripts:
```
./download_dataset.sh <data/dir>
```
By default, the data is organized into the following structure:
```
<data/dir>
annotations/
instances_train2017.json
instances_val2017.json
train2017/
COCO_train2017_*.jpg
val2017/
COCO_val2017_*.jpg
```
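As a quick sanity check that the extracted annotations are readable, the standard `pycocotools` API can be used. This step is optional and illustrative only; the training pipeline loads the dataset internally.
```
# Optional sanity check of the extracted COCO annotations (illustrative only)
from pycocotools.coco import COCO

coco = COCO('<data/dir>/annotations/instances_val2017.json')
print('images    :', len(coco.getImgIds()))
print('categories:', len(coco.getCatIds()))
```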
### 3. Build the EfficientDet PyTorch NGC container.
```
bash scripts/docker/build.sh
```
### 4. Start an interactive session in the NGC container to run training/inference.
After you build the container image, you can start an interactive CLI session with
```
bash scripts/docker/launch.sh
```
The `launch.sh` script requires that the location of the dataset is specified in the script.
### 5. Start training.
```
bash ./scripts/D0/train_{AMP, FP32, TF32}_8x{V100-32G, A100-80G}.sh
```
The training scripts train an EfficientDet-D0 model and perform evaluation on the COCO 2017 dataset. By default, the training scripts run a standard configuration (DGX A100/DGX-1 V100, AMP/FP32/TF32, 300 epochs). Run one of the scripts in the `./scripts/D0` directory using `bash ./scripts/D0/train_{AMP, FP32, TF32}_8x{V100-32G, A100-80G}.sh`. Ensure COCO-2017 is mounted in `/workspace/object_detection/datasets/coco` and EfficientNet-B0 backbone weights are mounted in `/backbone_checkpoints`. The backbone checkpoint can be downloaded from [this](https://ngc.nvidia.com/catalog/models/nvidia:efficientdet_backbone_efficientnet_b0_pyt_amp_ckpt) location.
### 6. Start validation/evaluation.
To run validation/evaluation for a standard configuration (DGX A100/DGX-1 V100, AMP/TF32/FP32, EfficientDet-D0),
run one of the scripts in the `./scripts/D0` directory using `bash ./scripts/D0/validation_{AMP, FP32, TF32}_8x{A100-80G, V100-16G, V100-32G}.sh`.
Ensure COCO-2017 is mounted in `/workspace/object_detection/datasets/coco`.
(Optional) To evaluate a specific checkpoint, mount it in the `/checkpoints` location and, in the script, add the path to the checkpoint as `--checkpoint /checkpoints/<NAME OF CHECKPOINT>`.
### 7. Start inference/predictions.
Model predictions can be obtained on a test dataset and a model checkpoint by running the `scripts/D0/inference_{AMP, FP32, TF32}_{A100-80G, V100-32G}.sh` script. The script requires:
- the location of the checkpoint folder and dataset to be specified and present within/mounted to the container.
- number of GPUs to run inference on.
For example:
```
NUM_PROC=<number_of_processes> CKPT_PATH=<checkpoint_path> BATCH_SIZE=<batch_size> bash scripts/inference_{AMP, FP32, TF32}_{A100-80G, V100-32G}.sh
```
Model prediction files get saved in the `--results` path if provided; otherwise, they will be saved in the current working directory.
To perform just inference and skip computation of mAP scores, use the `--inference` flag.
## Advanced
The following sections provide greater details of the dataset, running training and inference, and the training results.
### Scripts and sample code
Descriptions of the key scripts and folders are provided below.
- effdet - Contains code to build individual components of the model such as backbone, FPN, RPN, classification and bbox heads, and so on.
- data - Contains code to build the data pipeline such as dataloader, transforms, dataset builder.
- download_dataset.sh - Launches download and processing of required datasets. `dtrx` package needs to be installed for this script to run without errors.
- scripts/ - Contains shell scripts to launch training and evaluation of the model and perform inferences.
- D0/train_{AMP, TF32, FP32}_8x{V100-32G, A100-80G}.sh - Launches model training
- D0/evaluation_{AMP, FP32, TF32}_8x{A100-80G, V100-16G, V100-32G}.sh - Performs inference and computes mAP of predictions.
- docker/ - Scripts to build the docker image and to start an interactive session.
- utils/
- Contains utility components like samplers, EMA, optimizers, schedulers, and so on.
- train.py - End-to-end script to load data, then build and train the model.
- validate.py - End-to-end script to load data and a checkpoint, perform inference, and compute the mAP score.
### Parameters
#### train.py script parameters
Important parameters for training are listed below with defaults.
### Command-line options
To display the full list of available options and their descriptions, use the -h or --help command-line option, for example:
- `data` - Path to coco dataset
- `model` - Name of the model to train (default: "efficientdet_d0")
- `lr` - Learning rate
- `epochs` - Maximum number of epochs to train for
- `warmup-epochs` - Epochs to warmup LR, if scheduler supports
- `batch-size` - Input batch size
`python train.py --help` will give all the command-line parameters specific to `train.py`:
```
--model MODEL         Name of the model to train (default: "countception")
--redundant-bias Override model config for redundant bias
--no-redundant-bias Override model config for redundant bias
--pretrained Start with the pretrained version of a specified network (if avail)
--pretrained-backbone-path PATH
Start from pre-trained backbone weights.
--initial-checkpoint PATH
Initialize model from this checkpoint (default: none)
--resume Resume full model and optimizer state from checkpoint (default: False)
--no-resume-opt Prevent resume of optimizer state when resuming model
--interpolation NAME Image resize interpolation type (overrides model)
--fill-color NAME Image augmentation fill (background) color ("mean" or int)
-b N, --batch-size N input batch size for training (default: 32)
-vb N, --validation-batch-size-multiplier N
ratio of validation batch size to training batch size (default: 1)
--input_size PCT Image size (default: None) if this is not set default model image size is taken
--drop PCT Dropout rate (default: 0.)
--clip-grad NORM Clip gradient norm (default: 10.0)
--opt OPTIMIZER       Optimizer (default: "momentum")
--opt-eps EPSILON Optimizer Epsilon (default: 1e-3)
--momentum M SGD momentum (default: 0.9)
--weight-decay WEIGHT_DECAY
weight decay (default: 0.00004)
--sched SCHEDULER     LR scheduler (default: "step")
--lr LR learning rate (default: 0.01)
--lr-noise pct, pct [pct, pct ...]
learning rate noise on/off epoch percentages
--lr-noise-pct PERCENT
learning rate noise limit percent (default: 0.67)
--lr-noise-std STDDEV
learning rate noise std-dev (default: 1.0)
--lr-cycle-mul MULT learning rate cycle len multiplier (default: 1.0)
--lr-cycle-limit N learning rate cycle limit
--warmup-lr LR warmup learning rate (default: 0.0001)
--min-lr LR lower lr bound for cyclic schedulers that hit 0 (1e-5)
--epochs N number of epochs to train (default: 2)
--start-epoch N manual epoch number (useful on restarts)
--decay-epochs N epoch interval to decay LR
--warmup-epochs N epochs to warmup LR, if scheduler supports
--cooldown-epochs N epochs to cooldown LR at min_lr, after cyclic schedule ends
--patience-epochs N   patience epochs for Plateau LR scheduler (default: 10)
--decay-rate RATE, --dr RATE
LR decay rate (default: 0.1)
--mixup MIXUP mixup alpha, mixup enabled if > 0. (default: 0.)
--mixup-off-epoch N turn off mixup after this epoch, disabled if 0 (default: 0)
--smoothing SMOOTHING
label smoothing (default: 0.0)
--train-interpolation TRAIN_INTERPOLATION
Training interpolation (random, bilinear, bicubic default: "random")
--sync-bn Enable NVIDIA Apex or Torch synchronized BatchNorm.
--dist-bn DIST_BN Distribute BatchNorm stats between nodes after each epoch ("broadcast", "reduce", or "")
--model-ema Enable tracking moving average of model weights
--model-ema-decay MODEL_EMA_DECAY
decay factor for model weights moving average (default: 0.9998)
--dist-group-size DIST_GROUP_SIZE
Group size for sync-bn
--seed S random seed (default: 42)
--log-interval N how many batches to wait before logging training status
--eval-after N Start evaluating after eval-after epochs
--benchmark Turn this on when measuring performance
--benchmark-steps N Run training for this number of steps for performance measurement
--dllogger-file PATH File name of dllogger json file (default: log.json, current dir)
--save-checkpoint-interval N
Save checkpoints after so many epochs
-j N, --workers N how many training processes to use (default: 1)
--amp use NVIDIA amp for mixed precision training
--no-pin-mem Disable pin CPU memory in DataLoader.
--no-prefetcher disable fast prefetcher
--output PATH path to the output folder (default: none, current dir)
--eval-metric EVAL_METRIC
Best metric (default: "map")
--local_rank LOCAL_RANK
--memory-format {nchw,nhwc}
memory layout, nchw or nhwc
--fused-focal-loss Use fused focal loss for better performance.
--waymo Train on Waymo dataset or COCO dataset. Default: False (COCO dataset)
--num_classes PCT Number of classes the model needs to be trained for (default: None)
--remove-weights [REMOVE_WEIGHTS [REMOVE_WEIGHTS ...]]
Remove these weights from the state dict before loading checkpoint (use case can be not loading heads)
--freeze-layers [FREEZE_LAYERS [FREEZE_LAYERS ...]]
Freeze these layers
--waymo-train-annotation WAYMO_TRAIN_ANNOTATION
Absolute Path to waymo training annotation (default: "None")
--waymo-val-annotation WAYMO_VAL_ANNOTATION
Absolute Path to waymo validation annotation (default: "None")
--waymo-train WAYMO_TRAIN
Path to waymo training relative to waymo data (default: "None")
--waymo-val WAYMO_VAL
Path to waymo validation relative to waymo data (default: "None")
```
### Getting the data
By default, the EfficientDet model is trained on the [COCO 2017](http://cocodataset.org/#download) dataset. This dataset comes with a training and validation set.
This repository contains the `./download_dataset.sh` script, which automatically downloads and preprocesses the training and validation sets.
#### Dataset guidelines
This repository also provides support for fine-tuning and evaluating on the Waymo dataset.
To run on the Waymo dataset, ensure the dataset is present/mounted in the Docker container and is in COCO format. To that end, this repository provides scripts to download, preprocess, and convert the Waymo dataset into COCO format, which is ingestible by EfficientDet.
- `waymo_tool/waymo_data_converter.py` - downloads and converts the data into COCO format
Since the original Waymo dataset is distributed as TFRecords, TensorFlow needs to be installed to convert it into COCO format.
### Training Process
Training is performed using the `train.py` script. The default parameters can be overridden by command-line arguments.
The training process can start from scratch or resume from a checkpoint.
By default, bash script `scripts/D0/train_{AMP, FP32, TF32}_8x{A100-80G, V100-32G}.sh` will start the training process from scratch with the following settings.
- Use 8 GPUs
- Saves checkpoints after every 10 epochs to `/workspace/output/` folder
- AMP or FP32 or TF32 based on the folder `scripts/D0/train_{AMP, FP32, TF32}_8x{A100-80G, V100-32G}.sh`
To resume from a checkpoint, include `--resume` in the command-line and place the checkpoint into `/workspace/output/`.
#### Multi-node
Multi-node runs can be launched on a Pyxis/enroot Slurm cluster (see [Requirements](#requirements)) with the `./scripts/D0/train_{AMP, FP32}_32xV100-32G.sub` script with the following command for a 4-node NVIDIA DGX V100 example:
```
sbatch -N 4 --ntasks-per-node=8 ./scripts/D0/train_{AMP, FP32}_32xV100-32G.sub
```
Note that the `./scripts/D0/train_{AMP, FP32}_32xV100-32G.sub` script is a starting point that has to be adapted to the environment. In particular, variables such as `--container-image` (the container image to train with) and `datadir` (the location of the COCO-2017 data) must be set accordingly. The backbone (EfficientNet) weights need to be placed in `/backbone_checkpoints`.
Refer to the files contents to view the full list of variables to adjust for your system.
## Performance
### Benchmarking
Benchmarking can be performed for both training and inference. Both scripts run the EfficientDet model. You can specify whether benchmarking is performed in AMP, TF32, or FP32 by specifying it as an argument to the benchmarking scripts.
#### Training performance benchmark
Training benchmarking can be performed by running the script:
```
scripts/D0/train-benchmark_{AMP, TF32, FP32}_{V100-32G, A100-80G}.sh
```
#### Inference performance benchmark
Inference benchmarking can be performed by running the script:
```
scripts/D0/inference_{AMP, FP32, TF32}_{A100-80G, V100-32G}.sh
```
### Results
The following sections provide details on how we achieved our performance and accuracy in training and inference.
#### Training Accuracy Results
##### Training accuracy: NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `scripts/D0/train_{AMP, TF32}_8xA100-80G.sh` training script in the 21.06-py3 NGC container on NVIDIA DGX A100 (8x A100 80GB) GPUs with no intermediate evaluation.
| GPUs | BBOX mAP - TF32 | BBOX mAP - FP16| Time to train - TF32 | Time to train - mixed precision | Time to train - speedup (TF32 to mixed precision)
| --| --| -- | -- | -- | --
| 8 | 0.3399 | 0.3407 | 8.57 | 6.5 | 1.318
##### Training accuracy: NVIDIA DGX-1 (8x V100 32GB)
Our results were obtained by running the `scripts/D0/train_{AMP, FP32}_8xV100-32G.sh` training script in the PyTorch 21.06-py3 NGC container on NVIDIA DGX-1 with 8x V100 32GB GPUs with no intermediate evaluation.
| GPUs | BBOX mAP - FP32| BBOX mAP - FP16| Time to train - FP32 | Time to train - mixed precision | Time to train - speedup (FP32 to mixed precision)
| --| -- | -- | -- | -- | --
| 8 | 0.3410 | 0.3413 | 16 | 10.5 | 1.52
##### Training accuracy: NVIDIA DGX-1 (32x V100 32GB)
Our results were obtained by running the `scripts/D0/train_{AMP, FP32}_32xV100-32G.sh` training script in the PyTorch 21.06-py3 NGC container on NVIDIA DGX-1 with 32x V100 32GB GPUs with no intermediate evaluation.
| GPUs | BBOX mAP - FP32| BBOX mAP - FP16| Time to train - FP32 | Time to train - mixed precision | Time to train - speedup (FP32 to mixed precision)
| --| -- | -- | -- | -- | --
| 32 | 0.3418 | 0.3373 | 6 | 4.95 | 1.22
##### Training accuracy on Waymo dataset: NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `scripts/waymo/train_waymo_AMP_8xA100-80G.sh` training script in the 21.06-py3 NGC container on the Waymo dataset on NVIDIA DGX A100 (8x A100 80GB) GPUs with no intermediate evaluation. These results were obtained by training the EfficientDet-D0 model with a frozen backbone.
| category | mAP | category | AP @ IoU 0.7 | category | AP @ IoU 0.5 | category | AP @ IoU 0.5 |
|:-----------|:-------|:-----------|:---------------|:-----------|:---------------|:-----------|:---------------|
| L2_ALL_NS | 50.377 | Vehicle | 50.271 | Pedestrian | 61.788 | Cyclist | 39.072 |
The following results were obtained by training the EfficientDet-D0 model without freezing any part of the architecture. This can be done by removing the `--freeze-layers` argument from the script.
| category | mAP | category | AP @ IoU 0.7 | category | AP @ IoU 0.5 | category | AP @ IoU 0.5 |
|:-----------|:-------|:-----------|:---------------|:-----------|:---------------|:-----------|:---------------|
| L2_ALL_NS | 51.249 | Vehicle | 51.091 | Pedestrian | 62.816 | Cyclist | 39.841 |
##### Training loss curves

Here, multihead loss is simply the weighted sum of losses on the classification head and the bounding box head.
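As an illustration, the plotted quantity can be expressed as a simple function of the two head losses. The weight below is a placeholder, not the exact value configured in this repository.
```
def multihead_loss(class_loss, box_loss, box_loss_weight=50.0):
    # Weighted sum of the classification-head and box-head losses;
    # the default weight is a placeholder, not the repository's exact setting.
    return class_loss + box_loss_weight * box_loss
```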
##### Training Stability Test
The following tables compare mAP scores across five different training runs with different seeds. The runs showcase consistent convergence on all five seeds with very little deviation.
| **Config** | **Seed 1** | **Seed 2** | **Seed 3** | **Seed 4** | **Seed 5** | **Mean** | **Standard Deviation** |
| --- | --- | ----- | ----- | --- | --- | ----- | ----- |
| 8 GPUs, final AP BBox | 0.3422 | 0.3379 | 0.3437 | 0.3424 | 0.3402 | 0.3412 | 0.002 |
#### Training Performance Results
##### Training performance: NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `scripts/D0/train_benchmark_{AMP, TF32}_8xA100-80G.sh` training script in the 21.06-py3 NGC container on NVIDIA DGX A100 (8x A100 80GB) GPUs. Performance numbers in images per second were averaged over an entire training epoch.
| GPUs | Throughput - TF32 | Throughput - mixed precision | Throughput speedup (TF32 - mixed precision) | Weak scaling - TF32 | Weak scaling - mixed precision
| --- | ----- | ----- | --- | --- | ----- |
| 1 | 170 | 255 | 1.5 | 1 | 1 |
| 4 | 616 | 866 | 1.4 | 3.62 | 3.39 |
| 8 | 1213 | 1835 | 1.5 | 7.05 | 7.05 |
##### Training performance: NVIDIA DGX-1 (8x V100 32GB)
Our results were obtained by running the `scripts/D0/train_benchmark_{AMP, FP32}_8xV100-32G.sh` training script in the 21.06-py3 NGC container on NVIDIA DGX-1 with (8x V100 32GB) GPUs. Performance numbers in images per second were averaged over an entire training epoch.
| GPUs | Throughput - FP32 | Throughput - mixed precision | Throughput speedup (FP32 - mixed precision) | Weak scaling - FP32 | Weak scaling - mixed precision |
| --- | ----- | ----- | --- | --- | ----- |
| 1 | 110 | 186 | 1.69 | 1 | 1 |
| 4 | 367 | 610 | 1.66 | 3.33 | 3.28 |
| 8 | 613 | 1040 | 1.69 | 5.57 | 5.59 |
To achieve similar results, follow the steps in the [Quick Start Guide](#quick-start-guide).
#### Inference performance results
##### Inference performance: NVIDIA DGX A100 (1x A100 80GB)
Our results were obtained by running the `scripts/inference_{AMP, TF32}_A100-80G.sh` inference script in the PyTorch 21.06-py3 NGC container on an NVIDIA DGX A100 (1x A100 80GB) GPU.
| GPUs | Batch size / GPU | Throughput - TF32 | Throughput - mixed precision | Throughput speedup (TF32 - mixed precision)
| --- | --- | ----- | ----- | ----- |
| 1 | 8 | 45.61 | 50.23 | 1.101 |
To achieve similar results, follow the steps in the [Quick Start Guide](#quick-start-guide).
##### Inference performance: NVIDIA DGX-1 (1x V100 32GB)
Our results were obtained by running the `scripts/inference_{AMP, FP32}_V100-32G.sh` inference script in the PyTorch 21.06-py3 NGC container on NVIDIA DGX-1 with 1x V100 32GB GPU. Performance numbers (in images per second) were averaged over an entire evaluation epoch.
| GPUs | Batch size / GPU | Throughput - FP32 | Throughput - mixed precision | Throughput speedup (FP32 - mixed precision)
| --- | --- | ----- | ----- | ----- |
| 1 | 8 | 38.81 | 42.25 | 1.08 |
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
## Release notes
### Changelog
July 2021
- Initial Release
### Known Issues
There are no known issues with this model.
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/plugins/taco2DenoiseTransformPlugin | taco2DenoiseTransformPlugin | taco2DenoiseTransformKernel | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_DENOISETRANSFORMKERNEL_H
#define TT2I_DENOISETRANSFORMKERNEL_H
#include "cuda_runtime.h"
namespace nvinfer1
{
namespace plugin
{
class Taco2DenoiseTransformKernel
{
public:
/**
* @brief Compute the reduced noise version of signal with real and imaginary
* components.
*
* @param batchSize The size of the batch.
* @param inputDevice The input tensor, with the first half containing the
* real component, and the second half containing the imaginary component.
* @param noiseDevice The magnitude of the noise.
* @param outputDevice The output tensor, with the first half containing the
* real component, and the second half containing the imaginary component.
* @param width The width of the components.
* @param inputLength The length of each half of the input.
* @param stream The stream to operate on.
*/
static void compute(const int batchSize, const float* const inputDevice, const float* const noiseDevice,
float* const outputDevice, const int width, const int inputLength, cudaStream_t stream);
};
} // namespace plugin
} // namespace nvinfer1
#endif
|
TensorFlow/Detection/SSD/models/research/object_detection/core | core | prefetcher_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for object_detection.core.prefetcher."""
import tensorflow as tf
from object_detection.core import prefetcher
slim = tf.contrib.slim
class PrefetcherTest(tf.test.TestCase):
def test_prefetch_tensors_with_fully_defined_shapes(self):
with self.test_session() as sess:
batch_size = 10
image_size = 32
num_batches = 5
examples = tf.Variable(tf.constant(0, dtype=tf.int64))
counter = examples.count_up_to(num_batches)
image = tf.random_normal([batch_size, image_size,
image_size, 3],
dtype=tf.float32,
name='images')
label = tf.random_uniform([batch_size, 1], 0, 10,
dtype=tf.int32, name='labels')
prefetch_queue = prefetcher.prefetch(tensor_dict={'counter': counter,
'image': image,
'label': label},
capacity=100)
tensor_dict = prefetch_queue.dequeue()
self.assertAllEqual(tensor_dict['image'].get_shape().as_list(),
[batch_size, image_size, image_size, 3])
self.assertAllEqual(tensor_dict['label'].get_shape().as_list(),
[batch_size, 1])
tf.initialize_all_variables().run()
with slim.queues.QueueRunners(sess):
for _ in range(num_batches):
results = sess.run(tensor_dict)
self.assertEquals(results['image'].shape,
(batch_size, image_size, image_size, 3))
self.assertEquals(results['label'].shape, (batch_size, 1))
with self.assertRaises(tf.errors.OutOfRangeError):
sess.run(tensor_dict)
def test_prefetch_tensors_with_partially_defined_shapes(self):
with self.test_session() as sess:
batch_size = 10
image_size = 32
num_batches = 5
examples = tf.Variable(tf.constant(0, dtype=tf.int64))
counter = examples.count_up_to(num_batches)
image = tf.random_normal([batch_size,
tf.Variable(image_size),
tf.Variable(image_size), 3],
dtype=tf.float32,
name='image')
image.set_shape([batch_size, None, None, 3])
label = tf.random_uniform([batch_size, tf.Variable(1)], 0,
10, dtype=tf.int32, name='label')
label.set_shape([batch_size, None])
prefetch_queue = prefetcher.prefetch(tensor_dict={'counter': counter,
'image': image,
'label': label},
capacity=100)
tensor_dict = prefetch_queue.dequeue()
self.assertAllEqual(tensor_dict['image'].get_shape().as_list(),
[batch_size, None, None, 3])
self.assertAllEqual(tensor_dict['label'].get_shape().as_list(),
[batch_size, None])
tf.initialize_all_variables().run()
with slim.queues.QueueRunners(sess):
for _ in range(num_batches):
results = sess.run(tensor_dict)
self.assertEquals(results['image'].shape,
(batch_size, image_size, image_size, 3))
self.assertEquals(results['label'].shape, (batch_size, 1))
with self.assertRaises(tf.errors.OutOfRangeError):
sess.run(tensor_dict)
if __name__ == '__main__':
tf.test.main()
|
PaddlePaddle/LanguageModeling/BERT/vocab | vocab | bert-base-cased-vocab | [PAD]
[unused1]
[unused2]
[unused3]
[unused4]
[unused5]
[unused6]
[unused7]
[unused8]
[unused9]
[unused10]
[unused11]
[unused12]
[unused13]
[unused14]
[unused15]
[unused16]
[unused17]
[unused18]
[unused19]
[unused20]
[unused21]
[unused22]
[unused23]
[unused24]
[unused25]
[unused26]
[unused27]
[unused28]
[unused29]
[unused30]
[unused31]
[unused32]
[unused33]
[unused34]
[unused35]
[unused36]
[unused37]
[unused38]
[unused39]
[unused40]
[unused41]
[unused42]
[unused43]
[unused44]
[unused45]
[unused46]
[unused47]
[unused48]
[unused49]
[unused50]
[unused51]
[unused52]
[unused53]
[unused54]
[unused55]
[unused56]
[unused57]
[unused58]
[unused59]
[unused60]
[unused61]
[unused62]
[unused63]
[unused64]
[unused65]
[unused66]
[unused67]
[unused68]
[unused69]
[unused70]
[unused71]
[unused72]
[unused73]
[unused74]
[unused75]
[unused76]
[unused77]
[unused78]
[unused79]
[unused80]
[unused81]
[unused82]
[unused83]
[unused84]
[unused85]
[unused86]
[unused87]
[unused88]
[unused89]
[unused90]
[unused91]
[unused92]
[unused93]
[unused94]
[unused95]
[unused96]
[unused97]
[unused98]
[unused99]
[UNK]
[CLS]
[SEP]
[MASK]
[unused100]
[unused101]
!
"
#
$
%
&
'
(
)
*
+
,
-
.
/
0
1
2
3
4
5
6
7
8
9
:
;
<
=
>
?
@
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
[
\
]
^
_
`
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
{
|
}
~
¡
¢
£
¥
§
¨
©
ª
«
¬
®
°
±
²
³
´
µ
¶
·
¹
º
»
¼
½
¾
¿
À
Á
Â
Ä
Å
Æ
Ç
È
É
Í
Î
Ñ
Ó
Ö
×
Ø
Ú
Ü
Þ
ß
à
á
â
ã
ä
å
æ
ç
è
é
ê
ë
ì
í
î
ï
ð
ñ
ò
ó
ô
õ
ö
÷
ø
ù
ú
û
ü
ý
þ
ÿ
Ā
ā
ă
ą
Ć
ć
Č
č
ď
Đ
đ
ē
ė
ę
ě
ğ
ġ
Ħ
ħ
ĩ
Ī
ī
İ
ı
ļ
Ľ
ľ
Ł
ł
ń
ņ
ň
ŋ
Ō
ō
ŏ
ő
Œ
œ
ř
Ś
ś
Ş
ş
Š
š
Ţ
ţ
ť
ũ
ū
ŭ
ů
ű
ų
ŵ
ŷ
ź
Ż
ż
Ž
ž
Ə
ƒ
ơ
ư
ǎ
ǐ
ǒ
ǔ
ǫ
Ș
ș
Ț
ț
ɐ
ɑ
ɔ
ɕ
ə
ɛ
ɡ
ɣ
ɨ
ɪ
ɲ
ɾ
ʀ
ʁ
ʂ
ʃ
ʊ
ʋ
ʌ
ʐ
ʑ
ʒ
ʔ
ʰ
ʲ
ʳ
ʷ
ʻ
ʼ
ʾ
ʿ
ˈ
ː
ˡ
ˢ
ˣ
́
̃
̍
̯
͡
Α
Β
Γ
Δ
Ε
Η
Θ
Ι
Κ
Λ
Μ
Ν
Ο
Π
Σ
Τ
Φ
Χ
Ψ
Ω
ά
έ
ή
ί
α
β
γ
δ
ε
ζ
η
θ
ι
κ
λ
μ
ν
ξ
ο
π
ρ
ς
σ
τ
υ
φ
χ
ψ
ω
ό
ύ
ώ
І
Ј
А
Б
В
Г
Д
Е
Ж
З
И
К
Л
М
Н
О
П
Р
С
Т
У
Ф
Х
Ц
Ч
Ш
Э
Ю
Я
а
б
в
г
д
е
ж
з
и
й
к
л
м
н
о
п
р
с
т
у
ф
х
ц
ч
ш
щ
ъ
ы
ь
э
ю
я
ё
і
ї
ј
њ
ћ
Ա
Հ
ա
ե
ի
կ
մ
յ
ն
ո
ս
տ
ր
ւ
ְ
ִ
ֵ
ֶ
ַ
ָ
ֹ
ּ
א
ב
ג
ד
ה
ו
ז
ח
ט
י
כ
ל
ם
מ
ן
נ
ס
ע
פ
צ
ק
ר
ש
ת
،
ء
آ
أ
إ
ئ
ا
ب
ة
ت
ث
ج
ح
خ
د
ذ
ر
ز
س
ش
ص
ض
ط
ظ
ع
غ
ف
ق
ك
ل
م
ن
ه
و
ى
ي
َ
ِ
ٹ
پ
چ
ک
گ
ہ
ی
ے
ं
आ
क
ग
च
ज
ण
त
द
ध
न
प
ब
भ
म
य
र
ल
व
श
ष
स
ह
ा
ि
ी
ु
े
ो
्
।
॥
আ
ই
এ
ও
ক
খ
গ
চ
ছ
জ
ট
ত
থ
দ
ধ
ন
প
ব
ম
য
র
ল
শ
স
হ
়
া
ি
ী
ু
ে
ো
্
য়
க
த
ப
ம
ய
ர
ல
வ
ா
ி
ு
்
ร
་
ག
ང
ད
ན
བ
མ
ར
ལ
ས
ི
ུ
ེ
ོ
ა
ე
ი
ლ
ნ
ო
რ
ს
ᴬ
ᴵ
ᵀ
ᵃ
ᵇ
ᵈ
ᵉ
ᵍ
ᵏ
ᵐ
ᵒ
ᵖ
ᵗ
ᵘ
ᵢ
ᵣ
ᵤ
ᵥ
ᶜ
ᶠ
ḍ
Ḥ
ḥ
Ḩ
ḩ
ḳ
ṃ
ṅ
ṇ
ṛ
ṣ
ṭ
ạ
ả
ấ
ầ
ẩ
ậ
ắ
ế
ề
ể
ễ
ệ
ị
ọ
ố
ồ
ổ
ộ
ớ
ờ
ợ
ụ
ủ
ứ
ừ
ử
ữ
ự
ỳ
ỹ
ἀ
ἐ
ὁ
ὐ
ὰ
ὶ
ὸ
ῆ
ῖ
ῦ
ῶ
‐
‑
‒
–
—
―
‖
‘
’
‚
“
”
„
†
‡
•
…
‰
′
″
⁄
⁰
ⁱ
⁴
⁵
⁶
⁷
⁸
⁹
⁺
⁻
ⁿ
₀
₁
₂
₃
₄
₅
₆
₇
₈
₉
₊
₍
₎
ₐ
ₑ
ₒ
ₓ
ₕ
ₖ
ₘ
ₙ
ₚ
ₛ
ₜ
₤
€
₱
₹
ℓ
№
ℝ
⅓
←
↑
→
↔
⇌
⇒
∂
∈
−
∗
∘
√
∞
∧
∨
∩
∪
≈
≠
≡
≤
≥
⊂
⊆
⊕
⋅
─
│
■
●
★
☆
☉
♠
♣
♥
♦
♭
♯
⟨
⟩
ⱼ
、
。
《
》
「
」
『
』
〜
い
う
え
お
か
き
く
け
こ
さ
し
す
せ
そ
た
ち
つ
て
と
な
に
の
は
ひ
ま
み
む
め
も
や
ゆ
よ
ら
り
る
れ
ん
ア
ィ
イ
ウ
エ
オ
カ
ガ
キ
ク
グ
コ
サ
シ
ジ
ス
ズ
タ
ダ
ッ
テ
デ
ト
ド
ナ
ニ
ハ
バ
パ
フ
ブ
プ
マ
ミ
ム
ャ
ュ
ラ
リ
ル
レ
ロ
ン
・
ー
一
三
上
下
中
事
二
井
京
人
亻
仁
佐
侍
光
公
力
北
十
南
原
口
史
司
吉
同
和
囗
国
國
土
城
士
大
天
太
夫
女
子
宀
安
宮
宿
小
尚
山
島
川
州
平
年
心
愛
戸
文
新
方
日
明
星
書
月
木
本
李
村
東
松
林
正
武
氏
水
氵
江
河
海
版
犬
王
生
田
白
皇
省
真
石
社
神
竹
美
義
花
藤
西
谷
車
辶
道
郎
郡
部
野
金
長
門
陽
青
食
馬
高
龍
龸
사
씨
의
이
한
fi
fl
!
(
)
,
-
/
:
the
of
and
to
in
was
The
is
for
as
on
with
that
##s
his
by
he
at
from
it
her
He
had
an
were
you
be
In
she
are
but
which
It
not
or
have
my
him
one
this
me
has
also
up
their
first
out
who
been
they
She
into
all
would
its
##ing
time
two
##a
##e
said
about
when
over
more
other
can
after
back
them
then
##ed
there
like
so
only
##n
could
##d
##i
##y
what
no
##o
where
This
made
than
if
You
##ly
through
we
before
##r
just
some
##er
years
do
New
##t
down
between
new
now
will
three
most
On
around
year
used
such
being
well
during
They
know
against
under
later
did
part
known
off
while
His
re
...
##l
people
until
way
American
didn
University
your
both
many
get
United
became
head
There
second
As
work
any
But
still
again
born
even
eyes
After
including
de
took
And
long
team
season
family
see
right
same
called
name
because
film
don
10
found
much
school
##es
going
won
place
away
We
day
left
John
000
hand
since
World
these
how
make
number
each
life
area
man
four
go
No
here
very
National
##m
played
released
never
began
States
album
home
last
too
held
several
May
own
##on
take
end
School
##h
ll
series
What
want
use
another
city
When
2010
side
At
may
That
came
face
June
think
game
those
high
March
early
September
##al
2011
looked
July
state
small
thought
went
January
October
##u
based
August
##us
world
good
April
York
us
12
2012
2008
For
2009
group
along
few
South
little
##k
following
November
something
2013
December
set
2007
old
2006
2014
located
##an
music
County
City
former
##in
room
ve
next
All
##man
got
father
house
##g
body
15
20
18
started
If
2015
town
our
line
War
large
population
named
British
company
member
five
My
single
##en
age
State
moved
February
11
Her
should
century
government
built
come
best
show
However
within
look
men
door
without
need
wasn
2016
water
One
system
knew
every
died
League
turned
asked
North
St
wanted
building
received
song
served
though
felt
##ia
station
band
##ers
local
public
himself
different
death
say
##1
30
##2
2005
16
night
behind
children
English
members
near
saw
together
son
14
voice
village
13
hands
help
##3
due
French
London
top
told
open
published
third
2017
play
across
During
put
final
often
include
25
##le
main
having
2004
once
ever
let
book
led
gave
late
front
find
club
##4
German
included
species
College
form
opened
mother
women
enough
West
must
2000
power
really
17
making
half
##6
order
might
##is
given
million
times
days
point
full
service
With
km
major
##7
original
become
seen
II
north
six
##te
love
##0
national
International
##5
24
So
District
lost
run
couldn
career
always
##9
2003
##th
country
##z
House
air
tell
south
worked
woman
player
##A
almost
war
River
##ic
married
continued
Then
James
close
black
short
##8
##na
using
history
returned
light
car
##ra
sure
William
things
General
##ry
2002
better
support
100
among
From
feet
King
anything
21
19
established
district
2001
feel
great
##ton
level
Cup
These
written
games
others
already
title
story
##p
law
thing
US
record
role
however
By
students
England
white
control
least
inside
land
##C
22
give
community
hard
##ie
non
##c
produced
George
round
period
Park
business
various
##ne
does
present
wife
far
taken
per
reached
David
able
version
working
young
live
created
joined
East
living
appeared
case
High
done
23
important
President
Award
France
position
office
looking
total
general
class
To
production
##S
football
party
brother
keep
mind
free
Street
hair
announced
development
either
nothing
moment
Church
followed
wrote
why
India
San
election
1999
lead
How
##ch
##rs
words
European
course
considered
America
arms
Army
political
##la
28
26
west
east
ground
further
church
less
site
First
Not
Australia
toward
California
##ness
described
works
An
Council
heart
past
military
27
##or
heard
field
human
soon
founded
1998
playing
trying
##x
##ist
##ta
television
mouth
although
taking
win
fire
Division
##ity
Party
Royal
program
Some
Don
Association
According
tried
TV
Paul
outside
daughter
Best
While
someone
match
recorded
Canada
closed
region
Air
above
months
elected
##da
##ian
road
##ar
brought
move
1997
leave
##um
Thomas
1996
am
low
Robert
formed
person
services
points
Mr
miles
##b
stop
rest
doing
needed
international
release
floor
start
sound
call
killed
real
dark
research
finished
language
Michael
professional
change
sent
50
upon
29
track
hit
event
2018
term
example
Germany
similar
return
##ism
fact
pulled
stood
says
ran
information
yet
result
developed
girl
##re
God
1995
areas
signed
decided
##ment
Company
seemed
##el
co
turn
race
common
video
Charles
Indian
##ation
blood
art
red
##able
added
rather
1994
met
director
addition
design
average
minutes
##ies
##ted
available
bed
coming
friend
idea
kind
Union
Road
remained
##ting
everything
##ma
running
care
finally
Chinese
appointed
1992
Australian
##ley
popular
mean
teams
probably
##land
usually
project
social
Championship
possible
word
Russian
instead
mi
herself
##T
Peter
Hall
Center
seat
style
money
1993
else
Department
table
Music
current
31
features
special
events
character
Two
square
sold
debut
##v
process
Although
Since
##ka
40
Central
currently
education
placed
lot
China
quickly
forward
seven
##ling
Europe
arm
performed
Japanese
1991
Henry
Now
Dr
##ion
week
Group
myself
big
UK
Washington
ten
deep
1990
Club
Japan
space
La
directed
smile
episode
hours
whole
##de
##less
Why
wouldn
designed
strong
training
changed
Society
stage
involved
hadn
towards
leading
police
eight
kept
Institute
study
largest
child
eventually
private
modern
Court
throughout
getting
originally
attack
##E
talk
Great
longer
songs
alone
##ine
wide
dead
walked
shot
##ri
Oh
force
##st
Art
today
friends
Island
Richard
1989
center
construction
believe
size
White
ship
completed
##B
gone
Just
rock
sat
##R
radio
below
entire
families
league
includes
type
lived
official
range
hold
featured
Most
##ter
president
passed
means
##f
forces
lips
Mary
Do
guitar
##ce
food
wall
Of
spent
Its
performance
hear
##P
Western
reported
sister
##et
morning
##M
especially
##ive
Minister
itself
post
bit
groups
1988
##tion
Black
##ng
Well
raised
sometimes
Canadian
Paris
Spanish
replaced
schools
Academy
leaving
central
female
Christian
Jack
whose
college
onto
provided
##D
##ville
players
actually
stopped
##son
Museum
doesn
##ts
books
fight
allowed
##ur
beginning
Records
awarded
parents
coach
##os
Red
saying
##ck
Smith
Yes
Lake
##L
aircraft
1987
##ble
previous
ft
action
Italian
African
happened
vocals
Act
future
court
##ge
1986
degree
phone
##ro
Is
countries
winning
breath
Love
river
matter
Lord
Other
list
self
parts
##ate
provide
cut
shows
plan
1st
interest
##ized
Africa
stated
Sir
fell
owned
earlier
ended
competition
attention
1985
lower
nearly
bad
older
stay
Saint
##se
certain
1984
fingers
blue
try
fourth
Grand
##as
king
##nt
makes
chest
movement
states
moving
data
introduced
model
date
section
Los
deal
##I
skin
entered
middle
success
Texas
##w
summer
island
##N
Republic
length
husband
1980
##ey
reason
anyone
forced
via
base
500
job
covered
Festival
Roman
successful
rights
cover
Man
writing
Ireland
##F
related
goal
takes
buildings
true
weeks
1983
Because
opening
novel
ISBN
meet
gold
##ous
mid
km²
standing
Football
Chicago
shook
whom
##ki
1982
Day
feeling
scored
boy
higher
Force
leader
heavy
fall
question
sense
army
Second
energy
meeting
themselves
kill
##am
board
census
##ya
##ns
mine
meant
market
required
battle
campaign
attended
approximately
Kingdom
runs
active
##ha
contract
clear
previously
health
1979
Arts
complete
Catholic
couple
units
##ll
##ty
Committee
shoulder
sea
systems
listed
##O
caught
tournament
##G
northern
author
Film
Your
##men
holding
offered
personal
1981
southern
artist
traditional
studio
200
capital
##ful
regular
ask
giving
organization
month
news
Are
read
managed
helped
studied
student
defeated
natural
industry
Year
noted
decision
Government
quite
##id
smiled
1972
Maybe
tracks
##ke
Mark
al
media
engine
hour
Their
relationship
plays
property
structure
1976
ago
Hill
Martin
1978
ready
Many
Like
Bay
immediately
generally
Italy
Greek
practice
caused
division
significant
Joseph
speed
Let
thinking
completely
1974
primary
mostly
##field
##K
1975
##to
Even
writer
##led
dropped
magazine
collection
understand
route
highest
particular
films
lines
network
Science
loss
carried
direction
green
1977
location
producer
according
Women
Queen
neck
thus
independent
view
1970
Angeles
Soviet
distance
problem
Board
tour
western
income
appearance
access
Mexico
nodded
street
surface
arrived
believed
Old
1968
1973
becoming
whether
1945
figure
singer
stand
Following
issue
window
wrong
pain
everyone
lives
issues
park
slowly
la
act
##va
bring
Lee
operations
key
comes
fine
cold
famous
Navy
1971
Me
additional
individual
##ner
Zealand
goals
county
contains
Service
minute
2nd
reach
talking
particularly
##ham
movie
Director
glass
paper
studies
##co
railway
standard
Education
45
represented
Chief
Louis
launched
Star
terms
60
1969
experience
watched
Another
Press
Tom
staff
starting
subject
break
Virginia
nine
eye
##age
evidence
foot
##est
companies
Prince
##V
gun
create
Big
People
guy
Green
simply
numerous
##line
increased
twenty
##ga
##do
1967
award
officer
stone
Before
material
Northern
grew
male
plant
Life
legs
step
Al
unit
35
except
answer
##U
report
response
Edward
commercial
edition
trade
science
##ca
Irish
Law
shown
rate
failed
##ni
remains
changes
mm
limited
larger
Later
cause
waiting
Time
##wood
cost
Bill
manager
activities
likely
allow
operated
retired
##ping
65
directly
Who
associated
effect
hell
Florida
straight
hot
Valley
management
girls
expected
eastern
Mike
chance
cast
centre
chair
hurt
problems
##li
walk
programs
Team
characters
Battle
edge
pay
maybe
corner
majority
medical
Joe
Summer
##io
attempt
Pacific
command
Radio
##by
names
municipality
1964
train
economic
Brown
feature
sex
source
agreed
remember
Three
1966
1965
Pennsylvania
victory
senior
annual
III
Southern
results
Sam
serving
religious
Jones
appears
##der
despite
claimed
Both
musical
matches
fast
security
selected
Young
double
complex
hospital
chief
Times
##ve
Championships
filled
Public
Despite
beautiful
Research
plans
Province
##ally
Wales
##ko
artists
metal
nearby
Spain
##il
32
houses
supported
piece
##no
stared
recording
nature
legal
Russia
##ization
remaining
looks
##sh
bridge
closer
cases
scene
marriage
Little
##é
uses
Earth
specific
Frank
theory
Good
discovered
referred
bass
culture
university
presented
Congress
##go
metres
continue
1960
isn
Awards
meaning
cell
composed
separate
Series
forms
Blue
cross
##tor
increase
test
computer
slightly
Where
Jewish
Town
tree
status
1944
variety
responsible
pretty
initially
##way
realized
pass
provides
Captain
Alexander
recent
score
broke
Scott
drive
financial
showed
Line
stories
ordered
soldiers
genus
operation
gaze
sitting
society
Only
hope
actor
follow
Empire
Yeah
technology
happy
focus
policy
spread
situation
##ford
##ba
Mrs
watch
Can
1963
Commission
touch
earned
troops
Under
1962
individuals
cannot
19th
##lin
mile
expression
exactly
suddenly
weight
dance
stepped
places
appear
difficult
Railway
anti
numbers
kilometres
star
##ier
department
ice
Britain
removed
Once
##lo
Boston
value
##ant
mission
trees
Order
sports
join
serve
Major
poor
Poland
mainly
Theatre
pushed
Station
##it
Lady
federal
silver
##ler
foreign
##ard
Eastern
##den
box
hall
subsequently
lies
acquired
1942
ancient
CD
History
Jean
beyond
##ger
El
##les
growing
championship
native
Parliament
Williams
watching
direct
overall
offer
Also
80
Secretary
spoke
Latin
ability
##ated
safe
presence
##ial
headed
regional
planned
1961
Johnson
throat
consists
##W
extended
Or
bar
walls
Chris
stations
politician
Olympics
influence
share
fighting
speak
hundred
Carolina
die
stars
##tic
color
Chapter
##ish
fear
sleep
goes
Francisco
oil
Bank
sign
physical
##berg
Dutch
seasons
##rd
Games
Governor
sorry
lack
Centre
memory
baby
smaller
charge
Did
multiple
ships
shirt
Assembly
amount
leaves
3rd
Foundation
conditions
1943
Rock
Democratic
Daniel
##at
winner
products
##ina
store
latter
Professor
civil
prior
host
1956
soft
vote
needs
Each
rules
1958
pressure
letter
normal
proposed
levels
records
1959
paid
intended
Victoria
purpose
okay
historical
issued
1980s
broadcast
rule
simple
picked
firm
Sea
1941
Elizabeth
1940
serious
featuring
highly
graduated
mentioned
choice
1948
replied
percent
Scotland
##hi
females
constructed
1957
settled
Steve
recognized
cities
crew
glanced
kiss
competed
flight
knowledge
editor
More
Conference
##H
fifth
elements
##ee
##tes
function
newspaper
recently
Miss
cultural
brown
twice
Office
1939
truth
Creek
1946
households
USA
1950
quality
##tt
border
seconds
destroyed
pre
wait
ahead
build
image
90
cars
##mi
33
promoted
professor
et
bank
medal
text
broken
Middle
revealed
sides
wing
seems
channel
1970s
Ben
loved
effort
officers
Will
##ff
70
Israel
Jim
upper
fully
label
Jr
assistant
powerful
pair
positive
##ary
gives
1955
20th
races
remain
kitchen
primarily
##ti
Sydney
easy
Tour
whispered
buried
300
News
Polish
1952
Duke
Columbia
produce
accepted
00
approach
minor
1947
Special
44
Asian
basis
visit
Fort
Civil
finish
formerly
beside
leaned
##ite
median
rose
coast
effects
supposed
Cross
##hip
Corps
residents
Jackson
##ir
Bob
basketball
36
Asia
seem
Bishop
Book
##ber
ring
##ze
owner
BBC
##ja
transferred
acting
De
appearances
walking
Le
press
grabbed
1954
officially
1953
##pe
risk
taught
review
##X
lay
##well
council
Avenue
seeing
losing
Ohio
Super
province
ones
travel
##sa
projects
equipment
spot
Berlin
administrative
heat
potential
shut
capacity
elections
growth
fought
Republican
mixed
Andrew
teacher
turning
strength
shoulders
beat
wind
1949
Health
follows
camp
suggested
perhaps
Alex
mountain
contact
divided
candidate
fellow
34
Show
necessary
workers
ball
horse
ways
questions
protect
gas
activity
younger
bottom
founder
Scottish
screen
treatment
easily
com
##house
dedicated
Master
warm
Night
Georgia
Long
von
##me
perfect
website
1960s
piano
efforts
##ide
Tony
sort
offers
Development
Simon
executive
##nd
save
Over
Senate
1951
1990s
draw
master
Police
##ius
renamed
boys
initial
prominent
damage
Co
##ov
##za
online
begin
occurred
captured
youth
Top
account
tells
Justice
conducted
forest
##town
bought
teeth
Jersey
##di
purchased
agreement
Michigan
##ure
campus
prison
becomes
product
secret
guess
Route
huge
types
drums
64
split
defeat
estate
housing
##ot
brothers
Coast
declared
happen
titled
therefore
sun
commonly
alongside
Stadium
library
Home
article
steps
telling
slow
assigned
refused
laughed
wants
Nick
wearing
Rome
Open
##ah
Hospital
pointed
Taylor
lifted
escape
participated
##j
drama
parish
Santa
##per
organized
mass
pick
Airport
gets
Library
unable
pull
Live
##ging
surrounding
##ries
focused
Adam
facilities
##ning
##ny
38
##ring
notable
era
connected
gained
operating
laid
Regiment
branch
defined
Christmas
machine
Four
academic
Iran
adopted
concept
Men
compared
search
traffic
Max
Maria
greater
##ding
widely
##burg
serves
1938
37
Go
hotel
shared
typically
scale
1936
leg
suffered
yards
pieces
Ministry
Wilson
episodes
empty
1918
safety
continues
yellow
historic
settlement
400
Come
Corporation
enemy
content
picture
evening
territory
method
trial
solo
driver
Here
##ls
entrance
Prize
spring
whatever
##ent
75
##ji
reading
Arthur
##cy
Our
clothes
Prime
Illinois
Kong
code
##ria
sit
Harry
Federal
chosen
administration
bodies
begins
stomach
Though
seats
Hong
density
Sun
leaders
Field
museum
chart
platform
languages
##ron
birth
holds
Gold
##un
fish
combined
##ps
4th
1937
largely
captain
trust
Game
van
boat
Oxford
basic
beneath
Islands
painting
nice
Toronto
path
males
sources
block
conference
parties
murder
clubs
crowd
calling
About
Business
peace
knows
lake
speaking
stayed
Brazil
allowing
Born
unique
thick
Technology
##que
receive
des
semi
alive
noticed
format
##ped
coffee
digital
##ned
handed
guard
tall
faced
setting
plants
partner
claim
reduced
temple
animals
determined
classes
##out
estimated
##ad
Olympic
providing
Massachusetts
learned
Inc
Philadelphia
Social
carry
42
possibly
hosted
tonight
respectively
Today
shape
Mount
roles
designated
brain
etc
Korea
thoughts
Brian
Highway
doors
background
drew
models
footballer
tone
turns
1935
quiet
tower
wood
bus
write
software
weapons
flat
marked
1920
newly
tight
Eric
finger
Journal
FC
Van
rise
critical
Atlantic
granted
returning
communities
humans
quick
39
48
ranked
sight
pop
Swedish
Stephen
card
analysis
attacked
##wa
Sunday
identified
Jason
champion
situated
1930
expanded
tears
##nce
reaching
Davis
protection
Emperor
positions
nominated
Bridge
tax
dress
allows
avoid
leadership
killing
actress
guest
steel
knowing
electric
cells
disease
grade
unknown
##ium
resulted
Pakistan
confirmed
##ged
tongue
covers
##Y
roof
entirely
applied
votes
drink
interview
exchange
Township
reasons
##ised
page
calls
dog
agent
nose
teaching
##ds
##ists
advanced
wish
Golden
existing
vehicle
del
1919
develop
attacks
pressed
Sports
planning
resulting
facility
Sarah
notes
1933
Class
Historic
winter
##mo
audience
Community
household
Netherlands
creation
##ize
keeping
1914
claims
dry
guys
opposite
##ak
explained
Ontario
secondary
difference
Francis
actions
organizations
yard
animal
Up
Lewis
titles
Several
1934
Ryan
55
Supreme
rolled
1917
distribution
figures
afraid
rural
yourself
##rt
sets
barely
Instead
passing
awards
41
silence
authority
occupied
environment
windows
engineering
surprised
flying
crime
reports
Mountain
powers
driving
succeeded
reviews
1929
Head
missing
Song
Jesus
opportunity
inspired
ends
albums
conversation
impact
injury
surprise
billion
learning
heavily
oldest
union
creating
##ky
festival
literature
letters
sexual
##tte
apartment
Final
comedy
nation
orders
##sen
contemporary
Power
drawn
existence
connection
##ating
Post
Junior
remembered
message
Medal
castle
note
engineer
sounds
Beach
crossed
##dy
ear
scientific
sales
##ai
theme
starts
clearly
##ut
trouble
##gan
bag
##han
BC
sons
1928
silent
versions
daily
Studies
ending
Rose
guns
1932
headquarters
reference
obtained
Squadron
concert
none
du
Among
##don
prevent
Member
answered
staring
Between
##lla
portion
drug
liked
association
performances
Nations
formation
Castle
lose
learn
scoring
relatively
quarter
47
Premier
##ors
Sweden
baseball
attempted
trip
worth
perform
airport
fields
enter
honor
Medical
rear
commander
officials
condition
supply
materials
52
Anna
volume
threw
Persian
43
interested
Gallery
achieved
visited
laws
relief
Area
Matt
singles
Lieutenant
Country
fans
Cambridge
sky
Miller
effective
tradition
Port
##ana
minister
extra
entitled
System
sites
authorities
acres
committee
racing
1931
desk
trains
ass
weren
Family
farm
##ance
industrial
##head
iron
49
abandoned
Out
Holy
chairman
waited
frequently
display
Light
transport
starring
Patrick
Engineering
eat
FM
judge
reaction
centuries
price
##tive
Korean
defense
Get
arrested
1927
send
urban
##ss
pilot
Okay
Media
reality
arts
soul
thirty
##be
catch
generation
##nes
apart
Anne
drop
See
##ving
sixth
trained
Management
magic
cm
height
Fox
Ian
resources
vampire
principal
Was
haven
##au
Walter
Albert
rich
1922
causing
entry
##ell
shortly
46
worry
doctor
composer
rank
Network
bright
showing
regions
1924
wave
carrying
kissed
finding
missed
Earl
lying
target
vehicles
Military
controlled
dinner
##board
briefly
lyrics
motion
duty
strange
attempts
invited
kg
villages
5th
Land
##mer
Christ
prepared
twelve
check
thousand
earth
copies
en
transfer
citizens
Americans
politics
nor
theatre
Project
##bo
clean
rooms
laugh
##ran
application
contained
anyway
containing
Sciences
1925
rare
speech
exist
1950s
falling
passenger
##im
stands
51
##ol
##ow
phase
governor
kids
details
methods
Vice
employed
performing
counter
Jane
heads
Channel
wine
opposition
aged
1912
Every
1926
highway
##ura
1921
aired
978
permanent
Forest
finds
joint
approved
##pur
brief
doubt
acts
brand
wild
closely
Ford
Kevin
chose
shall
port
sweet
fun
asking
Be
##bury
sought
Dave
Mexican
mom
Right
Howard
Moscow
Charlie
Stone
##mann
admitted
##ver
wooden
1923
Officer
relations
Hot
combat
publication
chain
shop
inhabitants
proved
ideas
address
1915
Memorial
explain
increasing
conflict
Anthony
Melbourne
narrow
temperature
slid
1916
worse
selling
documentary
Ali
Ray
opposed
vision
dad
extensive
Infantry
commissioned
Doctor
offices
programming
core
respect
storm
##pa
##ay
##om
promotion
der
struck
anymore
shit
Region
receiving
DVD
alternative
##ue
ride
maximum
1910
##ious
Third
Affairs
cancer
Executive
##op
dream
18th
Due
##ker
##worth
economy
IV
Billboard
identity
subsequent
statement
skills
##back
funding
##ons
Round
Foreign
truck
Please
lights
wondered
##ms
frame
yes
Still
districts
fiction
Colonel
converted
150
grown
accident
critics
fit
Information
architecture
Point
Five
armed
Billy
poet
functions
consisted
suit
Turkish
Band
object
desire
##ities
sounded
flow
Norwegian
articles
Marie
pulling
thin
singing
Hunter
Human
Battalion
Federation
Kim
origin
represent
dangerous
weather
fuel
ex
##sing
Last
bedroom
aid
knees
Alan
angry
assumed
plane
Something
founding
concerned
global
Fire
di
please
Portuguese
touched
Roger
nuclear
Register
Jeff
fixed
royal
lie
finals
NFL
Manchester
towns
handle
shaped
Chairman
Dean
launch
understanding
Children
violence
failure
sector
Brigade
wrapped
fired
sharp
tiny
developing
expansion
Free
institutions
technical
Nothing
otherwise
Main
inch
Saturday
wore
Senior
attached
cheek
representing
Kansas
##chi
##kin
actual
advantage
Dan
Austria
##dale
hoped
multi
squad
Norway
streets
1913
Services
hired
grow
pp
wear
painted
Minnesota
stuff
Building
54
Philippines
1900
##ties
educational
Khan
Magazine
##port
Cape
signal
Gordon
sword
Anderson
cool
engaged
Commander
images
Upon
tied
Security
cup
rail
Vietnam
successfully
##red
Muslim
gain
bringing
Native
hers
occurs
negative
Philip
Kelly
Colorado
category
##lan
600
Have
supporting
wet
56
stairs
Grace
observed
##ung
funds
restaurant
1911
Jews
##ments
##che
Jake
Back
53
asks
journalist
accept
bands
bronze
helping
##ice
decades
mayor
survived
usual
influenced
Douglas
Hey
##izing
surrounded
retirement
Temple
derived
Pope
registered
producing
##ral
structures
Johnny
contributed
finishing
buy
specifically
##king
patients
Jordan
internal
regarding
Samuel
Clark
##q
afternoon
Finally
scenes
notice
refers
quietly
threat
Water
Those
Hamilton
promise
freedom
Turkey
breaking
maintained
device
lap
ultimately
Champion
Tim
Bureau
expressed
investigation
extremely
capable
qualified
recognition
items
##up
Indiana
adult
rain
greatest
architect
Morgan
dressed
equal
Antonio
collected
drove
occur
Grant
graduate
anger
Sri
worried
standards
##ore
injured
somewhere
damn
Singapore
Jimmy
pocket
homes
stock
religion
aware
regarded
Wisconsin
##tra
passes
fresh
##ea
argued
Ltd
EP
Diego
importance
Census
incident
Egypt
Missouri
domestic
leads
ceremony
Early
camera
Father
challenge
Switzerland
lands
familiar
hearing
spend
educated
Tennessee
Thank
##ram
Thus
concern
putting
inches
map
classical
Allen
crazy
valley
Space
softly
##my
pool
worldwide
climate
experienced
neighborhood
scheduled
neither
fleet
1908
Girl
##J
Part
engines
locations
darkness
Revolution
establishment
lawyer
objects
apparently
Queensland
Entertainment
bill
mark
Television
##ong
pale
demand
Hotel
selection
##rn
##ino
Labour
Liberal
burned
Mom
merged
Arizona
request
##lia
##light
hole
employees
##ical
incorporated
95
independence
Walker
covering
joining
##ica
task
papers
backing
sell
biggest
6th
strike
establish
##ō
gently
59
Orchestra
Winter
protein
Juan
locked
dates
Boy
aren
shooting
Luke
solid
charged
Prior
resigned
interior
garden
spoken
improve
wonder
promote
hidden
##med
combination
Hollywood
Swiss
consider
##ks
Lincoln
literary
drawing
Marine
weapon
Victor
Trust
Maryland
properties
##ara
exhibition
understood
hung
Tell
installed
loud
fashion
affected
junior
landing
flowers
##he
Internet
beach
Heart
tries
Mayor
programme
800
wins
noise
##ster
##ory
58
contain
fair
delivered
##ul
wedding
Square
advance
behavior
Program
Oregon
##rk
residence
realize
certainly
hill
Houston
57
indicated
##water
wounded
Village
massive
Moore
thousands
personnel
dating
opera
poetry
##her
causes
feelings
Frederick
applications
push
approached
foundation
pleasure
sale
fly
gotten
northeast
costs
raise
paintings
##ney
views
horses
formal
Arab
hockey
typical
representative
rising
##des
clock
stadium
shifted
Dad
peak
Fame
vice
disappeared
users
Way
Naval
prize
hoping
values
evil
Bell
consisting
##ón
Regional
##ics
improved
circle
carefully
broad
##ini
Fine
maintain
operate
offering
mention
Death
stupid
Through
Princess
attend
interests
ruled
somewhat
wings
roads
grounds
##ual
Greece
Champions
facing
hide
voted
require
Dark
Matthew
credit
sighed
separated
manner
##ile
Boys
1905
committed
impossible
lip
candidates
7th
Bruce
arranged
Islamic
courses
criminal
##ened
smell
##bed
08
consecutive
##ening
proper
purchase
weak
Prix
1906
aside
introduction
Look
##ku
changing
budget
resistance
factory
Forces
agency
##tone
northwest
user
1907
stating
##one
sport
Design
environmental
cards
concluded
Carl
250
accused
##ology
Girls
sick
intelligence
Margaret
responsibility
Guard
##tus
17th
sq
goods
1909
hate
##ek
capture
stores
Gray
comic
Modern
Silver
Andy
electronic
wheel
##ied
Deputy
##bs
Czech
zone
choose
constant
reserve
##lle
Tokyo
spirit
sub
degrees
flew
pattern
compete
Dance
##ik
secretary
Imperial
99
reduce
Hungarian
confused
##rin
Pierre
describes
regularly
Rachel
85
landed
passengers
##ise
##sis
historian
meters
Youth
##ud
participate
##cing
arrival
tired
Mother
##gy
jumped
Kentucky
faces
feed
Israeli
Ocean
##Q
##án
plus
snow
techniques
plate
sections
falls
jazz
##ris
tank
loan
repeated
opinion
##res
unless
rugby
journal
Lawrence
moments
shock
distributed
##ded
adjacent
Argentina
crossing
uncle
##ric
Detroit
communication
mental
tomorrow
session
Emma
Without
##gen
Miami
charges
Administration
hits
coat
protected
Cole
invasion
priest
09
Gary
enjoyed
plot
measure
bound
friendly
throw
musician
##lon
##ins
Age
knife
damaged
birds
driven
lit
ears
breathing
Arabic
Jan
faster
Jonathan
##gate
Independent
starred
Harris
teachers
Alice
sequence
mph
file
translated
decide
determine
Review
documents
sudden
threatened
##ft
bear
distinct
decade
burning
##sky
1930s
replace
begun
extension
##time
1904
equivalent
accompanied
Christopher
Danish
##ye
Besides
##more
persons
fallen
Rural
roughly
saved
willing
ensure
Belgium
05
musicians
##ang
giant
Six
Retrieved
worst
purposes
##bly
mountains
seventh
slipped
brick
07
##py
somehow
Carter
Iraq
cousin
favor
islands
journey
FIFA
contrast
planet
vs
calm
##ings
concrete
branches
gray
profit
Russell
##ae
##ux
##ens
philosophy
businesses
talked
parking
##ming
owners
Place
##tle
agricultural
Kate
06
southeast
draft
Eddie
earliest
forget
Dallas
Commonwealth
edited
66
inner
ed
operates
16th
Harvard
assistance
##si
designs
Take
bathroom
indicate
CEO
Command
Louisiana
1902
Dublin
Books
1901
tropical
1903
##tors
Places
tie
progress
forming
solution
62
letting
##ery
studying
##jo
duties
Baseball
taste
Reserve
##ru
Ann
##gh
visible
##vi
notably
link
NCAA
southwest
Never
storage
mobile
writers
favorite
Pro
pages
truly
count
##tta
string
kid
98
Ross
row
##idae
Kennedy
##tan
Hockey
hip
waist
grandfather
listen
##ho
feels
busy
72
stream
obvious
cycle
shaking
Knight
##ren
Carlos
painter
trail
web
linked
04
Palace
existed
##ira
responded
closing
End
examples
Marshall
weekend
jaw
Denmark
lady
township
medium
chin
Story
option
fifteen
Moon
represents
makeup
investment
jump
childhood
Oklahoma
roll
normally
Ten
Operation
Graham
Seattle
Atlanta
paused
promised
rejected
treated
returns
flag
##ita
Hungary
danger
glad
movements
visual
subjects
credited
soldier
Norman
ill
translation
José
Quebec
medicine
warning
theater
praised
municipal
01
commune
churches
acid
folk
8th
testing
add
survive
Sound
devices
residential
severe
presidential
Mississippi
Austin
Perhaps
Charlotte
hanging
Montreal
grin
##ten
racial
partnership
shoot
shift
##nie
Les
downtown
Brothers
Garden
matters
restored
mirror
forever
winners
rapidly
poverty
##ible
Until
DC
faith
hundreds
Real
Ukraine
Nelson
balance
Adams
contest
relative
ethnic
Edinburgh
composition
##nts
emergency
##van
marine
reputation
Down
pack
12th
Communist
Mountains
pro
stages
measures
##ld
ABC
Li
victims
benefit
Iowa
Broadway
gathered
rating
Defense
classic
##ily
ceiling
##ions
snapped
Everything
constituency
Franklin
Thompson
Stewart
entering
Judge
forth
##sk
wanting
smiling
moves
tunnel
premiered
grass
unusual
Ukrainian
bird
Friday
tail
Portugal
coal
element
Fred
guards
Senator
collaboration
beauty
Wood
chemical
beer
justice
signs
##Z
sees
##zi
Puerto
##zed
96
smooth
Bowl
gift
limit
97
heading
Source
wake
requires
Ed
Constitution
factor
Lane
factors
adding
Note
cleared
pictures
pink
##ola
Kent
Local
Singh
moth
Ty
##ture
courts
Seven
temporary
involving
Vienna
emerged
fishing
agree
defensive
stuck
secure
Tamil
##ick
bottle
03
Player
instruments
Spring
patient
flesh
contributions
cry
Malaysia
120
Global
da
Alabama
Within
##work
debuted
expect
Cleveland
concerns
retained
horror
10th
spending
Peace
Transport
grand
Crown
instance
institution
acted
Hills
mounted
Campbell
shouldn
1898
##ably
chamber
soil
88
Ethan
sand
cheeks
##gi
marry
61
weekly
classification
DNA
Elementary
Roy
definitely
Soon
Rights
gate
suggests
aspects
imagine
golden
beating
Studios
Warren
differences
significantly
glance
occasionally
##od
clothing
Assistant
depth
sending
possibility
mode
prisoners
requirements
daughters
dated
Representatives
prove
guilty
interesting
smoke
cricket
93
##ates
rescue
Connecticut
underground
Opera
13th
reign
##ski
thanks
leather
equipped
routes
fan
##ans
script
Wright
bishop
Welsh
jobs
faculty
eleven
Railroad
appearing
anniversary
Upper
##down
anywhere
Rugby
Metropolitan
Meanwhile
Nicholas
champions
forehead
mining
drinking
76
Jerry
membership
Brazilian
Wild
Rio
scheme
Unlike
strongly
##bility
fill
##rian
easier
MP
Hell
##sha
Stanley
banks
Baron
##ique
Robinson
67
Gabriel
Austrian
Wayne
exposed
##wan
Alfred
1899
manage
mix
visitors
eating
##rate
Sean
commission
Cemetery
policies
Camp
parallel
traveled
guitarist
02
supplies
couples
poem
blocks
Rick
Training
Energy
achieve
appointment
Wing
Jamie
63
novels
##em
1890
songwriter
Base
Jay
##gar
naval
scared
miss
labor
technique
crisis
Additionally
backed
destroy
seriously
tools
tennis
91
god
##ington
continuing
steam
obviously
Bobby
adapted
fifty
enjoy
Jacob
publishing
column
##ular
Baltimore
Donald
Liverpool
92
drugs
movies
##ock
Heritage
##je
##istic
vocal
strategy
gene
advice
##bi
Ottoman
riding
##side
Agency
Indonesia
11th
laughing
sleeping
und
muttered
listening
deck
tip
77
ownership
grey
Claire
deeply
provincial
popularity
Cooper
##á
Emily
##sed
designer
Murray
describe
Danny
Around
Parker
##dae
68
rates
suffering
considerable
78
nervous
powered
tons
circumstances
wished
belonged
Pittsburgh
flows
9th
##use
belt
81
useful
15th
context
List
Dead
Iron
seek
Season
worn
frequency
legislation
replacement
memories
Tournament
Again
Barry
organisation
copy
Gulf
waters
meets
struggle
Oliver
1895
Susan
protest
kick
Alliance
components
1896
Tower
Windows
demanded
regiment
sentence
Woman
Logan
Referee
hosts
debate
knee
Blood
##oo
universities
practices
Ward
ranking
correct
happening
Vincent
attracted
classified
##stic
processes
immediate
waste
increasingly
Helen
##po
Lucas
Phil
organ
1897
tea
suicide
actors
lb
crash
approval
waves
##ered
hated
grip
700
amongst
69
74
hunting
dying
lasted
illegal
##rum
stare
defeating
##gs
shrugged
°C
Jon
Count
Orleans
94
affairs
formally
##and
##ves
criticized
Disney
Vol
successor
tests
scholars
palace
Would
celebrated
rounds
grant
Schools
Such
commanded
demon
Romania
##all
Karl
71
##yn
84
Daily
totally
Medicine
fruit
Die
upset
Lower
Conservative
14th
Mitchell
escaped
shoes
Morris
##tz
queen
harder
prime
Thanks
indeed
Sky
authors
rocks
definition
Nazi
accounts
printed
experiences
##ters
divisions
Cathedral
denied
depending
Express
##let
73
appeal
loose
colors
filed
##isation
gender
##ew
throne
forests
Finland
domain
boats
Baker
squadron
shore
remove
##ification
careful
wound
railroad
82
seeking
agents
##ved
Blues
##off
customers
ignored
net
##ction
hiding
Originally
declined
##ess
franchise
eliminated
NBA
merely
pure
appropriate
visiting
forty
markets
offensive
coverage
cave
##nia
spell
##lar
Benjamin
##ire
Convention
filmed
Trade
##sy
##ct
Having
palm
1889
Evans
intense
plastic
Julia
document
jeans
vessel
SR
##fully
proposal
Birmingham
le
##ative
assembly
89
fund
lock
1893
AD
meetings
occupation
modified
Years
odd
aimed
reform
Mission
Works
shake
cat
exception
convinced
executed
pushing
dollars
replacing
soccer
manufacturing
##ros
expensive
kicked
minimum
Josh
coastal
Chase
ha
Thailand
publications
deputy
Sometimes
Angel
effectively
##illa
criticism
conduct
Serbian
landscape
NY
absence
passage
##ula
Blake
Indians
1892
admit
Trophy
##ball
Next
##rated
##ians
charts
kW
orchestra
79
heritage
1894
rough
exists
boundary
Bible
Legislative
moon
medieval
##over
cutting
print
##ett
birthday
##hood
destruction
Julian
injuries
influential
sisters
raising
statue
colour
dancing
characteristics
orange
##ok
##aries
Ken
colonial
twin
Larry
surviving
##shi
Barbara
personality
entertainment
assault
##ering
talent
happens
license
86
couch
Century
soundtrack
shower
swimming
cash
Staff
bent
1885
bay
lunch
##lus
dozen
vessels
CBS
greatly
critic
Test
symbol
panel
shell
output
reaches
87
Front
motor
ocean
##era
##ala
maintenance
violent
scent
Limited
Las
Hope
Theater
Which
survey
Robin
recordings
compilation
##ward
bomb
insurance
Authority
sponsored
satellite
Jazz
refer
stronger
blow
whilst
Wrestling
suggest
##rie
climbed
##els
voices
shopping
1891
Neil
discovery
##vo
##ations
burst
Baby
peaked
Brooklyn
knocked
lift
##try
false
nations
Hugh
Catherine
preserved
distinguished
terminal
resolution
ratio
pants
cited
competitions
completion
DJ
bone
uniform
schedule
shouted
83
1920s
rarely
Basketball
Taiwan
artistic
bare
vampires
arrest
Utah
Marcus
assist
gradually
qualifying
Victorian
vast
rival
Warner
Terry
Economic
##cia
losses
boss
versus
audio
runner
apply
surgery
Play
twisted
comfortable
##cs
Everyone
guests
##lt
Harrison
UEFA
lowered
occasions
##lly
##cher
chapter
youngest
eighth
Culture
##room
##stone
1888
Songs
Seth
Digital
involvement
expedition
relationships
signing
1000
fault
annually
circuit
afterwards
meat
creature
##ou
cable
Bush
##net
Hispanic
rapid
gonna
figured
extent
considering
cried
##tin
sigh
dynasty
##ration
cabinet
Richmond
stable
##zo
1864
Admiral
Unit
occasion
shares
badly
longest
##ify
Connor
extreme
wondering
girlfriend
Studio
##tions
1865
tribe
exact
muscles
hat
Luis
Orthodox
decisions
amateur
description
##lis
hips
kingdom
##ute
Portland
whereas
Bachelor
outer
discussion
partly
Arkansas
1880
dreams
perfectly
Lloyd
##bridge
asleep
##tti
Greg
permission
trading
pitch
mill
Stage
liquid
Keith
##tal
wolf
processing
stick
Jerusalem
profile
rushed
spiritual
argument
Ice
Guy
till
Delhi
roots
Section
missions
Glasgow
penalty
NBC
encouraged
identify
keyboards
##zing
##ston
disc
plain
informed
Bernard
thinks
fled
Justin
##day
newspapers
##wick
Ralph
##zer
unlike
Stars
artillery
##ified
recovered
arrangement
searching
##pers
##tory
##rus
deaths
Egyptian
diameter
##í
marketing
corporate
teach
marks
Turner
staying
hallway
Sebastian
chapel
naked
mistake
possession
1887
dominated
jacket
creative
Fellow
Falls
Defence
suspended
employment
##rry
Hebrew
Hudson
Week
Wars
recognize
Natural
controversial
Tommy
thank
Athletic
benefits
decline
intention
##ets
Lost
Wall
participation
elevation
supports
parliament
1861
concentration
Movement
##IS
competing
stops
behalf
##mm
limits
funded
discuss
Collins
departure
obtain
woods
latest
universe
alcohol
Laura
rush
blade
funny
Dennis
forgotten
Amy
Symphony
apparent
graduating
1862
Rob
Grey
collections
Mason
emotions
##ugh
literally
Any
counties
1863
nomination
fighter
habitat
respond
external
Capital
exit
Video
carbon
sharing
Bad
opportunities
Perry
photo
##mus
Orange
posted
remainder
transportation
portrayed
Labor
recommended
percussion
rated
Grade
rivers
partially
suspected
strip
adults
button
struggled
intersection
Canal
##ability
poems
claiming
Madrid
1886
Together
##our
Much
Vancouver
instrument
instrumental
1870
mad
angle
Control
Phoenix
Leo
Communications
mail
##ette
##ev
preferred
adaptation
alleged
discussed
deeper
##ane
Yet
Monday
volumes
thrown
Zane
##logy
displayed
rolling
dogs
Along
Todd
##ivity
withdrew
representation
belief
##sia
crown
Late
Short
hardly
grinned
romantic
Pete
##ken
networks
enemies
Colin
Eventually
Side
donated
##su
steady
grab
guide
Finnish
Milan
pregnant
controversy
reminded
1884
Stuart
##bach
##ade
Race
Belgian
LP
Production
Zone
lieutenant
infantry
Child
confusion
sang
resident
##ez
victim
1881
channels
Ron
businessman
##gle
Dick
colony
pace
producers
##ese
agencies
Craig
Lucy
Very
centers
Yorkshire
photography
##ched
Album
championships
Metro
substantial
Standard
terrible
directors
contribution
advertising
emotional
##its
layer
segment
sir
folded
Roberts
ceased
Hampshire
##ray
detailed
partners
m²
##pt
Beth
genre
commented
generated
remote
aim
Hans
credits
concerts
periods
breakfast
gay
shadow
defence
Too
Had
transition
Afghanistan
##book
eggs
defend
##lli
writes
Systems
bones
mess
seed
scientists
Shortly
Romanian
##zy
Freedom
muscle
hero
parent
agriculture
checked
Islam
Bristol
Freyja
Arena
cabin
Germans
electricity
ranks
viewed
medals
Wolf
associate
Madison
Sorry
fort
Chile
detail
widespread
attorney
boyfriend
##nan
Students
Spencer
##ig
bite
Maine
demolished
Lisa
erected
Someone
operational
Commissioner
NHL
Coach
Bar
forcing
Dream
Rico
cargo
Murphy
##fish
##ase
distant
##master
##ora
Organization
doorway
Steven
traded
electrical
frequent
##wn
Branch
Sure
1882
placing
Manhattan
attending
attributed
excellent
pounds
ruling
principles
component
Mediterranean
Vegas
machines
percentage
infrastructure
throwing
affiliated
Kings
secured
Caribbean
Track
Ted
honour
opponent
Virgin
Construction
grave
produces
Challenge
stretched
paying
murmured
##ata
integrated
waved
Nathan
##ator
transmission
videos
##yan
##hu
Nova
descent
AM
Harold
conservative
Therefore
venue
competitive
##ui
conclusion
funeral
confidence
releases
scholar
##sson
Treaty
stress
mood
##sm
Mac
residing
Action
Fund
##ship
animated
fitted
##kar
defending
voting
tend
##berry
answers
believes
##ci
helps
Aaron
##tis
themes
##lay
populations
Players
stroke
Trinity
electoral
paint
abroad
charity
keys
Fair
##pes
interrupted
participants
murdered
Days
supporters
##ab
expert
borders
mate
##llo
solar
architectural
tension
##bling
Parish
tape
operator
Cultural
Clinton
indicates
publisher
ordinary
sugar
arrive
rifle
acoustic
##uring
assets
##shire
SS
sufficient
options
HMS
Classic
bars
rebuilt
governments
Beijing
reporter
screamed
Abbey
crying
mechanical
instantly
communications
Political
cemetery
Cameron
Stop
representatives
USS
texts
mathematics
innings
civilian
Serbia
##hill
practical
patterns
dust
Faculty
debt
##end
##cus
junction
suppose
experimental
Computer
Food
wrist
abuse
dealing
bigger
cap
principle
##pin
Muhammad
Fleet
Collection
attempting
dismissed
##burn
regime
Herbert
##ua
shadows
1883
Eve
Lanka
1878
Performance
fictional
##lock
Noah
Run
Voivodeship
exercise
broadcasting
##fer
RAF
Magic
Bangladesh
suitable
##low
##del
styles
toured
Code
identical
links
insisted
110
flash
Model
slave
Derek
Rev
fairly
Greater
sole
##lands
connecting
zero
bench
##ome
switched
Fall
Owen
yours
Electric
shocked
convention
##bra
climb
memorial
swept
Racing
decides
belong
##nk
parliamentary
##und
ages
proof
##dan
delivery
1860
##ów
sad
publicly
leaning
Archbishop
dirt
##ose
categories
1876
burn
##bing
requested
Guinea
Historical
rhythm
relation
##heim
ye
pursue
merchant
##mes
lists
continuous
frowned
colored
tool
gods
involves
Duncan
photographs
Cricket
slight
Gregory
atmosphere
wider
Cook
##tar
essential
Being
FA
emperor
wealthy
nights
##bar
licensed
Hawaii
viewers
Language
load
nearest
milk
kilometers
platforms
##ys
territories
Rogers
sheet
Rangers
contested
##lation
isolated
assisted
swallowed
Small
Contemporary
Technical
Edwards
express
Volume
endemic
##ei
tightly
Whatever
indigenous
Colombia
##ulation
hp
characterized
##ida
Nigeria
Professional
duo
Soccer
slaves
Farm
smart
Attorney
Attendance
Common
salt
##vin
tribes
nod
sentenced
bid
sample
Drive
switch
instant
21st
Cuba
drunk
Alaska
proud
awareness
hitting
sessions
Thai
locally
elsewhere
Dragon
gentle
touching
##lee
Springs
Universal
Latino
spin
1871
Chart
recalled
Type
pointing
##ii
lowest
##ser
grandmother
Adelaide
Jacques
spotted
Buffalo
restoration
Son
Joan
farmers
Lily
1879
lucky
##dal
luck
eldest
##rant
Market
drummer
deployed
warned
prince
sing
amazing
sailed
##oon
1875
Primary
traveling
Masters
Sara
cattle
Trail
gang
Further
desert
relocated
##tch
##ord
Flight
illness
Munich
ninth
repair
Singles
##lated
Tyler
tossed
boots
Work
sized
earning
shoved
magazines
housed
dam
researchers
Former
spun
premiere
spaces
organised
wealth
crimes
devoted
stones
Urban
automatic
hop
affect
outstanding
tanks
mechanism
Muslims
Ms
shots
argue
Jeremy
connections
Armenian
increases
rubbed
1867
retail
gear
Pan
bonus
jurisdiction
weird
concerning
whisper
##gal
Microsoft
tenure
hills
www
Gmina
porch
files
reportedly
venture
Storm
##ence
Nature
killer
panic
fate
Secret
Wang
scream
drivers
belongs
Chamber
clan
monument
mixing
Peru
bet
Riley
Friends
Isaac
submarine
1877
130
judges
harm
ranging
affair
prepare
pupils
householder
Policy
decorated
Nation
slammed
activist
implemented
Room
qualify
Publishing
establishing
Baptist
touring
subsidiary
##nal
legend
1872
laughter
PC
Athens
settlers
ties
dual
dear
Draft
strategic
Ivan
reveal
closest
dominant
Ah
##ult
Denver
bond
boundaries
drafted
tables
##TV
eyed
Edition
##ena
1868
belonging
1874
Industrial
cream
Ridge
Hindu
scholarship
Ma
opens
initiated
##ith
yelled
compound
random
Throughout
grades
physics
sank
grows
exclusively
settle
Saints
brings
Amsterdam
Make
Hart
walks
battery
violin
##born
explanation
##ware
1873
##har
provinces
thrust
exclusive
sculpture
shops
##fire
VI
constitution
Barcelona
monster
Devon
Jefferson
Sullivan
bow
##din
desperate
##ć
Julie
##mon
##ising
terminus
Jesse
abilities
golf
##ple
##via
##away
Raymond
measured
jury
firing
revenue
suburb
Bulgarian
1866
##cha
timber
Things
##weight
Morning
spots
Alberta
Data
explains
Kyle
friendship
raw
tube
demonstrated
aboard
immigrants
reply
breathe
Manager
ease
##ban
##dia
Diocese
##vy
##ía
pit
ongoing
##lie
Gilbert
Costa
1940s
Report
voters
cloud
traditions
##MS
gallery
Jennifer
swung
Broadcasting
Does
diverse
reveals
arriving
initiative
##ani
Give
Allied
Pat
Outstanding
monastery
blind
Currently
##war
bloody
stopping
focuses
managing
Florence
Harvey
creatures
900
breast
internet
Artillery
purple
##mate
alliance
excited
fee
Brisbane
lifetime
Private
##aw
##nis
##gue
##ika
phrase
regulations
reflected
manufactured
conventional
pleased
client
##ix
##ncy
Pedro
reduction
##con
welcome
jail
comfort
Iranian
Norfolk
Dakota
##tein
evolution
everywhere
Initially
sensitive
Olivia
Oscar
implementation
sits
stolen
demands
slide
grandson
##ich
merger
##mic
Spirit
##°
ticket
root
difficulty
Nevada
##als
lined
Dylan
Original
Call
biological
EU
dramatic
##hn
Operations
treaty
gap
##list
Am
Romanized
moral
Butler
perspective
Furthermore
Manuel
absolutely
unsuccessful
disaster
dispute
preparation
tested
discover
##ach
shield
squeezed
brushed
battalion
Arnold
##ras
superior
treat
clinical
##so
Apple
Syria
Cincinnati
package
flights
editions
Leader
minority
wonderful
hang
Pop
Philippine
telephone
bell
honorary
##mar
balls
Democrat
dirty
thereafter
collapsed
Inside
slip
wrestling
##ín
listened
regard
bowl
None
Sport
completing
trapped
##view
copper
Wallace
Honor
blame
Peninsula
##ert
##oy
Anglo
bearing
simultaneously
honest
##ias
Mix
Got
speaker
voiced
impressed
prices
error
1869
##feld
trials
Nine
Industry
substitute
Municipal
departed
slept
##ama
Junction
Socialist
flower
dropping
comment
fantasy
##ress
arrangements
travelled
furniture
fist
relieved
##tics
Leonard
linear
earn
expand
Soul
Plan
Leeds
Sierra
accessible
innocent
Winner
Fighter
Range
winds
vertical
Pictures
101
charter
cooperation
prisoner
interviews
recognised
sung
manufacturer
exposure
submitted
Mars
leaf
gauge
screaming
likes
eligible
##ac
gathering
columns
##dra
belly
UN
maps
messages
speakers
##ants
garage
unincorporated
Number
Watson
sixteen
lots
beaten
Could
Municipality
##ano
Horse
talks
Drake
scores
Venice
genetic
##mal
##ère
Cold
Jose
nurse
traditionally
##bus
Territory
Key
Nancy
##win
thumb
São
index
dependent
carries
controls
Comics
coalition
physician
referring
Ruth
Based
restricted
inherited
internationally
stretch
THE
plates
margin
Holland
knock
significance
valuable
Kenya
carved
emotion
conservation
municipalities
overseas
resumed
Finance
graduation
blinked
temperatures
constantly
productions
scientist
ghost
cuts
permitted
##ches
firmly
##bert
patrol
##yo
Croatian
attacking
1850
portrait
promoting
sink
conversion
##kov
locomotives
Guide
##val
nephew
relevant
Marc
drum
originated
Chair
visits
dragged
Price
favour
corridor
properly
respective
Caroline
reporting
inaugural
1848
industries
##ching
edges
Christianity
Maurice
Trent
Economics
carrier
Reed
##gon
tribute
Pradesh
##ale
extend
attitude
Yale
##lu
settlements
glasses
taxes
targets
##ids
quarters
##ological
connect
hence
metre
collapse
underneath
banned
Future
clients
alternate
explosion
kinds
Commons
hungry
dragon
Chapel
Buddhist
lover
depression
pulls
##ges
##uk
origins
computers
crosses
kissing
assume
emphasis
lighting
##ites
personally
crashed
beam
touchdown
lane
comparison
##mont
Hitler
##las
execution
##ene
acre
sum
Pearl
ray
##point
essentially
worker
convicted
tear
Clay
recovery
Literature
Unfortunately
##row
partial
Petersburg
Bulgaria
coaching
evolved
reception
enters
narrowed
elevator
therapy
defended
pairs
##lam
breaks
Bennett
Uncle
cylinder
##ison
passion
bases
Actor
cancelled
battles
extensively
oxygen
Ancient
specialized
negotiations
##rat
acquisition
convince
interpretation
##00
photos
aspect
colleges
Artist
keeps
##wing
Croatia
##ona
Hughes
Otto
comments
##du
Ph
Sweet
adventure
describing
Student
Shakespeare
scattered
objective
Aviation
Phillips
Fourth
athletes
##hal
##tered
Guitar
intensity
née
dining
curve
Obama
topics
legislative
Mill
Cruz
##ars
Members
recipient
Derby
inspiration
corresponding
fed
YouTube
coins
pressing
intent
Karen
cinema
Delta
destination
shorter
Christians
imagined
canal
Newcastle
Shah
Adrian
super
Males
160
liberal
lord
bat
supplied
Claude
meal
worship
##atic
Han
wire
°F
##tha
punishment
thirteen
fighters
##ibility
1859
Ball
gardens
##ari
Ottawa
pole
indicating
Twenty
Higher
Bass
Ivy
farming
##urs
certified
Saudi
plenty
##ces
restaurants
Representative
Miles
payment
##inger
##rit
Confederate
festivals
references
##ić
Mario
PhD
playoffs
witness
rice
mask
saving
opponents
enforcement
automatically
relegated
##oe
radar
whenever
Financial
imperial
uncredited
influences
Abraham
skull
Guardian
Haven
Bengal
impressive
input
mixture
Warsaw
altitude
distinction
1857
collective
Annie
##ean
##bal
directions
Flying
##nic
faded
##ella
contributing
##ó
employee
##lum
##yl
ruler
oriented
conductor
focusing
##die
Giants
Mills
mines
Deep
curled
Jessica
guitars
Louise
procedure
Machine
failing
attendance
Nepal
Brad
Liam
tourist
exhibited
Sophie
depicted
Shaw
Chuck
##can
expecting
challenges
##nda
equally
resignation
##logical
Tigers
loop
pitched
outdoor
reviewed
hopes
True
temporarily
Borough
torn
jerked
collect
Berkeley
Independence
cotton
retreat
campaigns
participating
Intelligence
Heaven
##ked
situations
borough
Democrats
Harbor
##len
Liga
serial
circles
fourteen
##lot
seized
filling
departments
finance
absolute
Roland
Nate
floors
raced
struggling
deliver
protests
##tel
Exchange
efficient
experiments
##dar
faint
3D
binding
Lions
lightly
skill
proteins
difficulties
##cal
monthly
camps
flood
loves
Amanda
Commerce
##oid
##lies
elementary
##tre
organic
##stein
##ph
receives
Tech
enormous
distinctive
Joint
experiment
Circuit
citizen
##hy
shelter
ideal
practically
formula
addressed
Foster
Productions
##ax
variable
punk
Voice
fastest
concentrated
##oma
##yer
stored
surrender
vary
Sergeant
Wells
ward
Wait
##ven
playoff
reducing
cavalry
##dle
Venezuela
tissue
amounts
sweat
##we
Non
##nik
beetle
##bu
##tu
Jared
Hunt
##₂
fat
Sultan
Living
Circle
Secondary
Suddenly
reverse
##min
Travel
##bin
Lebanon
##mas
virus
Wind
dissolved
enrolled
holiday
Keep
helicopter
Clarke
constitutional
technologies
doubles
instructions
##ace
Azerbaijan
##ill
occasional
frozen
trick
wiped
writings
Shanghai
preparing
challenged
mainstream
summit
180
##arian
##rating
designation
##ada
revenge
filming
tightened
Miguel
Montana
reflect
celebration
bitch
flashed
signals
rounded
peoples
##tation
renowned
Google
characteristic
Campaign
sliding
##rman
usage
Record
Using
woke
solutions
holes
theories
logo
Protestant
relaxed
brow
nickname
Reading
marble
##tro
symptoms
Overall
capita
##ila
outbreak
revolution
deemed
Principal
Hannah
approaches
inducted
Wellington
vulnerable
Environmental
Drama
incumbent
Dame
1854
travels
samples
accurate
physically
Sony
Nashville
##sville
##lic
##og
Producer
Lucky
tough
Stanford
resort
repeatedly
eyebrows
Far
choir
commenced
##ep
##ridge
rage
swing
sequel
heir
buses
ad
Grove
##late
##rick
updated
##SA
Delaware
##fa
Athletics
warmth
Off
excitement
verse
Protection
Villa
corruption
intellectual
Jenny
##lyn
mystery
prayer
healthy
##ologist
Bear
lab
Ernest
Remix
register
basement
Montgomery
consistent
tier
1855
Preston
Brooks
##maker
vocalist
laboratory
delayed
wheels
rope
bachelor
pitcher
Block
Nevertheless
suspect
efficiency
Nebraska
siege
FBI
planted
##AC
Newton
breeding
##ain
eighteen
Argentine
encounter
servant
1858
elder
Shadow
Episode
fabric
doctors
survival
removal
chemistry
volunteers
Kane
variant
arrives
Eagle
Left
##fe
Jo
divorce
##ret
yesterday
Bryan
handling
diseases
customer
Sheriff
Tiger
Harper
##oi
resting
Linda
Sheffield
gasped
sexy
economics
alien
tale
footage
Liberty
yeah
fundamental
Ground
flames
Actress
photographer
Maggie
Additional
joke
custom
Survey
Abu
silk
consumption
Ellis
bread
##uous
engagement
puts
Dog
##hr
poured
guilt
CDP
boxes
hardware
clenched
##cio
stem
arena
extending
##com
examination
Steel
encountered
revised
140
picking
Car
hasn
Minor
pride
Roosevelt
boards
##mia
blocked
curious
drag
narrative
brigade
Prefecture
mysterious
namely
connects
Devil
historians
CHAPTER
quit
installation
Golf
empire
elevated
##eo
releasing
Bond
##uri
harsh
ban
##BA
contracts
cloth
presents
stake
chorus
##eau
swear
##mp
allies
generations
Motor
meter
pen
warrior
veteran
##EC
comprehensive
missile
interaction
instruction
Renaissance
rested
Dale
fix
fluid
les
investigate
loaded
widow
exhibit
artificial
select
rushing
tasks
signature
nowhere
Engineer
feared
Prague
bother
extinct
gates
Bird
climbing
heels
striking
artwork
hunt
awake
##hin
Formula
thereby
commitment
imprisoned
Beyond
##MA
transformed
Agriculture
Low
Movie
radical
complicated
Yellow
Auckland
mansion
tenth
Trevor
predecessor
##eer
disbanded
sucked
circular
witch
gaining
lean
Behind
illustrated
rang
celebrate
bike
consist
framework
##cent
Shane
owns
350
comprises
collaborated
colleagues
##cast
engage
fewer
##ave
1856
observation
diplomatic
legislature
improvements
Interstate
craft
MTV
martial
administered
jet
approaching
permanently
attraction
manuscript
numbered
Happy
Andrea
shallow
Gothic
Anti
##bad
improvement
trace
preserve
regardless
rode
dies
achievement
maintaining
Hamburg
spine
##air
flowing
encourage
widened
posts
##bound
125
Southeast
Santiago
##bles
impression
receiver
Single
closure
##unt
communist
honors
Northwest
105
##ulated
cared
un
hug
magnetic
seeds
topic
perceived
prey
prevented
Marvel
Eight
Michel
Transportation
rings
Gate
##gne
Byzantine
accommodate
floating
##dor
equation
ministry
##ito
##gled
Rules
earthquake
revealing
Brother
Celtic
blew
chairs
Panama
Leon
attractive
descendants
Care
Ambassador
tours
breathed
threatening
##cho
smiles
Lt
Beginning
##iness
fake
assists
fame
strings
Mobile
Liu
parks
http
1852
brush
Aunt
bullet
consciousness
##sta
##ther
consequences
gather
dug
1851
bridges
Doug
##sion
Artists
ignore
Carol
brilliant
radiation
temples
basin
clouds
##cted
Stevens
spite
soap
consumer
Damn
Snow
recruited
##craft
Advanced
tournaments
Quinn
undergraduate
questioned
Palmer
Annual
Others
feeding
Spider
printing
##orn
cameras
functional
Chester
readers
Alpha
universal
Faith
Brandon
François
authored
Ring
el
aims
athletic
possessed
Vermont
programmes
##uck
bore
Fisher
statements
shed
saxophone
neighboring
pronounced
barrel
bags
##dge
organisations
pilots
casualties
Kenneth
##brook
silently
Malcolm
span
Essex
anchor
##hl
virtual
lessons
Henri
Trump
Page
pile
locomotive
wounds
uncomfortable
sustained
Diana
Eagles
##pi
2000s
documented
##bel
Cassie
delay
kisses
##ines
variation
##ag
growled
##mark
##ways
Leslie
studios
Friedrich
aunt
actively
armor
eaten
historically
Better
purse
honey
ratings
##ée
naturally
1840
peer
Kenny
Cardinal
database
Looking
runners
handsome
Double
PA
##boat
##sted
protecting
##jan
Diamond
concepts
interface
##aki
Watch
Article
Columbus
dialogue
pause
##rio
extends
blanket
pulse
1853
affiliate
ladies
Ronald
counted
kills
demons
##zation
Airlines
Marco
Cat
companion
mere
Yugoslavia
Forum
Allan
pioneer
Competition
Methodist
patent
nobody
Stockholm
##ien
regulation
##ois
accomplished
##itive
washed
sake
Vladimir
crops
prestigious
humor
Sally
labour
tributary
trap
altered
examined
Mumbai
bombing
Ash
noble
suspension
ruins
##bank
spare
displays
guided
dimensional
Iraqi
##hon
sciences
Franz
relating
fence
followers
Palestine
invented
proceeded
Batman
Bradley
##yard
##ova
crystal
Kerala
##ima
shipping
handled
Want
abolished
Drew
##tter
Powell
Half
##table
##cker
exhibitions
Were
assignment
assured
##rine
Indonesian
Grammy
acknowledged
Kylie
coaches
structural
clearing
stationed
Say
Total
Rail
besides
glow
threats
afford
Tree
Musical
##pp
elite
centered
explore
Engineers
Stakes
Hello
tourism
severely
assessment
##tly
crack
politicians
##rrow
sheets
volunteer
##borough
##hold
announcement
recover
contribute
lungs
##ille
mainland
presentation
Johann
Writing
1849
##bird
Study
Boulevard
coached
fail
airline
Congo
Plus
Syrian
introduce
ridge
Casey
manages
##fi
searched
Support
succession
progressive
coup
cultures
##lessly
sensation
Cork
Elena
Sofia
Philosophy
mini
trunk
academy
Mass
Liz
practiced
Reid
##ule
satisfied
experts
Wilhelm
Woods
invitation
Angels
calendar
joy
Sr
Dam
packed
##uan
bastard
Workers
broadcasts
logic
cooking
backward
##ack
Chen
creates
enzyme
##xi
Davies
aviation
VII
Conservation
fucking
Knights
##kan
requiring
hectares
wars
ate
##box
Mind
desired
oak
absorbed
Really
Vietnamese
Paulo
athlete
##car
##eth
Talk
Wu
##cks
survivors
Yang
Joel
Almost
Holmes
Armed
Joshua
priests
discontinued
##sey
blond
Rolling
suggesting
CA
clay
exterior
Scientific
##sive
Giovanni
Hi
farther
contents
Winners
animation
neutral
mall
Notes
layers
professionals
Armstrong
Against
Piano
involve
monitor
angel
parked
bears
seated
feat
beliefs
##kers
Version
suffer
##ceae
guidance
##eur
honored
raid
alarm
Glen
Ellen
Jamaica
trio
enabled
##ils
procedures
##hus
moderate
upstairs
##ses
torture
Georgian
rebellion
Fernando
Nice
##are
Aires
Campus
beast
##hing
1847
##FA
Isle
##logist
Princeton
cathedral
Oakland
Solomon
##tto
Milwaukee
upcoming
midfielder
Neither
sacred
Eyes
appreciate
Brunswick
secrets
Rice
Somerset
Chancellor
Curtis
##gel
Rich
separation
grid
##los
##bon
urge
##ees
##ree
freight
towers
psychology
requirement
dollar
##fall
##sman
exile
tomb
Salt
Stefan
Buenos
Revival
Porter
tender
diesel
chocolate
Eugene
Legion
Laboratory
sheep
arched
hospitals
orbit
Full
##hall
drinks
ripped
##RS
tense
Hank
leagues
##nberg
PlayStation
fool
Punjab
relatives
Comedy
sur
1846
Tonight
Sox
##if
Rabbi
org
speaks
institute
defender
painful
wishes
Weekly
literacy
portions
snake
item
deals
##tum
autumn
sharply
reforms
thighs
prototype
##ition
argues
disorder
Physics
terror
provisions
refugees
predominantly
independently
march
##graphy
Arabia
Andrews
Bus
Money
drops
##zar
pistol
matrix
revolutionary
##ust
Starting
##ptic
Oak
Monica
##ides
servants
##hed
archaeological
divorced
rocket
enjoying
fires
##nel
assembled
qualification
retiring
##fied
Distinguished
handful
infection
Durham
##itz
fortune
renewed
Chelsea
##sley
curved
gesture
retain
exhausted
##ifying
Perth
jumping
Palestinian
Simpson
colonies
steal
##chy
corners
Finn
arguing
Martha
##var
Betty
emerging
Heights
Hindi
Manila
pianist
founders
regret
Napoleon
elbow
overhead
bold
praise
humanity
##ori
Revolutionary
##ere
fur
##ole
Ashley
Official
##rm
lovely
Architecture
##sch
Baronet
virtually
##OS
descended
immigration
##das
##kes
Holly
Wednesday
maintains
theatrical
Evan
Gardens
citing
##gia
segments
Bailey
Ghost
##city
governing
graphics
##ined
privately
potentially
transformation
Crystal
Cabinet
sacrifice
hesitated
mud
Apollo
Desert
bin
victories
Editor
Railways
Web
Case
tourists
Brussels
Franco
compiled
topped
Gene
engineers
commentary
egg
escort
nerve
arch
necessarily
frustration
Michelle
democracy
genes
Facebook
halfway
##ient
102
flipped
Won
##mit
NASA
Lynn
Provincial
ambassador
Inspector
glared
Change
McDonald
developments
tucked
noting
Gibson
circulation
dubbed
armies
resource
Headquarters
##iest
Mia
Albanian
Oil
Albums
excuse
intervention
Grande
Hugo
integration
civilians
depends
reserves
Dee
compositions
identification
restrictions
quarterback
Miranda
Universe
favourite
ranges
hint
loyal
Op
entity
Manual
quoted
dealt
specialist
Zhang
download
Westminster
Rebecca
streams
Anglican
variations
Mine
detective
Films
reserved
##oke
##key
sailing
##gger
expanding
recall
discovers
particles
behaviour
Gavin
blank
permit
Java
Fraser
Pass
##non
##TA
panels
statistics
notion
courage
dare
venues
##roy
Box
Newport
travelling
Thursday
warriors
Glenn
criteria
360
mutual
restore
varied
bitter
Katherine
##lant
ritual
bits
##à
Henderson
trips
Richardson
Detective
curse
psychological
Il
midnight
streak
facts
Dawn
Indies
Edmund
roster
Gen
##nation
1830
congregation
shaft
##ically
##mination
Indianapolis
Sussex
loving
##bit
sounding
horrible
Continental
Griffin
advised
magical
millions
##date
1845
Safety
lifting
determination
valid
dialect
Penn
Know
triple
avoided
dancer
judgment
sixty
farmer
lakes
blast
aggressive
Abby
tag
chains
inscription
##nn
conducting
Scout
buying
##wich
spreading
##OC
array
hurried
Environment
improving
prompted
fierce
Taking
Away
tune
pissed
Bull
catching
##ying
eyebrow
metropolitan
terrain
##rel
Lodge
manufacturers
creator
##etic
happiness
ports
##ners
Relations
fortress
targeted
##ST
allegedly
blues
##osa
Bosnia
##dom
burial
similarly
stranger
pursued
symbols
rebels
reflection
routine
traced
indoor
eventual
##ska
##ão
##una
MD
##phone
oh
grants
Reynolds
rid
operators
##nus
Joey
vital
siblings
keyboard
br
removing
societies
drives
solely
princess
lighter
Various
Cavalry
believing
SC
underwent
relay
smelled
syndrome
welfare
authorized
seemingly
Hard
chicken
##rina
Ages
Bo
democratic
barn
Eye
shorts
##coming
##hand
disappointed
unexpected
centres
Exhibition
Stories
Site
banking
accidentally
Agent
conjunction
André
Chloe
resist
width
Queens
provision
##art
Melissa
Honorary
Del
prefer
abruptly
duration
##vis
Glass
enlisted
##ado
discipline
Sisters
carriage
##ctor
##sburg
Lancashire
log
fuck
##iz
closet
collecting
holy
rape
trusted
cleaning
inhabited
Rocky
104
editorial
##yu
##ju
succeed
strict
Cuban
##iya
Bronze
outcome
##ifies
##set
corps
Hero
barrier
Kumar
groaned
Nina
Burton
enable
stability
Milton
knots
##ination
slavery
##borg
curriculum
trailer
warfare
Dante
Edgar
revival
Copenhagen
define
advocate
Garrett
Luther
overcome
pipe
750
construct
Scotia
kings
flooding
##hard
Ferdinand
Felix
forgot
Fish
Kurt
elaborate
##BC
graphic
gripped
colonel
Sophia
Advisory
Self
##uff
##lio
monitoring
seal
senses
rises
peaceful
journals
1837
checking
legendary
Ghana
##power
ammunition
Rosa
Richards
nineteenth
ferry
aggregate
Troy
inter
##wall
Triple
steep
tent
Cyprus
1844
##woman
commanding
farms
doi
navy
specified
na
cricketer
transported
Think
comprising
grateful
solve
##core
beings
clerk
grain
vector
discrimination
##TC
Katie
reasonable
drawings
veins
consideration
Monroe
repeat
breed
dried
witnessed
ordained
Current
spirits
remarkable
consultant
urged
Remember
anime
singers
phenomenon
Rhode
Carlo
demanding
findings
manual
varying
Fellowship
generate
safely
heated
withdrawn
##ao
headquartered
##zon
##lav
##ency
Col
Memphis
imposed
rivals
Planet
healing
##hs
ensemble
Warriors
##bone
cult
Frankfurt
##HL
diversity
Gerald
intermediate
##izes
reactions
Sister
##ously
##lica
quantum
awkward
mentions
pursuit
##ography
varies
profession
molecular
consequence
lectures
cracked
103
slowed
##tsu
cheese
upgraded
suite
substance
Kingston
1800
Idaho
Theory
##een
ain
Carson
Molly
##OR
configuration
Whitney
reads
audiences
##tie
Geneva
Outside
##nen
##had
transit
volleyball
Randy
Chad
rubber
motorcycle
respected
eager
Level
coin
##lets
neighbouring
##wski
confident
##cious
poll
uncertain
punch
thesis
Tucker
IATA
Alec
##ographic
##law
1841
desperately
1812
Lithuania
accent
Cox
lightning
skirt
##load
Burns
Dynasty
##ug
chapters
Working
dense
Morocco
##kins
casting
Set
activated
oral
Brien
horn
HIV
dawn
stumbled
altar
tore
considerably
Nicole
interchange
registration
biography
Hull
Stan
bulk
consent
Pierce
##ER
Fifth
marched
terrorist
##piece
##itt
Presidential
Heather
staged
Plant
relegation
sporting
joins
##ced
Pakistani
dynamic
Heat
##lf
ourselves
Except
Elliott
nationally
goddess
investors
Burke
Jackie
##ā
##RA
Tristan
Associate
Tuesday
scope
Near
bunch
##abad
##ben
sunlight
##aire
manga
Willie
trucks
boarding
Lion
lawsuit
Learning
Der
pounding
awful
##mine
IT
Legend
romance
Serie
AC
gut
precious
Robertson
hometown
realm
Guards
Tag
batting
##vre
halt
conscious
1838
acquire
collar
##gg
##ops
Herald
nationwide
citizenship
Aircraft
decrease
em
Fiction
Female
corporation
Located
##ip
fights
unconscious
Tampa
Poetry
lobby
Malta
##sar
##bie
layout
Tate
reader
stained
##bre
##rst
##ulate
loudly
Eva
Cohen
exploded
Merit
Maya
##rable
Rovers
##IC
Morrison
Should
vinyl
##mie
onwards
##gie
vicinity
Wildlife
probability
Mar
Barnes
##ook
spinning
Moses
##vie
Surrey
Planning
conferences
protective
Plaza
deny
Canterbury
manor
Estate
tilted
comics
IBM
destroying
server
Dorothy
##horn
Oslo
lesser
heaven
Marshal
scales
strikes
##ath
firms
attract
##BS
controlling
Bradford
southeastern
Amazon
Travis
Janet
governed
1842
Train
Holden
bleeding
gifts
rent
1839
palms
##ū
judicial
Ho
Finals
conflicts
unlikely
draws
##cies
compensation
adds
elderly
Anton
lasting
Nintendo
codes
ministers
pot
associations
capabilities
##cht
libraries
##sie
chances
performers
runway
##af
##nder
Mid
Vocals
##uch
##eon
interpreted
priority
Uganda
ruined
Mathematics
cook
AFL
Lutheran
AIDS
Capitol
chase
axis
Moreover
María
Saxon
storyline
##ffed
Tears
Kid
cent
colours
Sex
##long
pm
blonde
Edwin
CE
diocese
##ents
##boy
Inn
##ller
Saskatchewan
##kh
stepping
Windsor
##oka
##eri
Xavier
Resources
1843
##top
##rad
##lls
Testament
poorly
1836
drifted
slope
CIA
remix
Lords
mature
hosting
diamond
beds
##ncies
luxury
trigger
##lier
preliminary
hybrid
journalists
Enterprise
proven
expelled
insects
Beautiful
lifestyle
vanished
##ake
##ander
matching
surfaces
Dominican
Kids
referendum
Orlando
Truth
Sandy
privacy
Calgary
Speaker
sts
Nobody
shifting
##gers
Roll
Armenia
Hand
##ES
106
##ont
Guild
larvae
Stock
flame
gravity
enhanced
Marion
surely
##tering
Tales
algorithm
Emmy
darker
VIII
##lash
hamlet
deliberately
occurring
choices
Gage
fees
settling
ridiculous
##ela
Sons
cop
custody
##ID
proclaimed
Cardinals
##pm
Metal
Ana
1835
clue
Cardiff
riders
observations
MA
sometime
##och
performer
intact
Points
allegations
rotation
Tennis
tenor
Directors
##ats
Transit
thigh
Complex
##works
twentieth
Factory
doctrine
Daddy
##ished
pretend
Winston
cigarette
##IA
specimens
hydrogen
smoking
mathematical
arguments
openly
developer
##iro
fists
somebody
##san
Standing
Caleb
intelligent
Stay
Interior
echoed
Valentine
varieties
Brady
cluster
Ever
voyage
##of
deposits
ultimate
Hayes
horizontal
proximity
##ás
estates
exploration
NATO
Classical
##most
bills
condemned
1832
hunger
##ato
planes
deserve
offense
sequences
rendered
acceptance
##ony
manufacture
Plymouth
innovative
predicted
##RC
Fantasy
##une
supporter
absent
Picture
bassist
rescued
##MC
Ahmed
Monte
##sts
##rius
insane
novelist
##és
agrees
Antarctic
Lancaster
Hopkins
calculated
startled
##star
tribal
Amendment
##hoe
invisible
patron
deer
Walk
tracking
Lyon
tickets
##ED
philosopher
compounds
chuckled
##wi
pound
loyalty
Academic
petition
refuses
marking
Mercury
northeastern
dimensions
scandal
Canyon
patch
publish
##oning
Peak
minds
##boro
Presbyterian
Hardy
theoretical
magnitude
bombs
cage
##ders
##kai
measuring
explaining
avoiding
touchdowns
Card
theology
##ured
Popular
export
suspicious
Probably
photograph
Lou
Parks
Arms
compact
Apparently
excess
Banks
lied
stunned
territorial
Filipino
spectrum
learns
wash
imprisonment
ugly
##rose
Albany
Erik
sends
##hara
##rid
consumed
##gling
Belgrade
Da
opposing
Magnus
footsteps
glowing
delicate
Alexandria
Ludwig
gorgeous
Bros
Index
##PA
customs
preservation
bonds
##mond
environments
##nto
instructed
parted
adoption
locality
workshops
goalkeeper
##rik
##uma
Brighton
Slovenia
##ulating
##tical
towel
hugged
stripped
Bears
upright
Wagner
##aux
secretly
Adventures
nest
Course
Lauren
Boeing
Abdul
Lakes
450
##cu
USSR
caps
Chan
##nna
conceived
Actually
Belfast
Lithuanian
concentrate
possess
militia
pine
protagonist
Helena
##PS
##band
Belle
Clara
Reform
currency
pregnancy
1500
##rim
Isabella
hull
Name
trend
journalism
diet
##mel
Recording
acclaimed
Tang
Jace
steering
vacant
suggestion
costume
laser
##š
##ink
##pan
##vić
integral
achievements
wise
classroom
unions
southwestern
##uer
Garcia
toss
Tara
Large
##tate
evident
responsibilities
populated
satisfaction
##bia
casual
Ecuador
##ght
arose
##ović
Cornwall
embrace
refuse
Heavyweight
XI
Eden
activists
##uation
biology
##shan
fraud
Fuck
matched
legacy
Rivers
missionary
extraordinary
Didn
holder
wickets
crucial
Writers
Hurricane
Iceland
gross
trumpet
accordance
hurry
flooded
doctorate
Albania
##yi
united
deceased
jealous
grief
flute
portraits
##а
pleasant
Founded
Face
crowned
Raja
advisor
Salem
##ec
Achievement
admission
freely
minimal
Sudan
developers
estimate
disabled
##lane
downstairs
Bruno
##pus
pinyin
##ude
lecture
deadly
underlying
optical
witnesses
Combat
Julius
tapped
variants
##like
Colonial
Critics
Similarly
mouse
voltage
sculptor
Concert
salary
Frances
##ground
hook
premises
Software
instructor
nominee
##ited
fog
slopes
##zu
vegetation
sail
##rch
Body
Apart
atop
View
utility
ribs
cab
migration
##wyn
bounded
2019
pillow
trails
##ub
Halifax
shade
Rush
##lah
##dian
Notre
interviewed
Alexandra
Springfield
Indeed
rubbing
dozens
amusement
legally
##lers
Jill
Cinema
ignoring
Choice
##ures
pockets
##nell
laying
Blair
tackles
separately
##teen
Criminal
performs
theorem
Communication
suburbs
##iel
competitors
rows
##hai
Manitoba
Eleanor
interactions
nominations
assassination
##dis
Edmonton
diving
##dine
essay
##tas
AFC
Edge
directing
imagination
sunk
implement
Theodore
trembling
sealed
##rock
Nobel
##ancy
##dorf
##chen
genuine
apartments
Nicolas
AA
Bach
Globe
Store
220
##10
Rochester
##ño
alert
107
Beck
##nin
Naples
Basin
Crawford
fears
Tracy
##hen
disk
##pped
seventeen
Lead
backup
reconstruction
##lines
terrified
sleeve
nicknamed
popped
##making
##ern
Holiday
Gospel
ibn
##ime
convert
divine
resolved
##quet
ski
realizing
##RT
Legislature
reservoir
Rain
sinking
rainfall
elimination
challenging
tobacco
##outs
Given
smallest
Commercial
pin
rebel
comedian
exchanged
airing
dish
Salvador
promising
##wl
relax
presenter
toll
aerial
##eh
Fletcher
brass
disappear
zones
adjusted
contacts
##lk
sensed
Walt
mild
toes
flies
shame
considers
wildlife
Hanna
Arsenal
Ladies
naming
##ishing
anxiety
discussions
cute
undertaken
Cash
strain
Wyoming
dishes
precise
Angela
##ided
hostile
twins
115
Built
##pel
Online
tactics
Newman
##bourne
unclear
repairs
embarrassed
listing
tugged
Vale
##gin
Meredith
bout
##cle
velocity
tips
froze
evaluation
demonstrate
##card
criticised
Nash
lineup
Rao
monks
bacteria
lease
##lish
frightened
den
revived
finale
##rance
flee
Letters
decreased
##oh
Sounds
wrap
Sharon
incidents
renovated
everybody
stole
Bath
boxing
1815
withdraw
backs
interim
react
murders
Rhodes
Copa
framed
flown
Estonia
Heavy
explored
##rra
##GA
##ali
Istanbul
1834
##rite
##aging
##ues
Episcopal
arc
orientation
Maxwell
infected
##rot
BCE
Brook
grasp
Roberto
Excellence
108
withdrawal
Marines
rider
Lo
##sin
##run
Subsequently
garrison
hurricane
facade
Prussia
crushed
enterprise
##mber
Twitter
Generation
Physical
Sugar
editing
communicate
Ellie
##hurst
Ernst
wagon
promotional
conquest
Parliamentary
courtyard
lawyers
Superman
email
Prussian
lately
lecturer
Singer
Majesty
Paradise
sooner
Heath
slot
curves
convoy
##vian
induced
synonym
breeze
##plane
##ox
peered
Coalition
##hia
odds
##esh
##lina
Tomorrow
Nadu
##ico
##rah
damp
autonomous
console
Victory
counts
Luxembourg
intimate
Archived
Carroll
spy
Zero
habit
Always
faction
teenager
Johnston
chaos
ruin
commerce
blog
##shed
##the
reliable
Word
Yu
Norton
parade
Catholics
damned
##iling
surgeon
##tia
Allison
Jonas
remarked
##ès
idiot
Making
proposals
Industries
strategies
artifacts
batteries
reward
##vers
Agricultural
distinguish
lengths
Jeffrey
Progressive
kicking
Patricia
##gio
ballot
##ios
skilled
##gation
Colt
limestone
##AS
peninsula
##itis
LA
hotels
shapes
Crime
depicting
northwestern
HD
silly
Das
##²
##ws
##ash
##matic
thermal
Has
forgive
surrendered
Palm
Nacional
drank
haired
Mercedes
##foot
loading
Timothy
##roll
mechanisms
traces
digging
discussing
Natalie
##zhou
Forbes
landmark
Anyway
Manor
conspiracy
gym
knocking
viewing
Formation
Pink
Beauty
limbs
Phillip
sponsor
Joy
granite
Harbour
##ero
payments
Ballet
conviction
##dam
Hood
estimates
lacked
Mad
Jorge
##wen
refuge
##LA
invaded
Kat
suburban
##fold
investigated
Ari
complained
creek
Georges
##uts
powder
accepting
deserved
carpet
Thunder
molecules
Legal
cliff
strictly
enrollment
ranch
##rg
##mba
proportion
renovation
crop
grabbing
##liga
finest
entries
receptor
helmet
blown
Listen
flagship
workshop
resolve
nails
Shannon
portal
jointly
shining
Violet
overwhelming
upward
Mick
proceedings
##dies
##aring
Laurence
Churchill
##rice
commit
170
inclusion
Examples
##verse
##rma
fury
paths
##SC
ankle
nerves
Chemistry
rectangular
sworn
screenplay
cake
Mann
Seoul
Animal
sizes
Speed
vol
Population
Southwest
Hold
continuously
Qualified
wishing
Fighting
Made
disappointment
Portsmouth
Thirty
##beck
Ahmad
teammate
MLB
graph
Charleston
realizes
##dium
exhibits
preventing
##int
fever
rivalry
Male
mentally
dull
##lor
##rich
consistently
##igan
Madame
certificate
suited
Krishna
accuracy
Webb
Budapest
Rex
1831
Cornell
OK
surveillance
##gated
habitats
Adventure
Conrad
Superior
Gay
sofa
aka
boot
Statistics
Jessie
Liberation
##lip
##rier
brands
saint
Heinrich
Christine
bath
Rhine
ballet
Jin
consensus
chess
Arctic
stack
furious
cheap
toy
##yre
##face
##gging
gastropod
##nne
Romans
membrane
answering
25th
architects
sustainable
##yne
Hon
1814
Baldwin
dome
##awa
##zen
celebrity
enclosed
##uit
##mmer
Electronic
locals
##CE
supervision
mineral
Chemical
Slovakia
alley
hub
##az
heroes
Creative
##AM
incredible
politically
ESPN
yanked
halls
Aboriginal
Greatest
yield
##20
congressional
robot
Kiss
welcomed
MS
speeds
proceed
Sherman
eased
Greene
Walsh
Geoffrey
variables
rocky
##print
acclaim
Reverend
Wonder
tonnes
recurring
Dawson
continent
finite
AP
continental
ID
facilitate
essays
Rafael
Neal
1833
ancestors
##met
##gic
Especially
teenage
frustrated
Jules
cock
expense
##oli
##old
blocking
Notable
prohibited
ca
dock
organize
##wald
Burma
Gloria
dimension
aftermath
choosing
Mickey
torpedo
pub
##used
manuscripts
laps
Ulster
staircase
sphere
Insurance
Contest
lens
risks
investigations
ERA
glare
##play
Graduate
auction
Chronicle
##tric
##50
Coming
seating
Wade
seeks
inland
Thames
Rather
butterfly
contracted
positioned
consumers
contestants
fragments
Yankees
Santos
administrator
hypothesis
retire
Denis
agreements
Winnipeg
##rill
1820
trophy
crap
shakes
Jenkins
##rium
ya
twist
labels
Maritime
##lings
##iv
111
##ensis
Cairo
Anything
##fort
opinions
crowded
##nian
abandon
##iff
drained
imported
##rr
tended
##rain
Going
introducing
sculptures
bankruptcy
danced
demonstration
stance
settings
gazed
abstract
pet
Calvin
stiff
strongest
wrestler
##dre
Republicans
grace
allocated
cursed
snail
advancing
Return
errors
Mall
presenting
eliminate
Amateur
Institution
counting
##wind
warehouse
##nde
Ethiopia
trailed
hollow
##press
Literary
capability
nursing
preceding
lamp
Thomson
Morton
##ctic
Crew
Close
composers
boom
Clare
missiles
112
hunter
snap
##oni
##tail
Us
declaration
##cock
rally
huh
lion
straightened
Philippe
Sutton
alpha
valued
maker
navigation
detected
favorable
perception
Charter
##ña
Ricky
rebounds
tunnels
slapped
Emergency
supposedly
##act
deployment
socialist
tubes
anybody
corn
##NA
Seminary
heating
pump
##AA
achieving
souls
##ass
Link
##ele
##smith
greeted
Bates
Americas
Elder
cure
contestant
240
fold
Runner
Uh
licked
Politics
committees
neighbors
fairy
Silva
Leipzig
tipped
correctly
exciting
electronics
foundations
cottage
governmental
##hat
allied
claws
presidency
cruel
Agreement
slender
accompanying
precisely
##pass
driveway
swim
Stand
crews
##mission
rely
everyday
Wings
demo
##hic
recreational
min
nationality
##duction
Easter
##hole
canvas
Kay
Leicester
talented
Discovery
shells
##ech
Kerry
Ferguson
Leave
##place
altogether
adopt
butt
wolves
##nsis
##ania
modest
soprano
Boris
##ught
electron
depicts
hid
cruise
differ
treasure
##nch
Gun
Mama
Bengali
trainer
merchants
innovation
presumably
Shirley
bottles
proceeds
Fear
invested
Pirates
particle
Dominic
blamed
Fight
Daisy
##pper
##graphic
nods
knight
Doyle
tales
Carnegie
Evil
Inter
Shore
Nixon
transform
Savannah
##gas
Baltic
stretching
worlds
protocol
Percy
Toby
Heroes
brave
dancers
##aria
backwards
responses
Chi
Gaelic
Berry
crush
embarked
promises
Madonna
researcher
realised
inaugurated
Cherry
Mikhail
Nottingham
reinforced
subspecies
rapper
##kie
Dreams
Re
Damon
Minneapolis
monsters
suspicion
Tel
surroundings
afterward
complaints
OF
sectors
Algeria
lanes
Sabha
objectives
Donna
bothered
distracted
deciding
##ives
##CA
##onia
bishops
Strange
machinery
Voiced
synthesis
reflects
interference
##TS
##ury
keen
##ign
frown
freestyle
ton
Dixon
Sacred
Ruby
Prison
##ión
1825
outfit
##tain
curiosity
##ight
frames
steadily
emigrated
horizon
##erly
Doc
philosophical
Table
UTC
Marina
##DA
secular
##eed
Zimbabwe
cops
Mack
sheriff
Sanskrit
Francesco
catches
questioning
streaming
Kill
testimony
hissed
tackle
countryside
copyright
##IP
Buddhism
##rator
ladder
##ON
Past
rookie
depths
##yama
##ister
##HS
Samantha
Dana
Educational
brows
Hammond
raids
envelope
##sco
##hart
##ulus
epic
detection
Streets
Potter
statistical
für
ni
accounting
##pot
employer
Sidney
Depression
commands
Tracks
averaged
lets
Ram
longtime
suits
branded
chip
Shield
loans
ought
Said
sip
##rome
requests
Vernon
bordered
veterans
##ament
Marsh
Herzegovina
Pine
##igo
mills
anticipation
reconnaissance
##ef
expectations
protested
arrow
guessed
depot
maternal
weakness
##ap
projected
pour
Carmen
provider
newer
remind
freed
##rily
##wal
##tones
intentions
Fiji
timing
Match
managers
Kosovo
Herman
Wesley
Chang
135
semifinals
shouting
Indo
Janeiro
Chess
Macedonia
Buck
##onies
rulers
Mail
##vas
##sel
MHz
Programme
Task
commercially
subtle
propaganda
spelled
bowling
basically
Raven
1828
Colony
109
##ingham
##wara
anticipated
1829
##iers
graduates
##rton
##fication
endangered
ISO
diagnosed
##tage
exercises
Battery
bolt
poison
cartoon
##ción
hood
bowed
heal
Meyer
Reagan
##wed
subfamily
##gent
momentum
infant
detect
##sse
Chapman
Darwin
mechanics
NSW
Cancer
Brooke
Nuclear
comprised
hire
sanctuary
wingspan
contrary
remembering
surprising
Basic
stealing
OS
hatred
##lled
masters
violation
Rule
##nger
assuming
conquered
louder
robe
Beatles
legitimate
##vation
massacre
Rica
unsuccessfully
poets
##enberg
careers
doubled
premier
battalions
Dubai
Paper
Louisville
gestured
dressing
successive
mumbled
Vic
referee
pupil
##cated
##rre
ceremonies
picks
##IN
diplomat
alike
geographical
rays
##HA
##read
harbour
factories
pastor
playwright
Ultimate
nationalist
uniforms
obtaining
kit
Amber
##pling
screenwriter
ancestry
##cott
Fields
PR
Coleman
rat
Bavaria
squeeze
highlighted
Adult
reflecting
Mel
1824
bicycle
organizing
sided
Previously
Underground
Prof
athletics
coupled
mortal
Hampton
worthy
immune
Ava
##gun
encouraging
simplified
##ssa
##nte
##ann
Providence
entities
Pablo
Strong
Housing
##ista
##ators
kidnapped
mosque
Kirk
whispers
fruits
shattered
fossil
Empress
Johns
Webster
Thing
refusing
differently
specimen
Ha
##EN
##tina
##elle
##night
Horn
neighbourhood
Bolivia
##rth
genres
Pre
##vich
Amelia
swallow
Tribune
Forever
Psychology
Use
##bers
Gazette
ash
##usa
Monster
##cular
delegation
blowing
Oblast
retreated
automobile
##ex
profits
shirts
devil
Treasury
##backs
Drums
Ronnie
gameplay
expertise
Evening
resides
Caesar
unity
Crazy
linking
Vision
donations
Isabel
valve
Sue
WWE
logical
availability
fitting
revolt
##mill
Linux
taxi
Access
pollution
statues
Augustus
##pen
cello
##some
lacking
##ati
Gwen
##aka
##ovich
1821
Wow
initiatives
Uruguay
Cain
stroked
examine
##ī
mentor
moist
disorders
buttons
##tica
##anna
Species
Lynch
museums
scorer
Poor
eligibility
op
unveiled
cats
Title
wheat
critically
Syracuse
##osis
marketed
enhance
Ryder
##NG
##ull
##rna
embedded
throws
foods
happily
##ami
lesson
formats
punched
##rno
expressions
qualities
##sal
Gods
##lity
elect
wives
##lling
jungle
Toyota
reversed
Grammar
Cloud
Agnes
##ules
disputed
verses
Lucien
threshold
##rea
scanned
##bled
##dley
##lice
Kazakhstan
Gardner
Freeman
##rz
inspection
Rita
accommodation
advances
chill
Elliot
thriller
Constantinople
##mos
debris
whoever
1810
Santo
Carey
remnants
Guatemala
##irs
carriers
equations
mandatory
##WA
anxious
measurement
Summit
Terminal
Erin
##zes
LLC
##uo
glancing
sin
##₃
Downtown
flowering
Euro
Leigh
Lance
warn
decent
recommendations
##ote
Quartet
##rrell
Clarence
colleague
guarantee
230
Clayton
Beast
addresses
prospect
destroyer
vegetables
Leadership
fatal
prints
190
##makers
Hyde
persuaded
illustrations
Southampton
Joyce
beats
editors
mount
##grave
Malaysian
Bombay
endorsed
##sian
##bee
applying
Religion
nautical
bomber
Na
airfield
gravel
##rew
Cave
bye
dig
decree
burden
Election
Hawk
Fe
##iled
reunited
##tland
liver
Teams
Put
delegates
Ella
##fect
Cal
invention
Castro
bored
##kawa
##ail
Trinidad
NASCAR
pond
develops
##pton
expenses
Zoe
Released
##rf
organs
beta
parameters
Neill
##lene
lateral
Beat
blades
Either
##hale
Mitch
##ET
##vous
Rod
burnt
phones
Rising
##front
investigating
##dent
Stephanie
##keeper
screening
##uro
Swan
Sinclair
modes
bullets
Nigerian
melody
##ques
Rifle
##12
128
##jin
charm
Venus
##tian
fusion
advocated
visitor
pinned
genera
3000
Ferry
Solo
quantity
regained
platinum
shoots
narrowly
preceded
update
##ichi
equality
unaware
regiments
ally
##tos
transmitter
locks
Seeing
outlets
feast
reopened
##ows
struggles
Buddy
1826
bark
elegant
amused
Pretty
themed
schemes
Lisbon
Te
patted
terrorism
Mystery
##croft
##imo
Madagascar
Journey
dealer
contacted
##quez
ITV
vacation
Wong
Sacramento
organisms
##pts
balcony
coloured
sheer
defines
MC
abortion
forbidden
accredited
Newfoundland
tendency
entrepreneur
Benny
Tanzania
needing
finalist
mythology
weakened
gown
sentences
Guest
websites
Tibetan
UFC
voluntary
annoyed
Welcome
honestly
correspondence
geometry
Deutsche
Biology
Help
##aya
Lines
Hector
##ael
reluctant
##ages
wears
inquiry
##dell
Holocaust
Tourism
Wei
volcanic
##mates
Visual
sorts
neighborhoods
Running
apple
shy
Laws
bend
Northeast
feminist
Speedway
Murder
visa
stuffed
fangs
transmitted
fiscal
Ain
enlarged
##ndi
Cecil
Peterson
Benson
Bedford
acceptable
##CC
##wer
purely
triangle
foster
Alberto
educator
Highland
acute
LGBT
Tina
Mi
adventures
Davidson
Honda
translator
monk
enacted
summoned
##ional
collector
Genesis
Un
liner
Di
Statistical
##CS
filter
Knox
Religious
Stella
Estonian
Turn
##ots
primitive
parishes
##lles
complexity
autobiography
rigid
cannon
pursuing
exploring
##gram
##mme
freshman
caves
Expedition
Traditional
iTunes
certification
cooling
##ort
##gna
##IT
##lman
##VA
Motion
explosive
licence
boxer
shrine
loosely
Brigadier
Savage
Brett
MVP
heavier
##elli
##gged
Buddha
Easy
spells
fails
incredibly
Georg
stern
compatible
Perfect
applies
cognitive
excessive
nightmare
neighbor
Sicily
appealed
static
##₁
Aberdeen
##leigh
slipping
bride
##guard
Um
Clyde
1818
##gible
Hal
Frost
Sanders
interactive
Hour
##vor
hurting
bull
termed
shelf
capturing
##pace
rolls
113
##bor
Chilean
teaches
##rey
exam
shipped
Twin
borrowed
##lift
Shit
##hot
Lindsay
Below
Kiev
Lin
leased
##sto
Eli
Diane
Val
subtropical
shoe
Bolton
Dragons
##rification
Vatican
##pathy
Crisis
dramatically
talents
babies
##ores
surname
##AP
##cology
cubic
opted
Archer
sweep
tends
Karnataka
Judy
stint
Similar
##nut
explicitly
##nga
interact
Mae
portfolio
clinic
abbreviated
Counties
##iko
hearts
##ı
providers
screams
Individual
##etti
Monument
##iana
accessed
encounters
gasp
##rge
defunct
Avery
##rne
nobility
useless
Phase
Vince
senator
##FL
1813
surprisingly
##illo
##chin
Boyd
rumors
equity
Gone
Hearts
chassis
overnight
Trek
wrists
submit
civic
designers
##rity
prominence
decorative
derives
starter
##AF
wisdom
Powers
reluctantly
measurements
doctoral
Noel
Gideon
Baden
Cologne
lawn
Hawaiian
anthology
##rov
Raiders
embassy
Sterling
##pal
Telugu
troubled
##FC
##bian
fountain
observe
ore
##uru
##gence
spelling
Border
grinning
sketch
Benedict
Xbox
dialects
readily
immigrant
Constitutional
aided
nevertheless
SE
tragedy
##ager
##rden
Flash
##MP
Europa
emissions
##ield
panties
Beverly
Homer
curtain
##oto
toilet
Isn
Jerome
Chiefs
Hermann
supernatural
juice
integrity
Scots
auto
Patriots
Strategic
engaging
prosecution
cleaned
Byron
investments
adequate
vacuum
laughs
##inus
##nge
Usually
Roth
Cities
Brand
corpse
##ffy
Gas
rifles
Plains
sponsorship
Levi
tray
owed
della
commanders
##ead
tactical
##rion
García
harbor
discharge
##hausen
gentleman
endless
highways
##itarian
pleaded
##eta
archive
Midnight
exceptions
instances
Gibraltar
cart
##NS
Darren
Bonnie
##yle
##iva
OCLC
bra
Jess
##EA
consulting
Archives
Chance
distances
commissioner
##AR
LL
sailors
##sters
enthusiasm
Lang
##zia
Yugoslav
confirm
possibilities
Suffolk
##eman
banner
1822
Supporting
fingertips
civilization
##gos
technically
1827
Hastings
sidewalk
strained
monuments
Floyd
Chennai
Elvis
villagers
Cumberland
strode
albeit
Believe
planets
combining
Mohammad
container
##mouth
##tures
verb
BA
Tank
Midland
screened
Gang
Democracy
Helsinki
screens
thread
charitable
##version
swiftly
ma
rational
combine
##SS
##antly
dragging
Cliff
Tasmania
quest
professionally
##aj
rap
##lion
livestock
##hua
informal
specially
lonely
Matthews
Dictionary
1816
Observatory
correspondent
constitute
homeless
waving
appreciated
Analysis
Meeting
dagger
##AL
Gandhi
flank
Giant
Choir
##not
glimpse
toe
Writer
teasing
springs
##dt
Glory
healthcare
regulated
complaint
math
Publications
makers
##hips
cement
Need
apologize
disputes
finishes
Partners
boring
ups
gains
1793
Congressional
clergy
Folk
##made
##nza
Waters
stays
encoded
spider
betrayed
Applied
inception
##urt
##zzo
wards
bells
UCLA
Worth
bombers
Mo
trademark
Piper
##vel
incorporates
1801
##cial
dim
Twelve
##word
Appeals
tighter
spacecraft
##tine
coordinates
##iac
mistakes
Zach
laptop
Teresa
##llar
##yr
favored
Nora
sophisticated
Irving
hammer
División
corporations
niece
##rley
Patterson
UNESCO
trafficking
Ming
balanced
plaque
Latvia
broader
##owed
Save
confined
##vable
Dalton
tide
##right
##ural
##num
swords
caring
##eg
IX
Acting
paved
##moto
launching
Antoine
substantially
Pride
Philharmonic
grammar
Indoor
Ensemble
enabling
114
resided
Angelo
publicity
chaired
crawled
Maharashtra
Telegraph
lengthy
preference
differential
anonymous
Honey
##itation
wage
##iki
consecrated
Bryant
regulatory
Carr
##én
functioning
watches
##ú
shifts
diagnosis
Search
app
Peters
##SE
##cat
Andreas
honours
temper
counsel
Urdu
Anniversary
maritime
##uka
harmony
##unk
essence
Lorenzo
choked
Quarter
indie
##oll
loses
##prints
amendment
Adolf
scenario
similarities
##rade
##LC
technological
metric
Russians
thoroughly
##tead
cruiser
1806
##nier
1823
Teddy
##psy
au
progressed
exceptional
broadcaster
partnered
fitness
irregular
placement
mothers
unofficial
Garion
Johannes
1817
regain
Solar
publishes
Gates
Broken
thirds
conversations
dive
Raj
contributor
quantities
Worcester
governance
##flow
generating
pretending
Belarus
##voy
radius
skating
Marathon
1819
affection
undertook
##wright
los
##bro
locate
PS
excluded
recreation
tortured
jewelry
moaned
##logue
##cut
Complete
##rop
117
##II
plantation
whipped
slower
crater
##drome
Volunteer
attributes
celebrations
regards
Publishers
oath
utilized
Robbie
Giuseppe
fiber
indication
melted
archives
Damien
storey
affecting
identifying
dances
alumni
comparable
upgrade
rented
sprint
##kle
Marty
##lous
treating
railways
Lebanese
erupted
occupy
sympathy
Jude
Darling
Qatar
drainage
McCarthy
heel
Klein
computing
wireless
flip
Du
Bella
##ast
##ssen
narrator
mist
sings
alignment
121
2020
securing
##rail
Progress
missionaries
brutal
mercy
##shing
Hip
##ache
##olo
switching
##here
Malay
##ob
constituted
Mohammed
Often
standings
surge
teachings
ink
detached
systematic
Trial
Myanmar
##wo
offs
Reyes
decoration
translations
wherever
reviewer
speculation
Bangkok
terminated
##ester
beard
RCA
Aidan
Associated
Emerson
Charity
1803
generous
Dudley
ATP
##haven
prizes
toxic
gloves
##iles
##dos
Turning
myth
Parade
##building
Hits
##eva
teamed
Above
Duchess
Holt
##oth
Sub
Ace
atomic
inform
Ship
depend
Jun
##bes
Norwich
globe
Baroque
Christina
Cotton
Tunnel
kidding
Concerto
Brittany
tasted
phases
stems
angles
##TE
##nam
##40
charted
Alison
intensive
Willis
glory
##lit
Bergen
est
taller
##dicate
labeled
##ido
commentator
Warrior
Viscount
shortened
aisle
Aria
Spike
spectators
goodbye
overlooking
mammals
##lude
wholly
Barrett
##gus
accompany
seventy
employ
##mb
ambitious
beloved
basket
##mma
##lding
halted
descendant
pad
exclaimed
cloak
##pet
Strait
Bang
Aviv
sadness
##ffer
Donovan
1880s
agenda
swinging
##quin
jerk
Boat
##rist
nervously
Silence
Echo
shout
implies
##iser
##cking
Shiva
Weston
damages
##tist
effectiveness
Horace
cycling
Rey
ache
Photography
PDF
Dear
leans
Lea
##vision
booth
attained
disbelief
##eus
##ution
Hop
pension
toys
Eurovision
faithful
##heads
Andre
owe
default
Atlas
Megan
highlights
lovers
Constantine
Sixth
masses
##garh
emerge
Auto
Slovak
##oa
##vert
Superintendent
flicked
inventor
Chambers
Frankie
Romeo
pottery
companions
Rudolf
##liers
diary
Unless
tap
alter
Randall
##ddle
##eal
limitations
##boards
utterly
knelt
guaranteed
Cowboys
Islander
horns
##ike
Wendy
sexually
Smart
breasts
##cian
compromise
Duchy
AT
Galaxy
analog
Style
##aking
weighed
Nigel
optional
Czechoslovakia
practicing
Ham
##0s
feedback
batted
uprising
operative
applicable
criminals
classrooms
Somehow
##ode
##OM
Naomi
Winchester
##pping
Bart
Regina
competitor
Recorded
Yuan
Vera
lust
Confederation
##test
suck
1809
Lambert
175
Friend
##ppa
Slowly
##⁺
Wake
Dec
##aneous
chambers
Color
Gus
##site
Alternative
##world
Exeter
Omaha
celebrities
striker
210
dwarf
meals
Oriental
Pearson
financing
revenues
underwater
Steele
screw
Feeling
Mt
acids
badge
swore
theaters
Moving
admired
lung
knot
penalties
116
fork
##cribed
Afghan
outskirts
Cambodia
oval
wool
fossils
Ned
Countess
Darkness
delicious
##nica
Evelyn
Recordings
guidelines
##CP
Sandra
meantime
Antarctica
modeling
granddaughter
##rial
Roma
Seventh
Sunshine
Gabe
##nton
Shop
Turks
prolific
soup
parody
##nta
Judith
disciplines
resign
Companies
Libya
Jets
inserted
Mile
retrieve
filmmaker
##rand
realistic
unhappy
##30
sandstone
##nas
##lent
##ush
##rous
Brent
trash
Rescue
##unted
Autumn
disgust
flexible
infinite
sideways
##oss
##vik
trailing
disturbed
50th
Newark
posthumously
##rol
Schmidt
Josef
##eous
determining
menu
Pole
Anita
Luc
peaks
118
Yard
warrant
generic
deserted
Walking
stamp
tracked
##berger
paired
surveyed
sued
Rainbow
##isk
Carpenter
submarines
realization
touches
sweeping
Fritz
module
Whether
resembles
##form
##lop
unsure
hunters
Zagreb
unemployment
Senators
Georgetown
##onic
Barker
foul
commercials
Dresden
Words
collision
Carlton
Fashion
doubted
##ril
precision
MIT
Jacobs
mob
Monk
retaining
gotta
##rod
remake
Fast
chips
##pled
sufficiently
##lights
delivering
##enburg
Dancing
Barton
Officers
metals
##lake
religions
##ré
motivated
differs
dorsal
##birds
##rts
Priest
polished
##aling
Saxony
Wyatt
knockout
##hor
Lopez
RNA
##link
metallic
##kas
daylight
Montenegro
##lining
wrapping
resemble
Jam
Viking
uncertainty
angels
enables
##fy
Stuttgart
tricks
tattoo
127
wicked
asset
breach
##yman
MW
breaths
Jung
im
1798
noon
vowel
##qua
calmly
seasonal
chat
ingredients
cooled
Randolph
ensuring
##ib
##idal
flashing
1808
Macedonian
Cool
councils
##lick
advantages
Immediately
Madras
##cked
Pain
fancy
chronic
Malayalam
begged
##nese
Inner
feathers
##vey
Names
dedication
Sing
pan
Fischer
nurses
Sharp
inning
stamps
Meg
##ello
edged
motioned
Jacksonville
##ffle
##dic
##US
divide
garnered
Ranking
chasing
modifications
##oc
clever
midst
flushed
##DP
void
##sby
ambulance
beaches
groan
isolation
strengthen
prevention
##ffs
Scouts
reformed
geographic
squadrons
Fiona
Kai
Consequently
##uss
overtime
##yas
Fr
##BL
Papua
Mixed
glances
Haiti
Sporting
sandy
confronted
René
Tanner
1811
##IM
advisory
trim
##ibe
González
gambling
Jupiter
##ility
##owski
##nar
122
apology
teased
Pool
feminine
wicket
eagle
shiny
##lator
blend
peaking
nasty
nodding
fraction
tech
Noble
Kuwait
brushing
Italia
Canberra
duet
Johan
1805
Written
cameo
Stalin
pig
cord
##zio
Surely
SA
owing
holidays
123
Ranger
lighthouse
##ige
miners
1804
##ë
##gren
##ried
crashing
##atory
wartime
highlight
inclined
Torres
Tax
##zel
##oud
Own
##corn
Divine
EMI
Relief
Northwestern
ethics
BMW
click
plasma
Christie
coordinator
Shepherd
washing
cooked
##dio
##eat
Cerambycidae
algebra
Engine
costumes
Vampire
vault
submission
virtue
assumption
##rell
Toledo
##oting
##rva
crept
emphasized
##lton
##ood
Greeks
surgical
crest
Patrol
Beta
Tessa
##GS
pizza
traits
rats
Iris
spray
##GC
Lightning
binary
escapes
##take
Clary
crowds
##zong
hauled
maid
##fen
Manning
##yang
Nielsen
aesthetic
sympathetic
affiliation
soaked
Mozart
personalities
begging
##iga
clip
Raphael
yearly
Lima
abundant
##lm
1794
strips
Initiative
reporters
##vsky
consolidated
##itated
Civic
rankings
mandate
symbolic
##ively
1807
rental
duck
nave
complications
##nor
Irene
Nazis
haunted
scholarly
Pratt
Gran
Embassy
Wave
pity
genius
bats
canton
Tropical
marker
##cos
escorted
Climate
##posed
appreciation
freezing
puzzle
Internal
pools
Shawn
pathway
Daniels
Fitzgerald
extant
olive
Vanessa
marriages
cocked
##dging
prone
chemicals
doll
drawer
##HF
Stark
Property
##tai
flowed
Sheridan
##uated
Less
Omar
remarks
catalogue
Seymour
wreck
Carrie
##bby
Mercer
displaced
sovereignty
rip
Flynn
Archie
Quarterfinals
Hassan
##ards
vein
Osaka
pouring
wages
Romance
##cript
##phere
550
##eil
##stown
Documentary
ancestor
CNN
Panthers
publishers
Rise
##mu
biting
Bright
String
succeeding
119
loaned
Warwick
Sheikh
Von
Afterwards
Jax
Camden
helicopters
Hence
Laurel
##ddy
transaction
Corp
clause
##owing
##kel
Investment
cups
Lucia
Moss
Giles
chef
López
decisive
30th
distress
linguistic
surveys
Ready
maiden
Touch
frontier
incorporate
exotic
mollusk
Leopold
Ride
##wain
##ndo
teammates
tones
drift
ordering
Feb
Penny
Normandy
Present
Flag
pipes
##rro
delight
motto
Tibet
leap
Eliza
Produced
teenagers
sitcom
Try
Hansen
Cody
wandered
terrestrial
frog
scare
resisted
employers
coined
##DS
resistant
Fly
captive
dissolution
judged
associates
defining
##court
Hale
##mbo
raises
clusters
twelfth
##metric
Roads
##itude
satisfy
Android
Reds
Gloucester
Category
Valencia
Daemon
stabbed
Luna
Churches
Canton
##eller
Attack
Kashmir
annexed
grabs
asteroid
Hartford
recommendation
Rodriguez
handing
stressed
frequencies
delegate
Bones
Erie
Weber
Hands
Acts
millimetres
24th
Fat
Howe
casually
##SL
convent
1790
IF
##sity
1795
yelling
##ises
drain
addressing
amino
Marcel
Sylvia
Paramount
Gerard
Volleyball
butter
124
Albion
##GB
triggered
1792
folding
accepts
##ße
preparations
Wimbledon
dose
##grass
escaping
##tling
import
charging
##dation
280
Nolan
##fried
Calcutta
##pool
Cove
examining
minded
heartbeat
twisting
domains
bush
Tunisia
Purple
Leone
##code
evacuated
battlefield
tiger
Electrical
##ared
chased
##cre
cultivated
Jet
solved
shrug
ringing
Impact
##iant
kilometre
##log
commemorate
migrated
singular
designing
promptly
Higgins
##own
##aves
freshwater
Marketing
Payne
beg
locker
pray
implied
AAA
corrected
Trans
Europeans
Ashe
acknowledge
Introduction
##writer
##llen
Munster
auxiliary
growl
Hours
Poems
##AT
reduces
Plain
plague
canceled
detention
polite
necklace
Gustav
##gu
##lance
En
Angola
##bb
dwelling
##hea
5000
Qing
Dodgers
rim
##ored
##haus
spilled
Elisabeth
Viktor
backpack
1802
amended
##worthy
Phantom
##ctive
keeper
##loom
Vikings
##gua
employs
Tehran
specialty
##bate
Marx
Mirror
Jenna
rides
needle
prayers
clarinet
forewings
##walk
Midlands
convincing
advocacy
Cao
Birds
cycles
Clement
Gil
bubble
Maximum
humanitarian
Tan
cries
##SI
Parsons
Trio
offshore
Innovation
clutched
260
##mund
##duct
Prairie
relied
Falcon
##ste
Kolkata
Gill
Swift
Negro
Zoo
valleys
##OL
Opening
beams
MPs
outline
Bermuda
Personal
exceed
productive
##MT
republic
forum
##sty
tornado
Known
dipped
Edith
folks
mathematician
watershed
Ricardo
synthetic
##dication
deity
##₄
gaming
subjected
suspects
Foot
swollen
Motors
##tty
##ý
aloud
ceremonial
es
nuts
intend
Carlisle
tasked
hesitation
sponsors
unified
inmates
##ctions
##stan
tiles
jokes
whereby
outcomes
Lights
scary
Stoke
Portrait
Blind
sergeant
violations
cultivation
fuselage
Mister
Alfonso
candy
sticks
teen
agony
Enough
invite
Perkins
Appeal
mapping
undergo
Glacier
Melanie
affects
incomplete
##dd
Colombian
##nate
CBC
purchasing
bypass
Drug
Electronics
Frontier
Coventry
##aan
autonomy
scrambled
Recent
bounced
cow
experiencing
Rouge
cuisine
Elite
disability
Ji
inheritance
wildly
Into
##wig
confrontation
Wheeler
shiver
Performing
aligned
consequently
Alexis
Sin
woodland
executives
Stevenson
Ferrari
inevitable
##cist
##dha
##base
Corner
comeback
León
##eck
##urus
MacDonald
pioneering
breakdown
landscapes
Veterans
Rican
Theological
stirred
participant
Credit
Hyderabad
snails
Claudia
##ocene
compliance
##MI
Flags
Middlesex
storms
winding
asserted
er
##ault
##kal
waking
##rates
abbey
Augusta
tooth
trustees
Commodore
##uded
Cunningham
NC
Witch
marching
Sword
Same
spiral
Harley
##ahan
Zack
Audio
1890s
##fit
Simmons
Kara
Veronica
negotiated
Speaking
FIBA
Conservatory
formations
constituencies
explicit
facial
eleventh
##ilt
villain
##dog
##case
##hol
armored
tin
hairs
##umi
##rai
mattress
Angus
cease
verbal
Recreation
savings
Aurora
peers
Monastery
Airways
drowned
additions
downstream
sticking
Shi
mice
skiing
##CD
Raw
Riverside
warming
hooked
boost
memorable
posed
treatments
320
##dai
celebrating
blink
helpless
circa
Flowers
PM
uncommon
Oct
Hawks
overwhelmed
Sparhawk
repaired
Mercy
pose
counterpart
compare
survives
##½
##eum
coordinate
Lil
grandchildren
notorious
Yi
Judaism
Juliet
accusations
1789
floated
marathon
roar
fortified
reunion
145
Nov
Paula
##fare
##toria
tearing
Cedar
disappearance
Si
gifted
scar
270
PBS
Technologies
Marvin
650
roller
cupped
negotiate
##erman
passport
tram
miracle
styled
##tier
necessity
Des
rehabilitation
Lara
USD
psychic
wipe
##lem
mistaken
##lov
charming
Rider
pageant
dynamics
Cassidy
##icus
defenses
##tadt
##vant
aging
##inal
declare
mistress
supervised
##alis
##rest
Ashton
submerged
sack
Dodge
grocery
ramp
Teacher
lineage
imagery
arrange
inscriptions
Organisation
Siege
combines
pounded
Fleming
legends
columnist
Apostolic
prose
insight
Arabian
expired
##uses
##nos
Alone
elbows
##asis
##adi
##combe
Step
Waterloo
Alternate
interval
Sonny
plains
Goals
incorporating
recruit
adjoining
Cheshire
excluding
marrying
ducked
Cherokee
par
##inate
hiking
Coal
##bow
natives
ribbon
Allies
con
descriptions
positively
##lal
defendant
22nd
Vivian
##beat
Weather
possessions
Date
sweetheart
inability
Salisbury
adviser
ideology
Nordic
##eu
Cubs
IP
Administrative
##nick
facto
liberation
Burnett
Javier
fashioned
Electoral
Turin
theft
unanimous
Per
1799
Clan
Hawkins
Teachers
##wes
Cameroon
Parkway
##gment
demolition
atoms
nucleus
##thi
recovering
##yte
##vice
lifts
Must
deposit
Hancock
Semi
darkened
Declaration
moan
muscular
Myers
attractions
sauce
simulation
##weed
Alps
barriers
##baum
Barack
galleries
Min
holders
Greenwich
donation
Everybody
Wolfgang
sandwich
Kendra
Collegiate
casino
Slavic
ensuing
Porto
##grapher
Jesuit
suppressed
tires
Ibrahim
protesters
Ibn
Amos
1796
phenomena
Hayden
Paraguay
Squad
Reilly
complement
aluminum
##eers
doubts
decay
demise
Practice
patience
fireplace
transparent
monarchy
##person
Rodney
mattered
rotating
Clifford
disposal
Standards
paced
##llie
arise
tallest
tug
documentation
node
freeway
Nikolai
##cite
clicked
imaging
Lorraine
Tactical
Different
Regular
Holding
165
Pilot
guarded
##polis
Classics
Mongolia
Brock
monarch
cellular
receptors
Mini
Chandler
financed
financially
Lives
erection
Fuller
unnamed
Kannada
cc
passive
plateau
##arity
freak
##rde
retrieved
transactions
##sus
23rd
swimmer
beef
fulfill
Arlington
offspring
reasoning
Rhys
saves
pseudonym
centimetres
shivered
shuddered
##ME
Feel
##otic
professors
Blackburn
##eng
##life
##haw
interred
lodge
fragile
Della
guardian
##bbled
catalog
clad
observer
tract
declaring
##headed
Lok
dean
Isabelle
1776
irrigation
spectacular
shuttle
mastering
##aro
Nathaniel
Retired
##lves
Brennan
##kha
dick
##dated
##hler
Rookie
leapt
televised
weekends
Baghdad
Yemen
##fo
factions
ion
Lab
mortality
passionate
Hammer
encompasses
confluence
demonstrations
Ki
derivative
soils
##unch
Ranch
Universities
conventions
outright
aiming
hierarchy
reside
illusion
graves
rituals
126
Antwerp
Dover
##ema
campuses
Hobart
lifelong
aliens
##vity
Memory
coordination
alphabet
##mina
Titans
pushes
Flanders
##holder
Normal
excellence
capped
profound
Taipei
portrayal
sparked
scratch
se
##eas
##hir
Mackenzie
##cation
Neo
Shin
##lined
magnificent
poster
batsman
##rgent
persuade
##ement
Icelandic
miserable
collegiate
Feature
geography
##mura
Comic
Circus
processor
barracks
Tale
##11
Bulls
##rap
strengthened
##bell
injection
miniature
broadly
Letter
fare
hostage
traders
##nium
##mere
Fortune
Rivera
Lu
triumph
Browns
Bangalore
cooperative
Basel
announcing
Sawyer
##him
##cco
##kara
darted
##AD
##nova
sucking
##position
perimeter
flung
Holdings
##NP
Basque
sketches
Augustine
Silk
Elijah
analyst
armour
riots
acquiring
ghosts
##ems
132
Pioneer
Colleges
Simone
Economy
Author
semester
Soldier
il
##unting
##bid
freaking
Vista
tumor
##bat
murderer
##eda
unreleased
##grove
##sser
##té
edit
statute
sovereign
##gawa
Killer
stares
Fury
comply
##lord
##nant
barrels
Andhra
Maple
generator
mascot
unusually
eds
##ante
##runner
rod
##tles
Historically
Jennings
dumped
Established
resemblance
##lium
##cise
##body
##voke
Lydia
##hou
##iring
nonetheless
1797
corrupt
patrons
physicist
sneak
Livingston
Citizens
Architects
Werner
trends
Melody
eighty
markings
brakes
##titled
oversaw
processed
mock
Midwest
intervals
##EF
stretches
werewolf
##MG
Pack
controller
##dition
Honours
cane
Griffith
vague
repertoire
Courtney
orgasm
Abdullah
dominance
occupies
Ya
introduces
Lester
instinct
collaborative
Indigenous
refusal
##rank
outlet
debts
spear
155
##keeping
##ulu
Catalan
##osh
tensions
##OT
bred
crude
Dunn
abdomen
accurately
##fu
##lough
accidents
Row
Audrey
rude
Getting
promotes
replies
Paolo
merge
##nock
trans
Evangelical
automated
Canon
##wear
##ggy
##gma
Broncos
foolish
icy
Voices
knives
Aside
dreamed
generals
molecule
AG
rejection
insufficient
##nagar
deposited
sacked
Landing
arches
helpful
devotion
intake
Flower
PGA
dragons
evolutionary
##mail
330
GM
tissues
##tree
arcade
composite
lid
Across
implications
lacks
theological
assessed
concentrations
Den
##mans
##ulous
Fu
homeland
##stream
Harriet
ecclesiastical
troop
ecological
winked
##xed
eighteenth
Casino
specializing
##sworth
unlocked
supreme
devastated
snatched
trauma
GDP
Nord
saddle
Wes
convenient
competes
##nu
##iss
Marian
subway
##rri
successes
umbrella
##far
##ually
Dundee
##cence
spark
##rix
##я
Quality
Geological
cockpit
rpm
Cam
Bucharest
riot
##PM
Leah
##dad
##pose
Ka
m³
Bundesliga
Wolfe
grim
textile
quartet
expressing
fantastic
destroyers
eternal
picnic
##oro
contractor
1775
spanning
declining
##cating
Lowe
Sutherland
Emirates
downward
nineteen
violently
scout
viral
melting
enterprises
##cer
Crosby
Jubilee
antenna
urgent
Rory
##uin
##sure
wandering
##gler
##vent
Suzuki
Lifetime
Dirty
occupying
##quent
Disc
Guru
mound
Lennon
Humanities
listeners
Walton
uh
Braves
Bologna
##bis
##gra
Dwight
crawl
flags
memoir
Thorne
Archdiocese
dairy
##uz
##tery
roared
adjust
patches
inn
Knowing
##bbed
##zan
scan
Papa
precipitation
angrily
passages
postal
Phi
embraced
blacks
economist
triangular
Sen
shooter
punished
Millennium
Swimming
confessed
Aston
defeats
Era
cousins
Williamson
##rer
daytime
dumb
##rek
underway
specification
Buchanan
prayed
concealed
activation
##issa
canon
awesome
Starr
plural
summers
##fields
Slam
unnecessary
1791
resume
trilogy
compression
##rough
selective
dignity
Yan
##xton
immense
##yun
lone
seeded
hiatus
lightweight
summary
Yo
approve
Galway
rejoined
Elise
garbage
burns
speeches
129
Honduras
##liness
inventory
jersey
FK
assure
slumped
Lionel
Suite
##sbury
Lena
continuation
##AN
brightly
##nti
GT
Knowledge
##park
##lius
lethal
##tribution
##sions
Certificate
Mara
##lby
algorithms
Jade
blows
pirates
fleeing
wheelchair
Stein
sophomore
Alt
Territorial
diploma
snakes
##olic
##tham
Tiffany
Pius
flush
urging
Hanover
Reich
##olate
Unity
Pike
collectively
Theme
ballad
kindergarten
rocked
zoo
##page
whip
Rodríguez
strokes
checks
Becky
Stern
upstream
##uta
Silent
volunteered
Sigma
##ingen
##tract
##ede
Gujarat
screwed
entertaining
##action
##ryn
defenders
innocence
lesbian
que
Richie
nodes
Lie
juvenile
Jakarta
safer
confront
Bert
breakthrough
gospel
Cable
##zie
institutional
Archive
brake
liquor
feeds
##iate
chancellor
Encyclopedia
Animation
scanning
teens
##mother
Core
Rear
Wine
##flower
reactor
Ave
cardinal
sodium
strands
Olivier
crouched
Vaughan
Sammy
Image
scars
Emmanuel
flour
bias
nipple
revelation
##ucci
Denny
##ssy
Form
Runners
admits
Rama
violated
Burmese
feud
underwear
Mohamed
Named
swift
statewide
Door
Recently
comparing
Hundred
##idge
##nity
##rds
Rally
Reginald
Auburn
solving
waitress
Treasurer
##ilization
Halloween
Ministers
Boss
Shut
##listic
Rahman
demonstrating
##pies
Gaza
Yuri
installations
Math
schooling
##bble
Bronx
exiled
gasoline
133
bundle
humid
FCC
proportional
relate
VFL
##dez
continuity
##cene
syndicated
atmospheric
arrows
Wanderers
reinforcements
Willow
Lexington
Rotten
##yon
discovering
Serena
portable
##lysis
targeting
£1
Goodman
Steam
sensors
detachment
Malik
##erie
attitudes
Goes
Kendall
Read
Sleep
beans
Nikki
modification
Jeanne
knuckles
Eleven
##iously
Gross
Jaime
dioxide
moisture
Stones
UCI
displacement
Metacritic
Jury
lace
rendering
elephant
Sergei
##quire
GP
Abbott
##type
projection
Mouse
Bishops
whispering
Kathleen
Rams
##jar
whites
##oran
assess
dispatched
##hire
kin
##mir
Nursing
advocates
tremendous
sweater
assisting
##bil
Farmer
prominently
reddish
Hague
cyclone
##SD
Sage
Lawson
Sanctuary
discharged
retains
##ube
shotgun
wilderness
Reformed
similarity
Entry
Watts
Bahá
Quest
Looks
visions
Reservoir
Arabs
curls
Blu
dripping
accomplish
Verlag
drill
sensor
Dillon
physicians
smashed
##dir
painters
Renault
straw
fading
Directorate
lounge
commissions
Brain
##graph
neo
##urg
plug
coordinated
##houses
Critical
lamps
illustrator
Returning
erosion
Crow
##ciation
blessing
Thought
Wife
medalist
synthesizer
Pam
Thornton
Esther
HBO
fond
Associates
##raz
pirate
permits
Wide
tire
##PC
Ernie
Nassau
transferring
RFC
##ntly
um
spit
AS
##mps
Mining
polar
villa
anchored
##zzi
embarrassment
relates
##ă
Rupert
counterparts
131
Baxter
##18
Igor
recognizes
Clive
##hane
##eries
##ibly
occurrence
##scope
fin
colorful
Rapids
banker
tile
##rative
##dus
delays
destinations
##llis
Pond
Dane
grandparents
rewarded
socially
motorway
##hof
##lying
##human
modeled
Dayton
Forward
conscience
Sharma
whistle
Mayer
Sasha
##pical
circuits
Zhou
##ça
Latvian
finalists
predators
Lafayette
closes
obligations
Resolution
##vier
Trustees
reminiscent
##hos
Highlands
Protected
asylum
evacuation
##acy
Chevrolet
confession
Somalia
emergence
separating
##rica
alright
calcium
Laurent
Welfare
Leonardo
ashes
dental
Deal
minerals
##lump
##mount
accounted
staggered
slogan
photographic
builder
##imes
##raft
tragic
144
SEC
Hit
tailed
##ples
##rring
##rson
ethical
wrestlers
concludes
lunar
##ept
nitrogen
Aid
cyclist
quarterfinals
##ه
harvest
##hem
Pasha
IL
##mis
continually
##forth
Intel
bucket
##ended
witches
pretended
dresses
viewer
peculiar
lowering
volcano
Marilyn
Qualifier
clung
##sher
Cut
modules
Bowie
##lded
onset
transcription
residences
##pie
##itor
scrapped
##bic
Monaco
Mayo
eternity
Strike
uncovered
skeleton
##wicz
Isles
bug
Promoted
##rush
Mechanical
XII
##ivo
gripping
stubborn
velvet
TD
decommissioned
operas
spatial
unstable
Congressman
wasted
##aga
##ume
advertisements
##nya
obliged
Cannes
Conway
bricks
##gnant
##mity
##uise
jumps
Clear
##cine
##sche
chord
utter
Su
podium
spokesman
Royce
assassin
confirmation
licensing
liberty
##rata
Geographic
individually
detained
##ffe
Saturn
crushing
airplane
bushes
knights
##PD
Lilly
hurts
unexpectedly
Conservatives
pumping
Forty
candle
Pérez
peasants
supplement
Sundays
##ggs
##rries
risen
enthusiastic
corresponds
pending
##IF
Owens
floods
Painter
inflation
presumed
inscribed
Chamberlain
bizarre
1200
liability
reacted
tub
Legacy
##eds
##pted
shone
##litz
##NC
Tiny
genome
bays
Eduardo
robbery
stall
hatch
Depot
Variety
Flora
reprinted
trembled
outlined
CR
Theresa
spans
##plication
Jensen
##eering
posting
##rky
pays
##ost
Marcos
fortifications
inferior
##ential
Devi
despair
Talbot
##chus
updates
ego
Booth
Darius
tops
##lau
Scene
##DC
Harlem
Trey
Generally
candles
##α
Neville
Admiralty
##hong
iconic
victorious
1600
Rowan
abundance
miniseries
clutching
sanctioned
##words
obscure
##ision
##rle
##EM
disappearing
Resort
Obviously
##eb
exceeded
1870s
Adults
##cts
Cry
Kerr
ragged
selfish
##lson
circled
pillars
galaxy
##asco
##mental
rebuild
caution
Resistance
Start
bind
splitting
Baba
Hogan
ps
partnerships
slam
Peggy
courthouse
##OD
organizational
packages
Angie
##nds
possesses
##rp
Expressway
Gould
Terror
Him
Geoff
nobles
##ope
shark
##nh
identifies
##oor
testified
Playing
##ump
##isa
stool
Idol
##pice
##tana
Byrne
Gerry
grunted
26th
observing
habits
privilege
immortal
wagons
##thy
dot
Bring
##lian
##witz
newest
##uga
constraints
Screen
Issue
##RNA
##vil
reminder
##gles
addiction
piercing
stunning
var
##rita
Signal
accumulated
##wide
float
devastating
viable
cartoons
Uttar
flared
##encies
Theology
patents
##bahn
privileges
##ava
##CO
137
##oped
##NT
orchestral
medication
225
erect
Nadia
École
fried
Sales
scripts
##rease
airs
Cage
inadequate
structured
countless
Avengers
Kathy
disguise
mirrors
Investigation
reservation
##nson
Legends
humorous
Mona
decorations
attachment
Via
motivation
Browne
strangers
##ński
Shadows
Twins
##pressed
Alma
Nominated
##ott
Sergio
canopy
152
Semifinals
devised
##irk
upwards
Traffic
Goddess
Move
beetles
138
spat
##anne
holdings
##SP
tangled
Whilst
Fowler
anthem
##ING
##ogy
snarled
moonlight
songwriting
tolerance
Worlds
exams
##pia
notices
sensitivity
poetic
Stephens
Boone
insect
reconstructed
Fresh
27th
balloon
##ables
Brendan
mug
##gee
1780
apex
exports
slides
Lahore
hiring
Shell
electorate
sexuality
poker
nonprofit
##imate
cone
##uce
Okinawa
superintendent
##HC
referenced
turret
Sprint
Citizen
equilibrium
Stafford
curb
Driver
Valerie
##rona
aching
impacts
##bol
observers
Downs
Shri
##uth
airports
##uda
assignments
curtains
solitary
icon
patrols
substances
Jasper
mountainous
Published
ached
##ingly
announce
dove
damaging
##tism
Primera
Dexter
limiting
batch
##uli
undergoing
refugee
Ye
admiral
pavement
##WR
##reed
pipeline
desires
Ramsey
Sheila
thickness
Brotherhood
Tea
instituted
Belt
Break
plots
##ais
masculine
##where
Theo
##aged
##mined
Experience
scratched
Ethiopian
Teaching
##nov
Aiden
Abe
Samoa
conditioning
##mous
Otherwise
fade
Jenks
##encing
Nat
##lain
Anyone
##kis
smirk
Riding
##nny
Bavarian
blessed
potatoes
Hook
##wise
likewise
hardened
Merry
amid
persecution
##sten
Elections
Hoffman
Pitt
##vering
distraction
exploitation
infamous
quote
averaging
healed
Rhythm
Germanic
Mormon
illuminated
guides
##ische
interfere
##ilized
rector
perennial
##ival
Everett
courtesy
##nham
Kirby
Mk
##vic
Medieval
##tale
Luigi
limp
##diction
Alive
greeting
shove
##force
##fly
Jasmine
Bend
Capt
Suzanne
ditch
134
##nning
Host
fathers
rebuilding
Vocal
wires
##manship
tan
Factor
fixture
##LS
Māori
Plate
pyramid
##umble
slap
Schneider
yell
##ulture
##tional
Goodbye
sore
##pher
depressed
##dox
pitching
Find
Lotus
##wang
strand
Teen
debates
prevalent
##bilities
exposing
hears
billed
##rse
reorganized
compelled
disturbing
displaying
##tock
Clinical
emotionally
##iah
Derbyshire
grouped
##quel
Bahrain
Journalism
IN
persistent
blankets
Crane
camping
Direct
proving
Lola
##dding
Corporate
birthplace
##boats
##ender
Figure
dared
Assam
precursor
##nched
Tribe
Restoration
slate
Meyrick
hunted
stroking
Earlier
Kind
polls
appeals
monetary
##reate
Kira
Langdon
explores
GPS
extensions
squares
Results
draped
announcer
merit
##ennial
##tral
##roved
##cion
robots
supervisor
snorted
##group
Cannon
procession
monkey
freeze
sleeves
Nile
verdict
ropes
firearms
extraction
tensed
EC
Saunders
##tches
diamonds
Marriage
##amble
curling
Amazing
##haling
unrelated
##roads
Daughter
cum
discarded
kidney
cliffs
forested
Candy
##lap
authentic
tablet
notation
##nburg
Bulldogs
Callum
Meet
mouths
coated
##xe
Truman
combinations
##mation
Steelers
Fan
Than
paternal
##father
##uti
Rebellion
inviting
Fun
theatres
##ي
##rom
curator
##cision
networking
Oz
drought
##ssel
granting
MBA
Shelby
Elaine
jealousy
Kyoto
shores
signaling
tenants
debated
Intermediate
Wise
##hes
##pu
Havana
duke
vicious
exited
servers
Nonetheless
Reports
explode
##beth
Nationals
offerings
Oval
conferred
eponymous
folklore
##NR
Shire
planting
1783
Zeus
accelerated
Constable
consuming
troubles
McCartney
texture
bust
Immigration
excavated
hopefully
##cession
##coe
##name
##ully
lining
Einstein
Venezuelan
reissued
minorities
Beatrice
crystals
##nies
circus
lava
Beirut
extinction
##shu
Becker
##uke
issuing
Zurich
extract
##esta
##rred
regulate
progression
hut
alcoholic
plea
AB
Norse
Hubert
Mansfield
ashamed
##put
Bombardment
stripes
electrons
Denise
horrified
Nor
arranger
Hay
Koch
##ddling
##iner
Birthday
Josie
deliberate
explorer
##jiang
##signed
Arrow
wiping
satellites
baritone
mobility
##rals
Dorset
turbine
Coffee
185
##lder
Cara
Colts
pits
Crossing
coral
##birth
Tai
zombie
smoothly
##hp
mates
##ady
Marguerite
##tary
puzzled
tapes
overly
Sonic
Prayer
Thinking
##uf
IEEE
obligation
##cliffe
Basil
redesignated
##mmy
nostrils
Barney
XIII
##phones
vacated
unused
Berg
##roid
Towards
viola
136
Event
subdivided
rabbit
recruiting
##nery
Namibia
##16
##ilation
recruits
Famous
Francesca
##hari
Goa
##lat
Karachi
haul
biblical
##cible
MGM
##rta
horsepower
profitable
Grandma
importantly
Martinez
incoming
##kill
beneficial
nominal
praying
##isch
gable
nail
noises
##ttle
Polytechnic
rub
##cope
Thor
audition
erotic
##ending
##iano
Ultimately
armoured
##mum
presently
pedestrian
##tled
Ipswich
offence
##ffin
##borne
Flemish
##hman
echo
##cting
auditorium
gentlemen
winged
##tched
Nicaragua
Unknown
prosperity
exhaust
pie
Peruvian
compartment
heights
disabilities
##pole
Harding
Humphrey
postponed
moths
Mathematical
Mets
posters
axe
##nett
Nights
Typically
chuckle
councillors
alternating
141
Norris
##ately
##etus
deficit
dreaming
cooler
oppose
Beethoven
##esis
Marquis
flashlight
headache
investor
responding
appointments
##shore
Elias
ideals
shades
torch
lingering
##real
pier
fertile
Diploma
currents
Snake
##horse
##15
Briggs
##ota
##hima
##romatic
Coastal
Kuala
ankles
Rae
slice
Hilton
locking
Approximately
Workshop
Niagara
strangely
##scence
functionality
advertisement
Rapid
Anders
ho
Soviets
packing
basal
Sunderland
Permanent
##fting
rack
tying
Lowell
##ncing
Wizard
mighty
tertiary
pencil
dismissal
torso
grasped
##yev
Sand
gossip
##nae
Beer
implementing
##19
##riya
Fork
Bee
##eria
Win
##cid
sailor
pressures
##oping
speculated
Freddie
originating
##DF
##SR
##outh
28th
melt
Brenda
lump
Burlington
USC
marginal
##bine
Dogs
swamp
cu
Ex
uranium
metro
spill
Pietro
seize
Chorus
partition
##dock
##media
engineered
##oria
conclusions
subdivision
##uid
Illustrated
Leading
##hora
Berkshire
definite
##books
##cin
##suke
noun
winced
Doris
dissertation
Wilderness
##quest
braced
arbitrary
kidnapping
Kurdish
##but
clearance
excavations
wanna
Allmusic
insult
presided
yacht
##SM
Honour
Tin
attracting
explosives
Gore
Bride
##ience
Packers
Devils
Observer
##course
Loser
##erry
##hardt
##mble
Cyrillic
undefeated
##stra
subordinate
##ame
Wigan
compulsory
Pauline
Cruise
Opposition
##ods
Period
dispersed
expose
##60
##has
Certain
Clerk
Wolves
##hibition
apparatus
allegiance
orbital
justified
thanked
##ević
Biblical
Carolyn
Graves
##tton
Hercules
backgrounds
replica
1788
aquatic
Mega
Stirling
obstacles
filing
Founder
vowels
Deborah
Rotterdam
surpassed
Belarusian
##ologists
Zambia
Ren
Olga
Alpine
bi
councillor
Oaks
Animals
eliminating
digit
Managing
##GE
laundry
##rdo
presses
slamming
Tudor
thief
posterior
##bas
Rodgers
smells
##ining
Hole
SUV
trombone
numbering
representations
Domingo
Paralympics
cartridge
##rash
Combined
shelves
Kraków
revision
##frame
Sánchez
##tracted
##bler
Alain
townships
sic
trousers
Gibbs
anterior
symmetry
vaguely
Castile
IRA
resembling
Penguin
##ulent
infections
##stant
raped
##pressive
worrying
brains
bending
JR
Evidence
Venetian
complexes
Jonah
850
exported
Ambrose
Gap
philanthropist
##atus
Marxist
weighing
##KO
##nath
Soldiers
chiefs
reject
repeating
shaky
Zürich
preserving
##xin
cigarettes
##break
mortar
##fin
Already
reproduction
socks
Waiting
amazed
##aca
dash
##path
Airborne
##harf
##get
descending
OBE
Sant
Tess
Lucius
enjoys
##ttered
##ivation
##ete
Leinster
Phillies
execute
geological
unfinished
Courts
SP
Beaver
Duck
motions
Platinum
friction
##aud
##bet
Parts
Stade
entirety
sprang
Smithsonian
coffin
prolonged
Borneo
##vise
unanimously
##uchi
Cars
Cassandra
Australians
##CT
##rgen
Louisa
spur
Constance
##lities
Patent
racism
tempo
##ssion
##chard
##nology
##claim
Million
Nichols
##dah
Numerous
ing
Pure
plantations
donor
##EP
##rip
convenience
##plate
dots
indirect
##written
Dong
failures
adapt
wizard
unfortunately
##gion
practitioners
economically
Enrique
unchanged
kingdoms
refined
definitions
lazy
worries
railing
##nay
Kaiser
##lug
cracks
sells
ninety
##WC
Directed
denotes
developmental
papal
unfortunate
disappointing
sixteenth
Jen
##urier
NWA
drifting
Horror
##chemical
behaviors
bury
surfaced
foreigners
slick
AND
##rene
##ditions
##teral
scrap
kicks
comprise
buddy
##anda
Mental
##ype
Dom
wines
Limerick
Luca
Rand
##won
Tomatoes
homage
geometric
##nted
telescope
Shelley
poles
##fan
shareholders
Autonomous
cope
intensified
Genoa
Reformation
grazing
##tern
Zhao
provisional
##bies
Con
##riel
Cynthia
Raleigh
vivid
threaten
Length
subscription
roses
Müller
##isms
robin
##tial
Laos
Stanton
nationalism
##clave
##ND
##17
##zz
staging
Busch
Cindy
relieve
##spective
packs
neglected
CBE
alpine
Evolution
uneasy
coastline
Destiny
Barber
Julio
##tted
informs
unprecedented
Pavilion
##bei
##ference
betrayal
awaiting
leaked
V8
puppet
adverse
Bourne
Sunset
collectors
##glass
##sque
copied
Demon
conceded
resembled
Rafe
Levy
prosecutor
##ject
flora
manned
deaf
Mosque
reminds
Lizzie
Products
Funny
cassette
congress
##rong
Rover
tossing
prompting
chooses
Satellite
cautiously
Reese
##UT
Huang
Gloucestershire
giggled
Kitty
##å
Pleasant
Aye
##ond
judging
1860s
intentionally
Hurling
aggression
##xy
transfers
employing
##fies
##oda
Archibald
Blessed
Ski
flavor
Rosie
##burgh
sunset
Scholarship
WC
surround
ranged
##jay
Degree
Houses
squeezing
limb
premium
Leningrad
steals
##inated
##ssie
madness
vacancy
hydraulic
Northampton
##prise
Marks
Boxing
##fying
academics
##lich
##TY
CDs
##lma
hardcore
monitors
paperback
cables
Dimitri
upside
advent
Ra
##clusive
Aug
Christchurch
objected
stalked
Simple
colonists
##laid
CT
discusses
fellowship
Carnival
cares
Miracle
pastoral
rooted
shortage
borne
Quentin
meditation
tapping
Novel
##ades
Alicia
Burn
famed
residency
Fernández
Johannesburg
Zhu
offended
Mao
outward
##inas
XV
denial
noticing
##ís
quarry
##hound
##amo
Bernie
Bentley
Joanna
mortgage
##rdi
##sumption
lenses
extracted
depiction
##RE
Networks
Broad
Revenue
flickered
virgin
flanked
##о
Enterprises
probable
Liberals
Falcons
drowning
phrases
loads
assumes
inhaled
awe
logs
slightest
spiders
waterfall
##pate
rocking
shrub
##uil
roofs
##gard
prehistoric
wary
##rak
TO
clips
sustain
treason
microphone
voter
Lamb
psychologist
wrinkled
##ères
mating
Carrier
340
##lbert
sensing
##rino
destiny
distract
weaker
UC
Nearly
neurons
spends
Apache
##rem
genuinely
wells
##lanted
stereo
##girl
Lois
Leaving
consul
fungi
Pier
Cyril
80s
Jungle
##tani
illustration
Split
##hana
Abigail
##patrick
1787
diminished
Selected
packaging
##EG
Martínez
communal
Manufacturing
sentiment
143
unwilling
praising
Citation
pills
##iti
##rax
muffled
neatly
workforce
Yep
leisure
Tu
##nding
Wakefield
ancestral
##uki
destructive
seas
Passion
showcase
##ceptive
heroic
142
exhaustion
Customs
##aker
Scholar
sliced
##inian
Direction
##OW
Swansea
aluminium
##eep
ceramic
McCoy
Career
Sector
chartered
Damascus
pictured
Interest
stiffened
Plateau
obsolete
##tant
irritated
inappropriate
overs
##nko
bail
Talent
Sur
ours
##nah
barred
legged
sociology
Bud
dictionary
##luk
Cover
obey
##oring
annoying
##dong
apprentice
Cyrus
Role
##GP
##uns
##bag
Greenland
Porsche
Rocket
##32
organism
##ntary
reliability
##vocation
##й
Found
##hine
motors
promoter
unfair
##oms
##note
distribute
eminent
rails
appealing
chiefly
meaningful
Stephan
##rehension
Consumer
psychiatric
bowler
saints
##iful
##н
1777
Pol
Dorian
Townsend
hastily
##jima
Quincy
Sol
fascinated
Scarlet
alto
Avon
certainty
##eding
Keys
##chu
Chu
##VE
ions
tributaries
Thanksgiving
##fusion
astronomer
oxide
pavilion
Supply
Casa
Bollywood
sadly
mutations
Keller
##wave
nationals
##rgo
##ym
predict
Catholicism
Vega
##eration
##ums
Mali
tuned
Lankan
Plans
radial
Bosnian
Lexi
##14
##ü
sacks
unpleasant
Empty
handles
##taking
Bon
switches
intently
tuition
antique
##jk
fraternity
notebook
Desmond
##sei
prostitution
##how
deed
##OP
501
Somewhere
Rocks
##mons
campaigned
frigate
gases
suppress
##hang
Merlin
Northumberland
dominate
expeditions
thunder
##ups
##rical
Cap
thorough
Ariel
##kind
renewable
constructing
pacing
terrorists
Bowen
documentaries
westward
##lass
##nage
Merchant
##ued
Beaumont
Din
##hian
Danube
peasant
Garrison
encourages
gratitude
reminding
stormed
##ouse
pronunciation
##ailed
Weekend
suggestions
##ffing
##DI
Active
Colombo
##logists
Merrill
##cens
Archaeological
Medina
captained
##yk
duel
cracking
Wilkinson
Guam
pickup
renovations
##ël
##izer
delighted
##iri
Weaver
##ctional
tens
##hab
Clint
##usion
##each
petals
Farrell
##sable
caste
##will
Ezra
##qi
##standing
thrilled
ambush
exhaled
##SU
Resource
blur
forearm
specifications
contingent
cafe
##iology
Antony
fundraising
grape
##rgy
turnout
##udi
Clifton
laboratories
Irvine
##opus
##lid
Monthly
Bihar
statutory
Roses
Emil
##rig
lumber
optimal
##DR
pumps
plaster
Mozambique
##aco
nightclub
propelled
##hun
ked
surplus
wax
##urai
pioneered
Sunny
imprint
Forget
Eliot
approximate
patronage
##bek
##ely
##mbe
Partnership
curl
snapping
29th
Patriarch
##jord
seldom
##ature
astronomy
Bremen
XIV
airborne
205
1778
recognizing
stranded
arrogant
bombardment
destined
ensured
146
robust
Davenport
Interactive
Offensive
Fi
prevents
probe
propeller
sorrow
Blade
mounting
automotive
##dged
wallet
201
lashes
Forrest
##ift
Cell
Younger
shouts
##cki
folds
##chet
Epic
yields
homosexual
tunes
##minate
##text
Manny
chemist
hindwings
##urn
pilgrimage
##sfield
##riff
MLS
##rive
Huntington
translates
Path
slim
##ndra
##oz
climax
commuter
desperation
##reet
denying
##rious
daring
seminary
polo
##clamation
Teatro
Torah
Cats
identities
Poles
photographed
fiery
popularly
##cross
winters
Hesse
##vio
Nurse
Senegal
Salon
prescribed
justify
##gues
##и
##orted
HQ
##hiro
evaluated
momentarily
##unts
Debbie
##licity
##TP
Mighty
Rabbit
##chal
Events
Savoy
##ht
Brandenburg
Bordeaux
##laus
Release
##IE
##kowski
1900s
SK
Strauss
##aly
Sonia
Updated
synagogue
McKay
flattened
370
clutch
contests
toast
evaluate
pope
heirs
jam
tutor
reverted
##ading
nonsense
hesitate
Lars
Ceylon
Laurie
##guchi
accordingly
customary
148
Ethics
Multiple
instincts
IGN
##ä
bullshit
##hit
##par
desirable
##ducing
##yam
alias
ashore
licenses
##lification
misery
147
Cola
assassinated
fiercely
##aft
las
goat
substrate
lords
Cass
Bridges
ICC
lasts
sights
reproductive
##asi
Ivory
Clean
fixing
##lace
seeming
aide
1850s
harassment
##FF
##LE
reasonably
##coat
##cano
NYC
1784
Fifty
immunity
Canadians
Cheng
comforting
meanwhile
##tera
##blin
breeds
glowed
##vour
Aden
##verted
##aded
##oral
neat
enforced
poisoning
##ews
##hone
enforce
predecessors
survivor
Month
unfamiliar
pierced
waived
dump
responds
Mai
Declan
angular
Doesn
interpretations
##yar
invest
Dhaka
policeman
Congregation
Eighth
painfully
##este
##vior
Württemberg
##cles
blockade
encouragement
##fie
Caucasus
Malone
Universidad
utilize
Nissan
inherent
151
agreeing
syllable
determines
Protocol
conclude
##gara
40th
Xu
Taiwanese
##ather
boiler
printer
Lacey
titular
Klaus
Fallon
Wembley
fox
Chandra
Governorate
obsessed
##Ps
micro
##25
Cooke
gymnasium
weaving
Shall
Hussein
glaring
softball
Reader
Dominion
Trouble
varsity
Cooperation
Chaos
Kang
Kramer
Eisenhower
proves
Connie
consortium
governors
Bethany
opener
Normally
Willy
linebacker
Regent
Used
AllMusic
Twilight
##shaw
Companion
Tribunal
simpler
##gam
Experimental
Slovenian
cellar
deadline
trout
Hubbard
ads
idol
##hetto
Granada
clues
salmon
1700
Omega
Caldwell
softened
Bills
Honolulu
##gn
Terrace
suitcase
##IL
frantic
##oons
Abbot
Sitting
Fortress
Riders
sickness
enzymes
trustee
Bern
forged
##13
##ruff
##rl
##versity
inspector
champagne
##held
##FI
hereditary
Taliban
handball
##wine
Sioux
##dicated
honoured
139
##tude
Skye
meanings
##rkin
cardiac
analyzed
vegetable
##FS
Royals
dial
freelance
##fest
partisan
petroleum
ridden
Lincolnshire
panting
##comb
presidents
Haley
##chs
contributes
Jew
discoveries
panicked
Woody
eyelids
Fate
Tulsa
mg
whiskey
zombies
Wii
##udge
investigators
##bull
centred
##screen
Bone
Lana
##oise
forts
##ske
Conan
Lyons
##writing
SH
##ride
rhythmic
154
##llah
pioneers
##bright
captivity
Sanchez
Oman
##mith
Flint
Platform
##ioned
emission
packet
Persia
##formed
takeover
tempted
Vance
Few
Toni
receptions
##ن
exchanges
Camille
whale
Chronicles
##rent
##ushing
##rift
Alto
Genus
##asing
onward
foremost
longing
Rockefeller
containers
##cribe
intercepted
##olt
pleading
Bye
bee
##umbling
153
undertake
Izzy
cheaper
Ultra
validity
##pse
Sa
hovering
##pert
vintage
engraved
##rise
farmland
##ever
##ifier
Atlantis
propose
Catalonia
plunged
##edly
demonstrates
gig
##cover
156
Osborne
cowboy
herd
investigator
loops
Burning
rests
Instrumental
embarrassing
focal
install
readings
swirling
Chatham
parameter
##zin
##holders
Mandarin
Moody
converting
Escape
warnings
##chester
incarnation
##ophone
adopting
##lins
Cromwell
##laws
Axis
Verde
Kappa
Schwartz
Serbs
caliber
Wanna
Chung
##ality
nursery
principally
Bulletin
likelihood
logging
##erty
Boyle
supportive
twitched
##usive
builds
Marseille
omitted
motif
Lands
##lusion
##ssed
Barrow
Airfield
Harmony
WWF
endured
merging
convey
branding
examinations
167
Italians
##dh
dude
1781
##teau
crawling
thoughtful
clasped
concluding
brewery
Moldova
Wan
Towers
Heidelberg
202
##ict
Lagos
imposing
##eval
##serve
Bacon
frowning
thirteenth
conception
calculations
##ович
##mile
##ivated
mutation
strap
##lund
demographic
nude
perfection
stocks
##renched
##dit
Alejandro
bites
fragment
##hack
##rchy
GB
Surgery
Berger
punish
boiling
consume
Elle
Sid
Dome
relies
Crescent
treasurer
Bloody
1758
upheld
Guess
Restaurant
signatures
font
millennium
mural
stakes
Abel
hailed
insists
Alumni
Breton
##jun
digits
##FM
##thal
Talking
motive
reigning
babe
masks
##ø
Shaun
potato
sour
whitish
Somali
##derman
##rab
##wy
chancel
telecommunications
Noise
messenger
tidal
grinding
##ogenic
Rebel
constituent
peripheral
recruitment
##ograph
##tler
pumped
Ravi
poked
##gley
Olive
diabetes
discs
liking
sting
fits
stir
Mari
Sega
creativity
weights
Macau
mandated
Bohemia
disastrous
Katrina
Baku
Rajasthan
waiter
##psis
Siberia
verbs
##truction
patented
1782
##ndon
Relegated
Hunters
Greenwood
Shock
accusing
skipped
Sessions
markers
subset
monumental
Viola
comparative
Alright
Barbados
setup
Session
standardized
##ík
##sket
appoint
AFB
Nationalist
##WS
Troop
leaped
Treasure
goodness
weary
originates
100th
compassion
expresses
recommend
168
composing
seventeenth
Tex
Atlético
bald
Finding
Presidency
Sharks
favoured
inactive
##lter
suffix
princes
brighter
##ctus
classics
defendants
culminated
terribly
Strategy
evenings
##ção
##iver
##urance
absorb
##rner
Territories
RBI
soothing
Martín
concurrently
##tr
Nicholson
fibers
swam
##oney
Allie
Algerian
Dartmouth
Mafia
##bos
##tts
Councillor
vocabulary
##bla
##lé
intending
##dler
Guerrero
sunshine
pedal
##TO
administrators
periodic
scholarships
Loop
Madeline
exaggerated
##ressed
Regan
##cellular
Explorer
##oids
Alexandre
vows
Reporter
Unable
Average
absorption
##bedience
Fortunately
Auxiliary
Grandpa
##HP
##ovo
potent
temporal
adrenaline
##udo
confusing
guiding
Dry
qualifications
joking
wherein
heavyweight
##ices
nightmares
pharmaceutical
Commanding
##aled
##ove
Gregor
##UP
censorship
degradation
glorious
Austro
##rench
380
Miriam
sped
##orous
offset
##KA
fined
specialists
Pune
João
##dina
propped
fungus
##ς
frantically
Gabrielle
Hare
committing
##plied
Ask
Wilmington
stunt
numb
warmer
preacher
earnings
##lating
integer
##ija
federation
homosexuality
##cademia
epidemic
grumbled
shoving
Milk
Satan
Tobias
innovations
##dington
geology
memoirs
##IR
spared
culminating
Daphne
Focus
severed
stricken
Paige
Mans
flats
Russo
communes
litigation
strengthening
##powered
Staffordshire
Wiltshire
Painting
Watkins
##د
specializes
Select
##rane
##aver
Fulton
playable
##VN
openings
sampling
##coon
##21
Allah
travelers
allocation
##arily
Loch
##hm
commentators
fulfilled
##troke
Emeritus
Vanderbilt
Vijay
pledged
##tative
diagram
drilling
##MD
##plain
Edison
productivity
31st
##rying
##ption
##gano
##oration
##bara
posture
bothering
platoon
politely
##inating
redevelopment
Job
##vale
stark
incorrect
Mansion
renewal
threatens
Bahamas
fridge
##tata
Uzbekistan
##edia
Sainte
##mio
gaps
neural
##storm
overturned
Preservation
shields
##ngo
##physics
ah
gradual
killings
##anza
consultation
premiership
Felipe
coincidence
##ène
##any
Handbook
##loaded
Edit
Guns
arguably
##ş
compressed
depict
seller
##qui
Kilkenny
##kling
Olympia
librarian
##acles
dramas
JP
Kit
Maj
##lists
proprietary
##nged
##ettes
##tok
exceeding
Lock
induction
numerical
##vist
Straight
foyer
imaginary
##pop
violinist
Carla
bouncing
##ashi
abolition
##uction
restoring
scenic
##č
Doom
overthrow
para
##vid
##ughty
Concord
HC
cocaine
deputies
##aul
visibility
##wart
Kapoor
Hutchinson
##agan
flashes
kn
decreasing
##ronology
quotes
vain
satisfying
##iam
##linger
310
Hanson
fauna
##zawa
##rrel
Trenton
##VB
Employment
vocational
Exactly
bartender
butterflies
tow
##chers
##ocks
pigs
merchandise
##game
##pine
Shea
##gration
Connell
Josephine
monopoly
##dled
Cobb
warships
cancellation
someday
stove
##Cs
candidacy
superhero
unrest
Toulouse
admiration
undergone
whirled
Reconnaissance
costly
##ships
290
Cafe
amber
Tory
##mpt
definitive
##dress
proposes
redesigned
acceleration
##asa
##raphy
Presley
exits
Languages
##cel
Mode
spokesperson
##tius
Ban
forthcoming
grounded
ACC
compelling
logistics
retailers
abused
##gating
soda
##yland
##lution
Landmark
XVI
blush
##tem
hurling
dread
Tobago
Foley
##uad
scenarios
##mentation
##rks
Score
fatigue
hairy
correspond
##iard
defences
confiscated
##rudence
1785
Formerly
Shot
advertised
460
Text
ridges
Promise
Dev
exclusion
NHS
tuberculosis
rockets
##offs
sparkling
256
disappears
mankind
##hore
HP
##omo
taxation
Multi
DS
Virgil
##ams
Dell
stacked
guessing
Jump
Nope
cheer
hates
ballots
overlooked
analyses
Prevention
maturity
dos
##cards
##lect
Mare
##yssa
Petty
##wning
differing
iOS
##ior
Joachim
Sentinel
##nstein
90s
Pamela
480
Asher
##lary
Vicente
landings
portray
##rda
##xley
Virtual
##uary
finances
Jain
Somebody
Tri
behave
Michele
##ider
dwellings
FAA
Gallagher
##lide
Monkey
195
aforementioned
##rism
##bey
##kim
##puted
Mesa
hopped
unopposed
recipients
Reality
Been
gritted
149
playground
pillar
##rone
Guinness
##tad
Théâtre
depended
Tipperary
Reuben
frightening
wooded
Target
globally
##uted
Morales
Baptiste
drunken
Institut
characterised
##chemistry
Strip
discrete
Premiership
##zzling
gazing
Outer
##quisition
Sikh
Booker
##yal
contemporaries
Jericho
##chan
##physical
##witch
Militia
##rez
##zard
dangers
##utter
##₀
Programs
darling
participates
railroads
##ienne
behavioral
bureau
##rook
161
Hicks
##rises
Comes
inflicted
bees
kindness
norm
##ković
generators
##pard
##omy
##ili
methodology
Alvin
façade
latitude
##plified
DE
Morse
##mered
educate
intersects
##MF
##cz
##vated
AL
##graded
##fill
constitutes
artery
feudal
avant
cautious
##ogue
immigrated
##chenko
Saul
Clinic
Fang
choke
Cornelius
flexibility
temperate
pins
##erson
oddly
inequality
157
Natasha
Sal
##uter
215
aft
blinking
##ntino
northward
Exposition
cookies
Wedding
impulse
Overseas
terrifying
##ough
Mortimer
##see
440
https
og
imagining
##cars
Nicola
exceptionally
threads
##cup
Oswald
Provisional
dismantled
deserves
1786
Fairy
discourse
Counsel
departing
Arc
guarding
##orse
420
alterations
vibrant
Em
squinted
terrace
rowing
Led
accessories
SF
Sgt
cheating
Atomic
##raj
Blackpool
##iary
boarded
substituted
bestowed
lime
kernel
##jah
Belmont
shaken
sticky
retrospective
Louie
migrants
weigh
sunglasses
thumbs
##hoff
excavation
##nks
Extra
Polo
motives
Drum
infrared
tastes
berth
verge
##stand
programmed
warmed
Shankar
Titan
chromosome
cafeteria
dividing
pepper
CPU
Stevie
satirical
Nagar
scowled
Died
backyard
##gata
##reath
##bir
Governors
portraying
##yah
Revenge
##acing
1772
margins
Bahn
OH
lowland
##razed
catcher
replay
##yoshi
Seriously
##licit
Aristotle
##ald
Habsburg
weekday
Secretariat
CO
##dly
##joy
##stad
litre
ultra
##cke
Mongol
Tucson
correlation
compose
traps
Groups
Hai
Salvatore
##dea
cents
##eese
concession
clash
Trip
Panzer
Moroccan
cruisers
torque
Ba
grossed
##arate
restriction
concentrating
FDA
##Leod
##ones
Scholars
##esi
throbbing
specialised
##heses
Chicken
##fia
##ificant
Erich
Residence
##trate
manipulation
namesake
##tom
Hoover
cue
Lindsey
Lonely
275
##HT
combustion
subscribers
Punjabi
respects
Jeremiah
penned
##gor
##rilla
suppression
##tration
Crimson
piston
Derry
crimson
lyrical
oversee
portrays
CF
Districts
Lenin
Cora
searches
clans
VHS
##hel
Jacqueline
Redskins
Clubs
desktop
indirectly
alternatives
marijuana
suffrage
##smos
Irwin
##liff
Process
##hawks
Sloane
##bson
Sonata
yielded
Flores
##ares
armament
adaptations
integrate
neighbours
shelters
##tour
Skinner
##jet
##tations
1774
Peterborough
##elles
ripping
Liang
Dickinson
charities
Rwanda
monasteries
crossover
racist
barked
guerrilla
##ivate
Grayson
##iques
##vious
##got
Rolls
denominations
atom
affinity
##delity
Wish
##inted
##inae
interrogation
##cey
##erina
##lifting
192
Sands
1779
mast
Likewise
##hyl
##oft
contempt
##por
assaulted
fills
establishments
Mal
consulted
##omi
##sight
greet
##roma
##egan
Pulitzer
##rried
##dius
##ractical
##voked
Hasan
CB
##zzy
Romanesque
Panic
wheeled
recorder
##tters
##warm
##gly
botanist
Balkan
Lockheed
Polly
farewell
suffers
purchases
Eaton
##80
Quick
commenting
Saga
beasts
hides
motifs
##icks
Alonso
Springer
Wikipedia
circulated
encoding
jurisdictions
snout
UAE
Integrated
unmarried
Heinz
##lein
##figured
deleted
##tley
Zen
Cycling
Fuel
Scandinavian
##rants
Conner
reef
Marino
curiously
lingered
Gina
manners
activism
Mines
Expo
Micah
promotions
Server
booked
derivatives
eastward
detailing
reelection
##chase
182
Campeonato
Po
158
Peel
winger
##itch
canyon
##pit
LDS
A1
##shin
Giorgio
pathetic
##rga
##mist
Aren
##lag
confronts
motel
textbook
shine
turbines
1770
Darcy
##cot
Southeastern
##lessness
Banner
recognise
stray
Kitchen
paperwork
realism
Chrysler
filmmakers
fishermen
##hetic
variously
Vishnu
fiddle
Eddy
Origin
##tec
##ulin
Flames
Rs
bankrupt
Extreme
Pomeranian
##emption
ratified
##iu
jockey
Stratford
##ivating
##oire
Babylon
pardon
AI
affordable
deities
disturbance
Trying
##sai
Ida
Papers
advancement
70s
archbishop
Luftwaffe
announces
tugging
##lphin
##sistence
##eel
##ishes
ambition
aura
##fled
##lected
##vue
Prasad
boiled
clarity
Violin
investigative
routing
Yankee
##uckle
McMahon
bugs
eruption
##rooms
Minutes
relics
##ckle
##nse
sipped
valves
weakly
##ital
Middleton
collided
##quer
bamboo
insignia
Tyne
exercised
Ninth
echoing
polynomial
considerations
lunged
##bius
objections
complain
disguised
plaza
##VC
institutes
Judicial
ascent
imminent
Waterford
hello
Lumpur
Niger
Goldman
vendors
Kensington
Wren
browser
##bner
##tri
##mize
##pis
##lea
Cheyenne
Bold
Settlement
Hollow
Paralympic
axle
##toire
##actic
impose
perched
utilizing
slips
Benz
Michaels
manipulate
Chiang
##mian
Dolphins
prohibition
attacker
ecology
Estadio
##SB
##uild
attracts
recalls
glacier
lad
##rima
Barlow
kHz
melodic
##aby
##iracy
assumptions
Cornish
##aru
DOS
Maddie
##mers
lyric
Luton
nm
##tron
Reno
Fin
YOU
Broadcast
Finch
sensory
##bent
Jeep
##uman
additionally
Buildings
businessmen
treaties
235
Stranger
gateway
Charlton
accomplishments
Diary
apologized
zinc
histories
supplier
##tting
162
asphalt
Treatment
Abbas
##pating
##yres
Bloom
sedan
soloist
##cum
antagonist
denounced
Fairfax
##aving
##enko
noticeable
Budget
Buckingham
Snyder
retreating
Jai
spoon
invading
giggle
woven
gunfire
arrests
##vered
##come
respiratory
violet
##aws
Byrd
shocking
tenant
Jamaican
Ottomans
Seal
theirs
##isse
##48
cooperate
peering
##nius
163
Composer
organist
Mongolian
Bauer
Spy
collects
prophecy
congregations
##moor
Brick
calculation
fixtures
exempt
##dden
Ada
Thousand
##lue
tracing
##achi
bodyguard
vicar
supplying
Łódź
interception
monitored
##heart
Paso
overlap
annoyance
##dice
yellowish
stables
elders
illegally
honesty
##oar
skinny
spinal
##puram
Bourbon
##cor
flourished
Medium
##stics
##aba
Follow
##ckey
stationary
##scription
dresser
scrutiny
Buckley
Clearly
##SF
Lyrics
##heimer
drying
Oracle
internally
rains
##last
Enemy
##oes
McLean
Ole
phosphate
Rosario
Rifles
##mium
battered
Pepper
Presidents
conquer
Château
castles
##aldo
##ulf
Depending
Lesser
Boom
trades
Peyton
164
emphasize
accustomed
SM
Ai
Classification
##mins
##35
##rons
leak
piled
deeds
lush
##self
beginnings
breathless
1660
McGill
##ago
##chaft
##gies
humour
Bomb
securities
Might
##zone
##eves
Matthias
Movies
Levine
vengeance
##ads
Challenger
Misty
Traditionally
constellation
##rass
deepest
workplace
##oof
##vina
impatient
##ML
Mughal
Alessandro
scenery
Slater
postseason
troupe
##ń
Volunteers
Facility
militants
Reggie
sanctions
Expeditionary
Nam
countered
interpret
Basilica
coding
expectation
Duffy
def
Tong
wakes
Bowling
Vehicle
Adler
salad
intricate
stronghold
medley
##uries
##bur
joints
##rac
##yx
##IO
Ordnance
Welch
distributor
Ark
cavern
trench
Weiss
Mauritius
decreases
docks
eagerly
irritation
Matilda
biographer
Visiting
##marked
##iter
##ear
##gong
Moreno
attendant
Bury
instrumentation
theologian
clit
nuns
symphony
translate
375
loser
##user
##VR
##meter
##orious
harmful
##yuki
Commissioners
Mendoza
sniffed
Hulk
##dded
##ulator
##nz
Donnell
##eka
deported
Met
SD
Aerospace
##cultural
##odes
Fantastic
cavity
remark
emblem
fearing
##iance
ICAO
Liberia
stab
##yd
Pac
Gymnasium
IS
Everton
##vanna
mantle
##ief
Ramon
##genic
Shooting
Smoke
Random
Africans
MB
tavern
bargain
voluntarily
Ion
Peoples
Rusty
attackers
Patton
sins
##cake
Hat
moderately
##hala
##alia
requesting
mechanic
##eae
Seine
Robbins
##ulum
susceptible
Bravo
Slade
Strasbourg
rubble
entrusted
Creation
##amp
smoothed
##uintet
evenly
reviewers
skip
Sculpture
177
Rough
##rrie
Reeves
##cede
Administrator
garde
minus
carriages
grenade
Ninja
fuscous
##kley
Punk
contributors
Aragon
Tottenham
##cca
##sir
VA
laced
dealers
##sonic
crisp
harmonica
Artistic
Butch
Andes
Farmers
corridors
unseen
##tium
Countries
Lone
envisioned
Katy
##lang
##cc
Quarterly
##neck
consort
##aceae
bidding
Corey
concurrent
##acts
##gum
Highness
##lient
##rators
arising
##unta
pathways
49ers
bolted
complaining
ecosystem
libretto
Ser
narrated
212
Soft
influx
##dder
incorporation
plagued
tents
##ddled
1750
Risk
citation
Tomas
hostilities
seals
Bruins
Dominique
attic
competent
##UR
##cci
hugging
Breuning
bacterial
Shrewsbury
vowed
eh
elongated
hangs
render
centimeters
##ficient
Mu
turtle
besieged
##gaard
grapes
bravery
collaborations
deprived
##amine
##using
##gins
arid
##uve
coats
hanged
##sting
Pa
prefix
##ranged
Exit
Chain
Flood
Materials
suspicions
##ö
hovered
Hidden
##state
Malawi
##24
Mandy
norms
fascinating
airlines
delivers
##rust
Cretaceous
spanned
pillows
##onomy
jar
##kka
regent
fireworks
morality
discomfort
lure
uneven
##jack
Lucian
171
archaeology
##til
mornings
Billie
Marquess
impending
spilling
tombs
##volved
Celia
Coke
underside
##bation
Vaughn
Daytona
Godfrey
Pascal
Alien
##sign
172
##lage
iPhone
Gonna
genocide
##rber
oven
endure
dashed
simultaneous
##phism
Wally
##rō
ants
predator
reissue
##aper
Speech
funk
Rudy
claw
Hindus
Numbers
Bing
lantern
##aurus
scattering
poisoned
##active
Andrei
algebraic
baseman
##ritz
Gregg
##cola
selections
##putation
lick
Laguna
##IX
Sumatra
Warning
turf
buyers
Burgess
Oldham
exploit
worm
initiate
strapped
tuning
filters
haze
##е
##ledge
##ydro
##culture
amendments
Promotion
##union
Clair
##uria
petty
shutting
##eveloped
Phoebe
Zeke
conducts
grains
clashes
##latter
illegitimate
willingly
Deer
Lakers
Reference
chaplain
commitments
interrupt
salvation
Panther
Qualifying
Assessment
cancel
efficiently
attorneys
Dynamo
impress
accession
clinging
randomly
reviewing
Romero
Cathy
charting
clapped
rebranded
Azerbaijani
coma
indicator
punches
##tons
Sami
monastic
prospects
Pastor
##rville
electrified
##CI
##utical
tumbled
Chef
muzzle
selecting
UP
Wheel
protocols
##tat
Extended
beautifully
nests
##stal
Andersen
##anu
##³
##rini
kneeling
##reis
##xia
anatomy
dusty
Safe
turmoil
Bianca
##elo
analyze
##ر
##eran
podcast
Slovene
Locke
Rue
##retta
##uni
Person
Prophet
crooked
disagreed
Versailles
Sarajevo
Utrecht
##ogen
chewing
##ception
##iidae
Missile
attribute
majors
Arch
intellectuals
##andra
ideological
Cory
Salzburg
##fair
Lot
electromagnetic
Distribution
##oper
##pered
Russ
Terra
repeats
fluttered
Riga
##ific
##gt
cows
Hair
labelled
protects
Gale
Personnel
Düsseldorf
Moran
rematch
##OE
Slow
forgiveness
##ssi
proudly
Macmillan
insist
undoubtedly
Québec
Violence
##yuan
##aine
mourning
linen
accidental
##iol
##arium
grossing
lattice
maneuver
##marine
prestige
petrol
gradient
invasive
militant
Galerie
widening
##aman
##quist
disagreement
##ales
creepy
remembers
buzz
##erial
Exempt
Dirk
mon
Addison
##inen
deposed
##agon
fifteenth
Hang
ornate
slab
##lades
Fountain
contractors
das
Warwickshire
1763
##rc
Carly
Essays
Indy
Ligue
greenhouse
slit
##sea
chewed
wink
##azi
Playhouse
##kon
Gram
Ko
Samson
creators
revive
##rians
spawned
seminars
Craft
Tall
diverted
assistants
computational
enclosure
##acity
Coca
##eve
databases
Drop
##loading
##hage
Greco
Privy
entrances
pork
prospective
Memories
robes
##market
transporting
##lik
Rudolph
Horton
visually
##uay
##nja
Centro
Tor
Howell
##rsey
admitting
postgraduate
herbs
##att
Chin
Rutherford
##bot
##etta
Seasons
explanations
##bery
Friedman
heap
##ryl
##sberg
jaws
##agh
Choi
Killing
Fanny
##suming
##hawk
hopeful
##aid
Monty
gum
remarkably
Secrets
disco
harp
advise
##avia
Marathi
##cycle
Truck
abbot
sincere
urine
##mology
masked
bathing
##tun
Fellows
##TM
##gnetic
owl
##jon
hymn
##leton
208
hostility
##cée
baked
Bottom
##AB
shudder
##ater
##von
##hee
reorganization
Cycle
##phs
Lex
##style
##rms
Translation
##erick
##imeter
##ière
attested
Hillary
##DM
gal
wander
Salle
##laming
Perez
Pit
##LP
USAF
contexts
Disease
blazing
aroused
razor
walled
Danielle
Mont
Funk
royalty
thee
203
donors
##erton
famously
processors
reassigned
welcoming
Goldberg
##quities
undisclosed
Orient
Patty
vaccine
refrigerator
Cypriot
consonant
##waters
176
sober
##lement
Racecourse
##uate
Luckily
Selection
conceptual
vines
Breaking
wa
lions
oversight
sheltered
Dancer
ponds
borrow
##BB
##pulsion
Daly
##eek
fertility
spontaneous
Worldwide
gasping
##tino
169
ABS
Vickers
ambient
energetic
prisons
##eson
Stacy
##roach
GmbH
Afro
Marin
farmhouse
pinched
##cursion
##sp
Sabine
##pire
181
nak
swelling
humble
perfume
##balls
Rai
cannons
##taker
Married
Maltese
canals
interceptions
hats
lever
slowing
##ppy
Nike
Silas
Scarborough
skirts
166
inauguration
Shuttle
alloy
beads
belts
Compton
Cause
battling
critique
surf
Dock
roommate
##ulet
invade
Garland
##slow
nutrition
persona
##zam
Wichita
acquaintance
coincided
##cate
Dracula
clamped
##gau
overhaul
##broken
##rrier
melodies
ventures
Paz
convex
Roots
##holding
Tribute
transgender
##ò
chimney
##riad
Ajax
Thereafter
messed
nowadays
pH
##100
##alog
Pomerania
##yra
Rossi
glove
##TL
Races
##asily
tablets
Jase
##ttes
diner
##rns
Hu
Mohan
anytime
weighted
remixes
Dove
cherry
imports
##urity
GA
##TT
##iated
##sford
Clarkson
evidently
rugged
Dust
siding
##ometer
acquitted
choral
##mite
infants
Domenico
gallons
Atkinson
gestures
slated
##xa
Archaeology
unwanted
##ibes
##duced
premise
Colby
Geelong
disqualified
##pf
##voking
simplicity
Walkover
Qaeda
Warden
##bourg
##ān
Invasion
Babe
harness
183
##tated
maze
Burt
bedrooms
##nsley
Horizon
##oast
minimize
peeked
MLA
Trains
tractor
nudged
##iform
Growth
Benton
separates
##about
##kari
buffer
anthropology
brigades
foil
##wu
Domain
licking
whore
##rage
##sham
Initial
Courthouse
Rutgers
dams
villains
supermarket
##brush
Brunei
Palermo
arises
Passenger
outreach
##gill
Labrador
McLaren
##uy
Lori
##fires
Heads
magistrate
¹⁄₂
Weapons
##wai
##roke
projecting
##ulates
bordering
McKenzie
Pavel
midway
Guangzhou
streamed
racer
##lished
eccentric
spectral
206
##mism
Wilde
Grange
preparatory
lent
##tam
starving
Gertrude
##cea
##ricted
Breakfast
Mira
blurted
derive
##lair
blunt
sob
Cheltenham
Henrik
reinstated
intends
##istan
unite
##ector
playful
sparks
mapped
Cadet
luggage
prosperous
##ein
salon
##utes
Biological
##rland
Tyrone
buyer
##lose
amounted
Saw
smirked
Ronan
Reviews
Adele
trait
##proof
Bhutan
Ginger
##junct
digitally
stirring
##isted
coconut
Hamlet
Dinner
Scale
pledge
##RP
Wrong
Goal
Panel
therapeutic
elevations
infectious
priesthood
##inda
Guyana
diagnostic
##mbre
Blackwell
sails
##arm
literal
periodically
gleaming
Robot
Rector
##abulous
##tres
Reaching
Romantic
CP
Wonderful
##tur
ornamental
##nges
traitor
##zilla
genetics
mentioning
##eim
resonance
Areas
Shopping
##nard
Gail
Solid
##rito
##mara
Willem
Chip
Matches
Volkswagen
obstacle
Organ
invites
Coral
attain
##anus
##dates
Midway
shuffled
Cecilia
dessert
Gateway
Ch
Napoleonic
Petroleum
jets
goose
striped
bowls
vibration
Sims
nickel
Thirteen
problematic
intervene
##grading
##unds
Mum
semifinal
Radical
##izations
refurbished
##sation
##harine
Maximilian
cites
Advocate
Potomac
surged
preserves
Curry
angled
ordination
##pad
Cade
##DE
##sko
researched
torpedoes
Resident
wetlands
hay
applicants
depart
Bernstein
##pic
##ario
##rae
favourable
##wari
##р
metabolism
nobleman
Defaulted
calculate
ignition
Celebrity
Belize
sulfur
Flat
Sc
USB
flicker
Hertfordshire
Sept
CFL
Pasadena
Saturdays
Titus
##nir
Canary
Computing
Isaiah
##mler
formidable
pulp
orchid
Called
Solutions
kilograms
steamer
##hil
Doncaster
successors
Stokes
Holstein
##sius
sperm
API
Rogue
instability
Acoustic
##rag
159
undercover
Wouldn
##pra
##medical
Eliminated
honorable
##chel
denomination
abrupt
Buffy
blouse
fi
Regardless
Subsequent
##rdes
Lover
##tford
bacon
##emia
carving
##cripts
Massacre
Ramos
Latter
##ulp
ballroom
##gement
richest
bruises
Rest
Wiley
##aster
explosions
##lastic
Edo
##LD
Mir
choking
disgusted
faintly
Barracks
blasted
headlights
Tours
ensued
presentations
##cale
wrought
##oat
##coa
Quaker
##sdale
recipe
##gny
corpses
##liance
comfortably
##wat
Landscape
niche
catalyst
##leader
Securities
messy
##RL
Rodrigo
backdrop
##opping
treats
Emilio
Anand
bilateral
meadow
VC
socialism
##grad
clinics
##itating
##ppe
##ymphonic
seniors
Advisor
Armoured
Method
Alley
##orio
Sad
fueled
raided
Axel
NH
rushes
Dixie
Otis
wrecked
##22
capitalism
café
##bbe
##pion
##forcing
Aubrey
Lublin
Whenever
Sears
Scheme
##lana
Meadows
treatise
##RI
##ustic
sacrifices
sustainability
Biography
mystical
Wanted
multiplayer
Applications
disliked
##tisfied
impaired
empirical
forgetting
Fairfield
Sunni
blurred
Growing
Avalon
coil
Camera
Skin
bruised
terminals
##fted
##roving
Commando
##hya
##sper
reservations
needles
dangling
##rsch
##rsten
##spect
##mbs
yoga
regretted
Bliss
Orion
Rufus
glucose
Olsen
autobiographical
##dened
222
humidity
Shan
##ifiable
supper
##rou
flare
##MO
campaigning
descend
socio
declares
Mounted
Gracie
Arte
endurance
##ety
Copper
costa
airplay
##MB
Proceedings
dislike
grimaced
occupants
births
glacial
oblivious
cans
installment
muddy
##ł
captains
pneumonia
Quiet
Sloan
Excuse
##nine
Geography
gymnastics
multimedia
drains
Anthology
Gear
cylindrical
Fry
undertaking
##pler
##tility
Nan
##recht
Dub
philosophers
piss
Atari
##pha
Galicia
México
##nking
Continuing
bump
graveyard
persisted
Shrine
##erapy
defects
Advance
Bomber
##oil
##ffling
cheerful
##lix
scrub
##eto
awkwardly
collaborator
fencing
##alo
prophet
Croix
coughed
##lication
roadway
slaughter
elephants
##erated
Simpsons
vulnerability
ivory
Birth
lizard
scarce
cylinders
fortunes
##NL
Hate
Priory
##lai
McBride
##copy
Lenny
liaison
Triangle
coronation
sampled
savage
amidst
Grady
whatsoever
instinctively
Reconstruction
insides
seizure
Drawing
##rlin
Antioch
Gao
Díaz
1760
Sparks
##tien
##bidae
rehearsal
##bbs
botanical
##hers
compensate
wholesale
Seville
shareholder
prediction
astronomical
Reddy
hardest
circling
whereabouts
termination
Rep
Assistance
Dramatic
Herb
##ghter
climbs
188
Poole
301
##pable
wit
##istice
Walters
relying
Jakob
##redo
proceeding
Langley
affiliates
ou
##allo
##holm
Samsung
##ishi
Missing
Xi
vertices
Claus
foam
restless
##uating
##sso
##ttering
Philips
delta
bombed
Catalogue
coaster
Ling
Willard
satire
410
Composition
Net
Orioles
##ldon
fins
Palatinate
Woodward
tease
tilt
brightness
##70
##bbling
##loss
##dhi
##uilt
Whoever
##yers
hitter
Elton
Extension
ace
Affair
restructuring
##loping
Paterson
hi
##rya
spouse
Shay
Himself
piles
preaching
##gical
bikes
Brave
expulsion
Mirza
stride
Trees
commemorated
famine
masonry
Selena
Watt
Banking
Rancho
Stockton
dip
tattoos
Vlad
acquainted
Flyers
ruthless
fourteenth
illustrate
##akes
EPA
##rows
##uiz
bumped
Designed
Leaders
mastered
Manfred
swirled
McCain
##rout
Artemis
rabbi
flinched
upgrades
penetrate
shipyard
transforming
caretaker
##eiro
Maureen
tightening
##founded
RAM
##icular
##mper
##rung
Fifteen
exploited
consistency
interstate
##ynn
Bridget
contamination
Mistress
##rup
coating
##FP
##jective
Libyan
211
Gemma
dependence
shrubs
##ggled
Germain
retaliation
traction
##PP
Dangerous
terminology
psychiatrist
##garten
hurdles
Natal
wasting
Weir
revolves
stripe
##reased
preferences
##entation
##lde
##áil
##otherapy
Flame
##ologies
viruses
Label
Pandora
veil
##ogical
Coliseum
Cottage
creeping
Jong
lectured
##çaise
shoreline
##fference
##hra
Shade
Clock
Faye
bilingual
Humboldt
Operating
##fter
##was
algae
towed
amphibious
Parma
impacted
smacked
Piedmont
Monsters
##omb
Moor
##lberg
sinister
Postal
178
Drummond
Sign
textbooks
hazardous
Brass
Rosemary
Pick
Sit
Architect
transverse
Centennial
confess
polling
##aia
Julien
##mand
consolidation
Ethel
##ulse
severity
Yorker
choreographer
1840s
##ltry
softer
versa
##geny
##quila
##jō
Caledonia
Friendship
Visa
rogue
##zzle
bait
feather
incidence
Foods
Ships
##uto
##stead
arousal
##rote
Hazel
##bolic
Swing
##ej
##cule
##jana
##metry
##uity
Valuable
##ₙ
Shropshire
##nect
365
Ones
realise
Café
Albuquerque
##grown
##stadt
209
##ᵢ
prefers
withstand
Lillian
MacArthur
Hara
##fulness
domination
##VO
##school
Freddy
ethnicity
##while
adorned
hormone
Calder
Domestic
Freud
Shields
##phus
##rgan
BP
Segunda
Mustang
##GI
Bonn
patiently
remarried
##umbria
Crete
Elephant
Nuremberg
tolerate
Tyson
##evich
Programming
##lander
Bethlehem
segregation
Constituency
quarterly
blushed
photographers
Sheldon
porcelain
Blanche
goddamn
lively
##fused
bumps
##eli
curated
coherent
provoked
##vet
Madeleine
##isco
rainy
Bethel
accusation
ponytail
gag
##lington
quicker
scroll
##vate
Bow
Gender
Ira
crashes
ACT
Maintenance
##aton
##ieu
bitterly
strains
rattled
vectors
##arina
##ishly
173
parole
##nx
amusing
Gonzalez
##erative
Caucus
sensual
Penelope
coefficient
Mateo
##mani
proposition
Duty
lacrosse
proportions
Plato
profiles
Botswana
Brandt
reins
mandolin
encompassing
##gens
Kahn
prop
summon
##MR
##yrian
##zaki
Falling
conditional
thy
##bao
##ych
radioactive
##nics
Newspaper
##people
##nded
Gaming
sunny
##look
Sherwood
crafted
NJ
awoke
187
timeline
giants
possessing
##ycle
Cheryl
ng
Ruiz
polymer
potassium
Ramsay
relocation
##leen
Sociology
##bana
Franciscan
propulsion
denote
##erjee
registers
headline
Tests
emerges
Articles
Mint
livery
breakup
kits
Rap
Browning
Bunny
##mington
##watch
Anastasia
Zachary
arranging
biographical
Erica
Nippon
##membrance
Carmel
##sport
##xes
Paddy
##holes
Issues
Spears
compliment
##stro
##graphs
Castillo
##MU
##space
Corporal
##nent
174
Gentlemen
##ilize
##vage
convinces
Carmine
Crash
##hashi
Files
Doctors
brownish
sweating
goats
##conductor
rendition
##bt
NL
##spiration
generates
##cans
obsession
##noy
Danger
Diaz
heats
Realm
priorities
##phon
1300
initiation
pagan
bursts
archipelago
chloride
Screenplay
Hewitt
Khmer
bang
judgement
negotiating
##ait
Mabel
densely
Boulder
knob
430
Alfredo
##kt
pitches
##ées
##ان
Macdonald
##llum
imply
##mot
Smile
spherical
##tura
Derrick
Kelley
Nico
cortex
launches
differed
parallels
Navigation
##child
##rming
canoe
forestry
reinforce
##mote
confirming
tasting
scaled
##resh
##eting
Understanding
prevailing
Pearce
CW
earnest
Gaius
asserts
denoted
landmarks
Chargers
warns
##flies
Judges
jagged
##dain
tails
Historian
Millie
##sler
221
##uard
absurd
Dion
##ially
makeshift
Specifically
ignorance
Eat
##ieri
comparisons
forensic
186
Giro
skeptical
disciplinary
battleship
##45
Libby
520
Odyssey
ledge
##post
Eternal
Missionary
deficiency
settler
wonders
##gai
raging
##cis
Romney
Ulrich
annexation
boxers
sect
204
ARIA
dei
Hitchcock
te
Varsity
##fic
CC
lending
##nial
##tag
##rdy
##obe
Defensive
##dson
##pore
stellar
Lam
Trials
contention
Sung
##uminous
Poe
superiority
##plicate
325
bitten
conspicuous
##olly
Lila
Pub
Petit
distorted
ISIL
distinctly
##family
Cowboy
mutant
##cats
##week
Changes
Sinatra
epithet
neglect
Innocent
gamma
thrill
reggae
##adia
##ational
##due
landlord
##leaf
visibly
##ì
Darlington
Gomez
##iting
scarf
##lade
Hinduism
Fever
scouts
##roi
convened
##oki
184
Lao
boycott
unemployed
##lore
##ß
##hammer
Curran
disciples
odor
##ygiene
Lighthouse
Played
whales
discretion
Yves
##ceived
pauses
coincide
##nji
dizzy
##scopic
routed
Guardians
Kellan
carnival
nasal
224
##awed
Mitsubishi
640
Cast
silky
Projects
joked
Huddersfield
Rothschild
zu
##olar
Divisions
mildly
##eni
##lge
Appalachian
Sahara
pinch
##roon
wardrobe
##dham
##etal
Bubba
##lini
##rumbling
Communities
Poznań
unification
Beau
Kris
SV
Rowing
Minh
reconciliation
##saki
##sor
taped
##reck
certificates
gubernatorial
rainbow
##uing
litter
##lique
##oted
Butterfly
benefited
Images
induce
Balkans
Velvet
##90
##xon
Bowman
##breaker
penis
##nitz
##oint
##otive
crust
##pps
organizers
Outdoor
nominees
##rika
TX
##ucks
Protestants
##imation
appetite
Baja
awaited
##points
windshield
##igh
##zled
Brody
Buster
stylized
Bryce
##sz
Dollar
vest
mold
ounce
ok
receivers
##uza
Purdue
Harrington
Hodges
captures
##ggio
Reservation
##ssin
##tman
cosmic
straightforward
flipping
remixed
##athed
Gómez
Lim
motorcycles
economies
owning
Dani
##rosis
myths
sire
kindly
1768
Bean
graphs
##mee
##RO
##geon
puppy
Stephenson
notified
##jer
Watching
##rama
Sino
urgency
Islanders
##mash
Plata
fumble
##chev
##stance
##rack
##she
facilitated
swings
akin
enduring
payload
##phine
Deputies
murals
##tooth
610
Jays
eyeing
##quito
transparency
##cote
Timor
negatively
##isan
battled
##fected
thankful
Rage
hospitality
incorrectly
207
entrepreneurs
##cula
##wley
hedge
##cratic
Corpus
Odessa
Whereas
##ln
fetch
happier
Amherst
bullying
graceful
Height
Bartholomew
willingness
qualifier
191
Syed
Wesleyan
Layla
##rrence
Webber
##hum
Rat
##cket
##herence
Monterey
contaminated
Beside
Mustafa
Nana
213
##pruce
Reason
##spense
spike
##gé
AU
disciple
charcoal
##lean
formulated
Diesel
Mariners
accreditation
glossy
1800s
##ih
Mainz
unison
Marianne
shear
overseeing
vernacular
bowled
##lett
unpopular
##ckoned
##monia
Gaston
##TI
##oters
Cups
##bones
##ports
Museo
minors
1773
Dickens
##EL
##NBC
Presents
ambitions
axes
Río
Yukon
bedside
Ribbon
Units
faults
conceal
##lani
prevailed
214
Goodwin
Jaguar
crumpled
Cullen
Wireless
ceded
remotely
Bin
mocking
straps
ceramics
##avi
##uding
##ader
Taft
twenties
##aked
Problem
quasi
Lamar
##ntes
##avan
Barr
##eral
hooks
sa
##ône
194
##ross
Nero
Caine
trance
Homeland
benches
Guthrie
dismiss
##lex
César
foliage
##oot
##alty
Assyrian
Ahead
Murdoch
dictatorship
wraps
##ntal
Corridor
Mackay
respectable
jewels
understands
##pathic
Bryn
##tep
ON
capsule
intrigued
Sleeping
communists
##chayat
##current
##vez
doubling
booklet
##uche
Creed
##NU
spies
##sef
adjusting
197
Imam
heaved
Tanya
canonical
restraint
senators
stainless
##gnate
Matter
cache
restrained
conflicting
stung
##ool
Sustainable
antiquity
193
heavens
inclusive
##ador
fluent
303
911
archaeologist
superseded
##plex
Tammy
inspire
##passing
##lub
Lama
Mixing
##activated
##yote
parlor
tactic
198
Stefano
prostitute
recycling
sorted
banana
Stacey
Musée
aristocratic
cough
##rting
authorised
gangs
runoff
thoughtfully
##nish
Fisheries
Provence
detector
hum
##zhen
pill
##árez
Map
Leaves
Peabody
skater
vent
##color
390
cerebral
hostages
mare
Jurassic
swell
##isans
Knoxville
Naked
Malaya
scowl
Cobra
##anga
Sexual
##dron
##iae
196
##drick
Ravens
Blaine
##throp
Ismail
symmetric
##lossom
Leicestershire
Sylvester
glazed
##tended
Radar
fused
Families
Blacks
Sale
Zion
foothills
microwave
slain
Collingwood
##pants
##dling
killers
routinely
Janice
hearings
##chanted
##ltration
continents
##iving
##yster
##shot
##yna
injected
Guillaume
##ibi
kinda
Confederacy
Barnett
disasters
incapable
##grating
rhythms
betting
draining
##hak
Callie
Glover
##iliated
Sherlock
hearted
punching
Wolverhampton
Leaf
Pi
builders
furnished
knighted
Photo
##zle
Touring
fumbled
pads
##ий
Bartlett
Gunner
eerie
Marius
Bonus
pots
##hino
##pta
Bray
Frey
Ortiz
stalls
belongings
Subway
fascination
metaphor
Bat
Boer
Colchester
sway
##gro
rhetoric
##dheim
Fool
PMID
admire
##hsil
Strand
TNA
##roth
Nottinghamshire
##mat
##yler
Oxfordshire
##nacle
##roner
BS
##nces
stimulus
transports
Sabbath
##postle
Richter
4000
##grim
##shima
##lette
deteriorated
analogous
##ratic
UHF
energies
inspiring
Yiddish
Activities
##quential
##boe
Melville
##ilton
Judd
consonants
labs
smuggling
##fari
avid
##uc
truce
undead
##raith
Mostly
bracelet
Connection
Hussain
awhile
##UC
##vention
liable
genetically
##phic
Important
Wildcats
daddy
transmit
##cas
conserved
Yesterday
##lite
Nicky
Guys
Wilder
Lay
skinned
Communists
Garfield
Nearby
organizer
Loss
crafts
walkway
Chocolate
Sundance
Synod
##enham
modify
swayed
Surface
analysts
brackets
drone
parachute
smelling
Andrés
filthy
frogs
vertically
##OK
localities
marries
AHL
35th
##pian
Palazzo
cube
dismay
relocate
##на
Hear
##digo
##oxide
prefecture
converts
hangar
##oya
##ucking
Spectrum
deepened
spoiled
Keeping
##phobic
Verona
outrage
Improvement
##UI
masterpiece
slung
Calling
chant
Haute
mediated
manipulated
affirmed
##hesis
Hangul
skies
##llan
Worcestershire
##kos
mosaic
##bage
##wned
Putnam
folder
##LM
guts
noteworthy
##rada
AJ
sculpted
##iselle
##rang
recognizable
##pent
dolls
lobbying
impatiently
Se
staple
Serb
tandem
Hiroshima
thieves
##ynx
faculties
Norte
##alle
##trusion
chords
##ylon
Gareth
##lops
##escu
FIA
Levin
auspices
groin
Hui
nun
Listed
Honourable
Larsen
rigorous
##erer
Tonga
##pment
##rave
##track
##aa
##enary
540
clone
sediment
esteem
sighted
cruelty
##boa
inverse
violating
Amtrak
Status
amalgamated
vertex
AR
harmless
Amir
mounts
Coronation
counseling
Audi
CO₂
splits
##eyer
Humans
Salmon
##have
##rado
##čić
216
takeoff
classmates
psychedelic
##gni
Gypsy
231
Anger
GAA
ME
##nist
##tals
Lissa
Odd
baptized
Fiat
fringe
##hren
179
elevators
perspectives
##TF
##ngle
Question
frontal
950
thicker
Molecular
##nological
Sixteen
Baton
Hearing
commemorative
dorm
Architectural
purity
##erse
risky
Georgie
relaxing
##ugs
downed
##rar
Slim
##phy
IUCN
##thorpe
Parkinson
217
Marley
Shipping
sweaty
Jesuits
Sindh
Janata
implying
Armenians
intercept
Ankara
commissioners
ascended
sniper
Grass
Walls
salvage
Dewey
generalized
learnt
PT
##fighter
##tech
DR
##itrus
##zza
mercenaries
slots
##burst
##finger
##nsky
Princes
Rhodesia
##munication
##strom
Fremantle
homework
ins
##Os
##hao
##uffed
Thorpe
Xiao
exquisite
firstly
liberated
technician
Oilers
Phyllis
herb
sharks
MBE
##stock
Product
banjo
##morandum
##than
Visitors
unavailable
unpublished
oxidation
Vogue
##copic
##etics
Yates
##ppard
Leiden
Trading
cottages
Principles
##Millan
##wife
##hiva
Vicar
nouns
strolled
##eorological
##eton
##science
precedent
Armand
Guido
rewards
##ilis
##tise
clipped
chick
##endra
averages
tentatively
1830s
##vos
Certainly
305
Société
Commandant
##crats
##dified
##nka
marsh
angered
ventilation
Hutton
Ritchie
##having
Eclipse
flick
motionless
Amor
Fest
Loire
lays
##icit
##sband
Guggenheim
Luck
disrupted
##ncia
Disco
##vigator
criticisms
grins
##lons
##vial
##ody
salute
Coaches
junk
saxophonist
##eology
Uprising
Diet
##marks
chronicles
robbed
##iet
##ahi
Bohemian
magician
wavelength
Kenyan
augmented
fashionable
##ogies
Luce
F1
Monmouth
##jos
##loop
enjoyment
exemption
Centers
##visor
Soundtrack
blinding
practitioner
solidarity
sacrificed
##oso
##cture
##riated
blended
Abd
Copyright
##nob
34th
##reak
Claudio
hectare
rotor
testify
##ends
##iably
##sume
landowner
##cess
##ckman
Eduard
Silesian
backseat
mutually
##abe
Mallory
bounds
Collective
Poet
Winkler
pertaining
scraped
Phelps
crane
flickering
Proto
bubbles
popularized
removes
##86
Cadillac
Warfare
audible
rites
shivering
##sist
##nst
##biotic
Mon
fascist
Bali
Kathryn
ambiguous
furiously
morale
patio
Sang
inconsistent
topology
Greens
monkeys
Köppen
189
Toy
vow
##ías
bombings
##culus
improvised
lodged
subsidiaries
garment
startling
practised
Hume
Thorn
categorized
Till
Eileen
wedge
##64
Federico
patriotic
unlock
##oshi
badminton
Compared
Vilnius
##KE
Crimean
Kemp
decks
spaced
resolutions
sighs
##mind
Imagine
Cartoon
huddled
policemen
forwards
##rouch
equals
##nter
inspected
Charley
MG
##rte
pamphlet
Arturo
dans
scarcely
##ulton
##rvin
parental
unconstitutional
watts
Susannah
Dare
##sitive
Rowland
Valle
invalid
##ué
Detachment
acronym
Yokohama
verified
##lsson
groove
Liza
clarified
compromised
265
##rgon
##orf
hesitant
Fruit
Application
Mathias
icons
##cell
Qin
interventions
##uron
punt
remnant
##rien
Ames
manifold
spines
floral
##zable
comrades
Fallen
orbits
Annals
hobby
Auditorium
implicated
researching
Pueblo
Ta
terminate
##pella
Rings
approximation
fuzzy
##ús
thriving
##ket
Conor
alarmed
etched
Cary
##rdon
Ally
##rington
Pay
mint
##hasa
##unity
##dman
##itate
Oceania
furrowed
trams
##aq
Wentworth
ventured
choreography
prototypes
Patel
mouthed
trenches
##licing
##yya
Lies
deception
##erve
##vations
Bertrand
earthquakes
##tography
Southwestern
##aja
token
Gupta
##yō
Beckett
initials
ironic
Tsar
subdued
shootout
sobbing
liar
Scandinavia
Souls
ch
therapist
trader
Regulation
Kali
busiest
##pation
32nd
Telephone
Vargas
##moky
##nose
##uge
Favorite
abducted
bonding
219
255
correction
mat
drown
fl
unbeaten
Pocket
Summers
Quite
rods
Percussion
##ndy
buzzing
cadet
Wilkes
attire
directory
utilities
naive
populous
Hendrix
##actor
disadvantage
1400
Landon
Underworld
##ense
Occasionally
mercury
Davey
Morley
spa
wrestled
##vender
eclipse
Sienna
supplemented
thou
Stream
liturgical
##gall
##berries
##piration
1769
Bucks
abandoning
##jutant
##nac
232
venom
##31
Roche
dotted
Currie
Córdoba
Milo
Sharif
divides
justification
prejudice
fortunate
##vide
##ābād
Rowe
inflammatory
##eld
avenue
Sources
##rimal
Messenger
Blanco
advocating
formulation
##pute
emphasizes
nut
Armored
##ented
nutrients
##tment
insistence
Martins
landowners
##RB
comparatively
headlines
snaps
##qing
Celebration
##mad
republican
##NE
Trace
##500
1771
proclamation
NRL
Rubin
Buzz
Weimar
##AG
199
posthumous
##ental
##deacon
Distance
intensely
overheard
Arcade
diagonal
hazard
Giving
weekdays
##ù
Verdi
actresses
##hare
Pulling
##erries
##pores
catering
shortest
##ctors
##cure
##restle
##reta
##runch
##brecht
##uddin
Moments
senate
Feng
Prescott
##thest
218
divisional
Bertie
sparse
surrounds
coupling
gravitational
werewolves
##lax
Rankings
##mated
##tries
Shia
##mart
##23
##vocative
interfaces
morphology
newscast
##bide
inputs
solicitor
Olaf
cabinets
puzzles
##tains
Unified
##firmed
WA
solemn
##opy
Tito
Jaenelle
Neolithic
horseback
##ires
pharmacy
prevalence
##lint
Swami
##bush
##tudes
Philipp
mythical
divers
Scouting
aperture
progressively
##bay
##nio
bounce
Floor
##elf
Lucan
adulthood
helm
Bluff
Passage
Salvation
lemon
napkin
scheduling
##gets
Elements
Mina
Novak
stalled
##llister
Infrastructure
##nky
##tania
##uished
Katz
Norma
sucks
trusting
1765
boilers
Accordingly
##hered
223
Crowley
##fight
##ulo
Henrietta
##hani
pounder
surprises
##chor
##glia
Dukes
##cracy
##zier
##fs
Patriot
silicon
##VP
simulcast
telegraph
Mysore
cardboard
Len
##QL
Auguste
accordion
analytical
specify
ineffective
hunched
abnormal
Transylvania
##dn
##tending
Emilia
glittering
Maddy
##wana
1762
External
Lecture
endorsement
Hernández
Anaheim
Ware
offences
##phorus
Plantation
popping
Bonaparte
disgusting
neared
##notes
Identity
heroin
nicely
##raverse
apron
congestion
##PR
padded
##fts
invaders
##came
freshly
Halle
endowed
fracture
ROM
##max
sediments
diffusion
dryly
##tara
Tam
Draw
Spin
Talon
Anthropology
##lify
nausea
##shirt
insert
Fresno
capitalist
indefinitely
apples
Gift
scooped
60s
Cooperative
mistakenly
##lover
murmur
##iger
Equipment
abusive
orphanage
##9th
##lterweight
##unda
Baird
ant
saloon
33rd
Chesapeake
##chair
##sound
##tend
chaotic
pornography
brace
##aret
heiress
SSR
resentment
Arbor
headmaster
##uren
unlimited
##with
##jn
Bram
Ely
Pokémon
pivotal
##guous
Database
Marta
Shine
stumbling
##ovsky
##skin
Henley
Polk
functioned
##layer
##pas
##udd
##MX
blackness
cadets
feral
Damian
##actions
2D
##yla
Apocalypse
##aic
inactivated
##china
##kovic
##bres
destroys
nap
Macy
sums
Madhya
Wisdom
rejects
##amel
60th
Cho
bandwidth
##sons
##obbing
##orama
Mutual
shafts
##estone
##rsen
accord
replaces
waterfront
##gonal
##rida
convictions
##ays
calmed
suppliers
Cummings
GMA
fearful
Scientist
Sinai
examines
experimented
Netflix
Enforcement
Scarlett
##lasia
Healthcare
##onte
Dude
inverted
##36
##regation
##lidae
Munro
##angay
Airbus
overlapping
Drivers
lawsuits
bodily
##udder
Wanda
Effects
Fathers
##finery
##islav
Ridley
observatory
pod
##utrition
Electricity
landslide
##mable
##zoic
##imator
##uration
Estates
sleepy
Nickelodeon
steaming
irony
schedules
snack
spikes
Hmm
##nesia
##bella
##hibit
Greenville
plucked
Harald
##ono
Gamma
infringement
roaring
deposition
##pol
##orum
660
seminal
passports
engagements
Akbar
rotated
##bina
##gart
Hartley
##lown
##truct
uttered
traumatic
Dex
##ôme
Holloway
MV
apartheid
##nee
Counter
Colton
OR
245
Spaniards
Regency
Schedule
scratching
squads
verify
##alk
keyboardist
rotten
Forestry
aids
commemorating
##yed
##érie
Sting
##elly
Dai
##fers
##berley
##ducted
Melvin
cannabis
glider
##enbach
##rban
Costello
Skating
cartoonist
AN
audit
##pectator
distributing
226
312
interpreter
header
Alternatively
##ases
smug
##kumar
cabins
remastered
Connolly
Kelsey
LED
tentative
Check
Sichuan
shaved
##42
Gerhard
Harvest
inward
##rque
Hopefully
hem
##34
Typical
binds
wrath
Woodstock
forcibly
Fergus
##charged
##tured
prepares
amenities
penetration
##ghan
coarse
##oned
enthusiasts
##av
##twined
fielded
##cky
Kiel
##obia
470
beers
tremble
youths
attendees
##cademies
##sex
Macon
communism
dir
##abi
Lennox
Wen
differentiate
jewel
##SO
activate
assert
laden
unto
Gillespie
Guillermo
accumulation
##GM
NGO
Rosenberg
calculating
drastically
##omorphic
peeled
Liège
insurgents
outdoors
##enia
Aspen
Sep
awakened
##eye
Consul
Maiden
insanity
##brian
furnace
Colours
distributions
longitudinal
syllables
##scent
Martian
accountant
Atkins
husbands
sewage
zur
collaborate
highlighting
##rites
##PI
colonization
nearer
##XT
dunes
positioning
Ku
multitude
luxurious
Volvo
linguistics
plotting
squared
##inder
outstretched
##uds
Fuji
ji
##feit
##ahu
##loat
##gado
##luster
##oku
América
##iza
Residents
vine
Pieces
DD
Vampires
##ová
smoked
harshly
spreads
##turn
##zhi
betray
electors
##settled
Considering
exploits
stamped
Dusty
enraged
Nairobi
##38
intervened
##luck
orchestras
##lda
Hereford
Jarvis
calf
##itzer
##CH
salesman
Lovers
cigar
Angelica
doomed
heroine
##tible
Sanford
offenders
##ulously
articulated
##oam
Emanuel
Gardiner
Edna
Shu
gigantic
##stable
Tallinn
coasts
Maker
ale
stalking
##oga
##smus
lucrative
southbound
##changing
Reg
##lants
Schleswig
discount
grouping
physiological
##OH
##sun
Galen
assurance
reconcile
rib
scarlet
Thatcher
anarchist
##oom
Turnpike
##ceding
cocktail
Sweeney
Allegheny
concessions
oppression
reassuring
##poli
##ticus
##TR
##VI
##uca
##zione
directional
strikeouts
Beneath
Couldn
Kabul
##national
hydroelectric
##jit
Desire
##riot
enhancing
northbound
##PO
Ok
Routledge
volatile
Bernardo
Python
333
ample
chestnut
automobiles
##innamon
##care
##hering
BWF
salaries
Turbo
acquisitions
##stituting
strengths
pilgrims
Ponce
Pig
Actors
Beard
sanitation
##RD
##mett
Telecommunications
worms
##idas
Juno
Larson
Ventura
Northeastern
weighs
Houghton
collaborating
lottery
##rano
Wonderland
gigs
##lmer
##zano
##edd
##nife
mixtape
predominant
tripped
##ruly
Alexei
investing
Belgarath
Brasil
hiss
##crat
##xham
Côte
560
kilometer
##cological
analyzing
##As
engined
listener
##cakes
negotiation
##hisky
Santana
##lemma
IAAF
Seneca
skeletal
Covenant
Steiner
##lev
##uen
Neptune
retention
##upon
Closing
Czechoslovak
chalk
Navarre
NZ
##IG
##hop
##oly
##quatorial
##sad
Brewery
Conflict
Them
renew
turrets
disagree
Petra
Slave
##reole
adjustment
##dela
##regard
##sner
framing
stature
##rca
##sies
##46
##mata
Logic
inadvertently
naturalist
spheres
towering
heightened
Dodd
rink
##fle
Keyboards
bulb
diver
ul
##tsk
Exodus
Deacon
España
Canadiens
oblique
thud
reigned
rug
Whitman
Dash
##iens
Haifa
pets
##arland
manually
dart
##bial
Sven
textiles
subgroup
Napier
graffiti
revolver
humming
Babu
protector
typed
Provinces
Sparta
Wills
subjective
##rella
temptation
##liest
FL
Sadie
manifest
Guangdong
Transfer
entertain
eve
recipes
##33
Benedictine
retailer
##dence
establishes
##cluded
##rked
Ursula
##ltz
##lars
##rena
qualifiers
##curement
colt
depictions
##oit
Spiritual
differentiation
staffed
transitional
##lew
1761
fatalities
##oan
Bayern
Northamptonshire
Weeks
##CU
Fife
capacities
hoarse
##latt
##ة
evidenced
##HD
##ographer
assessing
evolve
hints
42nd
streaked
##lve
Yahoo
##estive
##rned
##zas
baggage
Elected
secrecy
##champ
Character
Pen
Decca
cape
Bernardino
vapor
Dolly
counselor
##isers
Benin
##khar
##CR
notch
##thus
##racy
bounty
lend
grassland
##chtenstein
##dating
pseudo
golfer
simplest
##ceive
Lucivar
Triumph
dinosaur
dinosaurs
##šić
Seahawks
##nco
resorts
reelected
1766
reproduce
universally
##OA
ER
tendencies
Consolidated
Massey
Tasmanian
reckless
##icz
##ricks
1755
questionable
Audience
##lates
preseason
Quran
trivial
Haitian
Freeway
dialed
Appointed
Heard
ecosystems
##bula
hormones
Carbon
Rd
##arney
##working
Christoph
presiding
pu
##athy
Morrow
Dar
ensures
posing
remedy
EA
disclosed
##hui
##rten
rumours
surveying
##ficiency
Aziz
Jewel
Plays
##smatic
Bernhard
Christi
##eanut
##friend
jailed
##dr
govern
neighbour
butler
Acheron
murdering
oils
mac
Editorial
detectives
bolts
##ulon
Guitars
malaria
36th
Pembroke
Opened
##hium
harmonic
serum
##sio
Franks
fingernails
##gli
culturally
evolving
scalp
VP
deploy
uploaded
mater
##evo
Jammu
Spa
##icker
flirting
##cursions
Heidi
Majority
sprawled
##alytic
Zheng
bunker
##lena
ST
##tile
Jiang
ceilings
##ently
##ols
Recovery
dire
##good
Manson
Honestly
Montréal
1764
227
quota
Lakshmi
incentive
Accounting
##cilla
Eureka
Reaper
buzzed
##uh
courtroom
dub
##mberg
KC
Gong
Theodor
Académie
NPR
criticizing
protesting
##pired
##yric
abuses
fisheries
##minated
1767
yd
Gemini
Subcommittee
##fuse
Duff
Wasn
Wight
cleaner
##tite
planetary
Survivor
Zionist
mounds
##rary
landfall
disruption
yielding
##yana
bids
unidentified
Garry
Ellison
Elmer
Fishing
Hayward
demos
modelling
##anche
##stick
caressed
entertained
##hesion
piers
Crimea
##mass
WHO
boulder
trunks
1640
Biennale
Palestinians
Pursuit
##udes
Dora
contender
##dridge
Nanjing
##ezer
##former
##ibel
Whole
proliferation
##tide
##weiler
fuels
predictions
##ente
##onium
Filming
absorbing
Ramón
strangled
conveyed
inhabit
prostitutes
recession
bonded
clinched
##eak
##iji
##edar
Pleasure
Rite
Christy
Therapy
sarcasm
##collegiate
hilt
probation
Sarawak
coefficients
underworld
biodiversity
SBS
groom
brewing
dungeon
##claiming
Hari
turnover
##ntina
##omer
##opped
orthodox
styling
##tars
##ulata
priced
Marjorie
##eley
##abar
Yong
##tically
Crambidae
Hernandez
##ego
##rricular
##ark
##lamour
##llin
##augh
##tens
Advancement
Loyola
##4th
##hh
goin
marshes
Sardinia
##ša
Ljubljana
Singing
suspiciously
##hesive
Félix
Regarding
flap
stimulation
##raught
Apr
Yin
gaping
tighten
skier
##itas
##lad
##rani
264
Ashes
Olson
Problems
Tabitha
##rading
balancing
sunrise
##ease
##iture
##ritic
Fringe
##iciency
Inspired
Linnaeus
PBA
disapproval
##kles
##rka
##tails
##urger
Disaster
Laboratories
apps
paradise
Aero
Came
sneaking
Gee
Beacon
ODI
commodity
Ellington
graphical
Gretchen
spire
##skaya
##trine
RTÉ
efficacy
plc
tribunal
##ytic
downhill
flu
medications
##kaya
widen
Sunrise
##nous
distinguishing
pawn
##BO
##irn
##ssing
##ν
Easton
##vila
Rhineland
##aque
defect
##saurus
Goose
Ju
##classified
Middlesbrough
shaping
preached
1759
##erland
Ein
Hailey
musicals
##altered
Galileo
Hilda
Fighters
Lac
##ometric
295
Leafs
Milano
##lta
##VD
##ivist
penetrated
Mask
Orchard
plaintiff
##icorn
Yvonne
##fred
outfielder
peek
Collier
Caracas
repealed
Bois
dell
restrict
Dolores
Hadley
peacefully
##LL
condom
Granny
Orders
sabotage
##toon
##rings
compass
marshal
gears
brigadier
dye
Yunnan
communicating
donate
emerald
vitamin
administer
Fulham
##classical
##llas
Buckinghamshire
Held
layered
disclosure
Akira
programmer
shrimp
Crusade
##ximal
Luzon
bakery
##cute
Garth
Citadel
uniquely
Curling
info
mum
Para
##ști
sleek
##ione
hey
Lantern
mesh
##lacing
##lizzard
##gade
prosecuted
Alba
Gilles
greedy
twists
##ogged
Viper
##kata
Appearances
Skyla
hymns
##pelled
curving
predictable
Grave
Watford
##dford
##liptic
##vary
Westwood
fluids
Models
statutes
##ynamite
1740
##culate
Framework
Johanna
##gression
Vuelta
imp
##otion
##raga
##thouse
Ciudad
festivities
##love
Beyoncé
italics
##vance
DB
##haman
outs
Singers
##ueva
##urning
##51
##ntiary
##mobile
285
Mimi
emeritus
nesting
Keeper
Ways
##onal
##oux
Edmond
MMA
##bark
##oop
Hampson
##ñez
##rets
Gladstone
wreckage
Pont
Playboy
reluctance
##ná
apprenticeship
preferring
Value
originate
##wei
##olio
Alexia
##rog
Parachute
jammed
stud
Eton
vols
##ganized
1745
straining
creep
indicators
##mán
humiliation
hinted
alma
tanker
##egation
Haynes
Penang
amazement
branched
rumble
##ddington
archaeologists
paranoid
expenditure
Absolutely
Musicians
banished
##fining
baptism
Joker
Persons
hemisphere
##tieth
##ück
flock
##xing
lbs
Kung
crab
##dak
##tinent
Regulations
barrage
parcel
##ós
Tanaka
##rsa
Natalia
Voyage
flaws
stepfather
##aven
##eological
Botanical
Minsk
##ckers
Cinderella
Feast
Loving
Previous
Shark
##took
barrister
collaborators
##nnes
Croydon
Graeme
Juniors
##7th
##formation
##ulos
##ák
£2
##hwa
##rove
##ș
Whig
demeanor
Otago
##TH
##ooster
Faber
instructors
##ahl
##bha
emptied
##schen
saga
##lora
exploding
##rges
Crusaders
##caster
##uations
streaks
CBN
bows
insights
ka
1650
diversion
LSU
Wingspan
##liva
Response
sanity
Producers
imitation
##fine
Lange
Spokane
splash
weed
Siberian
magnet
##rocodile
capitals
##rgus
swelled
Rani
Bells
Silesia
arithmetic
rumor
##hampton
favors
Weird
marketplace
##orm
tsunami
unpredictable
##citation
##ferno
Tradition
postwar
stench
succeeds
##roup
Anya
Users
oversized
totaling
pouch
##nat
Tripoli
leverage
satin
##cline
Bathurst
Lund
Niall
thereof
##quid
Bangor
barge
Animated
##53
##alan
Ballard
utilizes
Done
ballistic
NDP
gatherings
##elin
##vening
Rockets
Sabrina
Tamara
Tribal
WTA
##citing
blinded
flux
Khalid
Una
prescription
##jee
Parents
##otics
##food
Silicon
cured
electro
perpendicular
intimacy
##rified
Lots
##ceiving
##powder
incentives
McKenna
##arma
##ounced
##rinkled
Alzheimer
##tarian
262
Seas
##cam
Novi
##hout
##morphic
##hazar
##hul
##nington
Huron
Bahadur
Pirate
pursed
Griffiths
indicted
swap
refrain
##mulating
Lal
stomped
##Pad
##mamoto
Reef
disposed
plastered
weeping
##rato
Minas
hourly
tumors
##ruising
Lyle
##yper
##sol
Odisha
credibility
##Dowell
Braun
Graphic
lurched
muster
##nex
##ührer
##connected
##iek
##ruba
Carthage
Peck
maple
bursting
##lava
Enrico
rite
##jak
Moment
##skar
Styx
poking
Spartan
##urney
Hepburn
Mart
Titanic
newsletter
waits
Mecklenburg
agitated
eats
##dious
Chow
matrices
Maud
##sexual
sermon
234
##sible
##lung
Qi
cemeteries
mined
sprinter
##ckett
coward
##gable
##hell
##thin
##FB
Contact
##hay
rainforest
238
Hemisphere
boasts
##nders
##verance
##kat
Convent
Dunedin
Lecturer
lyricist
##bject
Iberian
comune
##pphire
chunk
##boo
thrusting
fore
informing
pistols
echoes
Tier
battleships
substitution
##belt
moniker
##charya
##lland
Thoroughbred
38th
##01
##tah
parting
tongues
Cale
##seau
Unionist
modular
celebrates
preview
steamed
Bismarck
302
737
vamp
##finity
##nbridge
weaknesses
husky
##berman
absently
##icide
Craven
tailored
Tokugawa
VIP
syntax
Kazan
captives
doses
filtered
overview
Cleopatra
Conversely
stallion
Burger
Suez
Raoul
th
##reaves
Dickson
Nell
Rate
anal
colder
##sław
Arm
Semitic
##green
reflective
1100
episcopal
journeys
##ours
##pository
##dering
residue
Gunn
##27
##ntial
##crates
##zig
Astros
Renee
Emerald
##vili
connectivity
undrafted
Sampson
treasures
##kura
##theon
##vern
Destroyer
##iable
##ener
Frederic
briefcase
confinement
Bree
##WD
Athena
233
Padres
Thom
speeding
##hali
Dental
ducks
Putin
##rcle
##lou
Asylum
##usk
dusk
pasture
Institutes
ONE
jack
##named
diplomacy
Intercontinental
Leagues
Towns
comedic
premature
##edic
##mona
##ories
trimmed
Charge
Cream
guarantees
Dmitry
splashed
Philosophical
tramway
##cape
Maynard
predatory
redundant
##gratory
##wry
sobs
Burgundy
edible
outfits
Handel
dazed
dangerously
idle
Operational
organizes
##sional
blackish
broker
weddings
##halt
Becca
McGee
##gman
protagonists
##pelling
Keynes
aux
stumble
##ordination
Nokia
reel
sexes
##woods
##pheric
##quished
##voc
##oir
##pathian
##ptus
##sma
##tating
##ê
fulfilling
sheath
##ayne
Mei
Ordinary
Collin
Sharpe
grasses
interdisciplinary
##OX
Background
##ignment
Assault
transforms
Hamas
Serge
ratios
##sik
swaying
##rcia
Rosen
##gant
##versible
cinematographer
curly
penny
Kamal
Mellon
Sailor
Spence
phased
Brewers
amassed
Societies
##ropriations
##buted
mythological
##SN
##byss
##ired
Sovereign
preface
Parry
##ife
altitudes
crossings
##28
Crewe
southernmost
taut
McKinley
##owa
##tore
254
##ckney
compiling
Shelton
##hiko
228
Poll
Shepard
Labs
Pace
Carlson
grasping
##ов
Delaney
Winning
robotic
intentional
shattering
##boarding
##git
##grade
Editions
Reserves
ignorant
proposing
##hanna
cutter
Mongols
NW
##eux
Codex
Cristina
Daughters
Rees
forecast
##hita
NGOs
Stations
Beaux
Erwin
##jected
##EX
##trom
Schumacher
##hrill
##rophe
Maharaja
Oricon
##sul
##dynamic
##fighting
Ce
Ingrid
rumbled
Prospect
stairwell
Barnard
applause
complementary
##uba
grunt
##mented
Bloc
Carleton
loft
noisy
##hey
490
contrasted
##inator
##rief
##centric
##fica
Cantonese
Blanc
Lausanne
License
artifact
##ddin
rot
Amongst
Prakash
RF
##topia
milestone
##vard
Winters
Mead
churchyard
Lulu
estuary
##ind
Cha
Infinity
Meadow
subsidies
##valent
CONCACAF
Ching
medicinal
navigate
Carver
Twice
abdominal
regulating
RB
toilets
Brewer
weakening
ambushed
##aut
##vignon
Lansing
unacceptable
reliance
stabbing
##mpo
##naire
Interview
##ested
##imed
bearings
##lts
Rashid
##iation
authenticity
vigorous
##frey
##uel
biologist
NFC
##rmaid
##wash
Makes
##aunt
##steries
withdrawing
##qa
Buccaneers
bleed
inclination
stain
##ilo
##ppel
Torre
privileged
cereal
trailers
alumnus
neon
Cochrane
Mariana
caress
##47
##ients
experimentation
Window
convict
signaled
##YP
rower
Pharmacy
interacting
241
Strings
dominating
kinase
Dinamo
Wire
pains
sensations
##suse
Twenty20
##39
spotlight
##hend
elemental
##pura
Jameson
Swindon
honoring
pained
##ediatric
##lux
Psychological
assemblies
ingredient
Martial
Penguins
beverage
Monitor
mysteries
##ION
emigration
mused
##sique
crore
AMC
Funding
Chinatown
Establishment
Finalist
enjoyable
1756
##mada
##rams
NO
newborn
CS
comprehend
Invisible
Siemens
##acon
246
contraction
##volving
##moration
##rok
montane
##ntation
Galloway
##llow
Verity
directorial
pearl
Leaning
##rase
Fernandez
swallowing
Automatic
Madness
haunting
paddle
##UE
##rrows
##vies
##zuki
##bolt
##iber
Fender
emails
paste
##lancing
hind
homestead
hopeless
##dles
Rockies
garlic
fatty
shrieked
##ismic
Gillian
Inquiry
Schultz
XML
##cius
##uld
Domesday
grenades
northernmost
##igi
Tbilisi
optimistic
##poon
Refuge
stacks
Bose
smash
surreal
Nah
Straits
Conquest
##roo
##weet
##kell
Gladys
CH
##lim
##vitation
Doctorate
NRHP
knocks
Bey
Romano
##pile
242
Diamonds
strides
eclectic
Betsy
clade
##hady
##leashed
dissolve
moss
Suburban
silvery
##bria
tally
turtles
##uctive
finely
industrialist
##nary
Ernesto
oz
pact
loneliness
##hov
Tomb
multinational
risked
Layne
USL
ne
##quiries
Ad
Message
Kamen
Kristen
reefs
implements
##itative
educators
garments
gunshot
##essed
##rve
Montevideo
vigorously
Stamford
assemble
packaged
##same
état
Viva
paragraph
##eter
##wire
Stick
Navajo
MCA
##pressing
ensembles
ABA
##zor
##llus
Partner
raked
##BI
Iona
thump
Celeste
Kiran
##iscovered
##rith
inflammation
##arel
Features
loosened
##yclic
Deluxe
Speak
economical
Frankenstein
Picasso
showcased
##zad
##eira
##planes
##linear
##overs
monsoon
prosecutors
slack
Horses
##urers
Angry
coughing
##truder
Questions
##tō
##zak
challenger
clocks
##ieving
Newmarket
##acle
cursing
stimuli
##mming
##qualified
slapping
##vasive
narration
##kini
Advertising
CSI
alliances
mixes
##yes
covert
amalgamation
reproduced
##ardt
##gis
1648
id
Annette
Boots
Champagne
Brest
Daryl
##emon
##jou
##llers
Mean
adaptive
technicians
##pair
##usal
Yoga
fronts
leaping
Jul
harvesting
keel
##44
petitioned
##lved
yells
Endowment
proponent
##spur
##tised
##zal
Homes
Includes
##ifer
##oodoo
##rvette
awarding
mirrored
ransom
Flute
outlook
##ganj
DVDs
Sufi
frontman
Goddard
barren
##astic
Suicide
hillside
Harlow
Lau
notions
Amnesty
Homestead
##irt
GE
hooded
umpire
mustered
Catch
Masonic
##erd
Dynamics
Equity
Oro
Charts
Mussolini
populace
muted
accompaniment
##lour
##ndes
ignited
##iferous
##laced
##atch
anguish
registry
##tub
##hards
##neer
251
Hooker
uncomfortably
##6th
##ivers
Catalina
MiG
giggling
1754
Dietrich
Kaladin
pricing
##quence
Sabah
##lving
##nical
Gettysburg
Vita
Telecom
Worst
Palais
Pentagon
##brand
##chichte
Graf
unnatural
1715
bio
##26
Radcliffe
##utt
chatting
spices
##aus
untouched
##eper
Doll
turkey
Syndicate
##rlene
##JP
##roots
Como
clashed
modernization
1757
fantasies
##iating
dissipated
Sicilian
inspect
sensible
reputed
##final
Milford
poised
RC
metabolic
Tobacco
Mecca
optimization
##heat
lobe
rabbits
NAS
geologist
##liner
Kilda
carpenter
nationalists
##brae
summarized
##venge
Designer
misleading
beamed
##meyer
Matrix
excuses
##aines
##biology
401
Moose
drafting
Sai
##ggle
Comprehensive
dripped
skate
##WI
##enan
##ruk
narrower
outgoing
##enter
##nounce
overseen
##structure
travellers
banging
scarred
##thing
##arra
Ebert
Sometime
##nated
BAFTA
Hurricanes
configurations
##MLL
immortality
##heus
gothic
##mpest
clergyman
viewpoint
Maxim
Instituto
emitted
quantitative
1689
Consortium
##rsk
Meat
Tao
swimmers
Shaking
Terence
mainline
##linity
Quantum
##rogate
Nair
banquet
39th
reprised
lagoon
subdivisions
synonymous
incurred
password
sprung
##vere
Credits
Petersen
Faces
##vu
statesman
Zombie
gesturing
##going
Sergey
dormant
possessive
totals
southward
Ángel
##odies
HM
Mariano
Ramirez
Wicked
impressions
##Net
##cap
##ème
Transformers
Poker
RIAA
Redesignated
##chuk
Harcourt
Peña
spacious
tinged
alternatively
narrowing
Brigham
authorization
Membership
Zeppelin
##amed
Handball
steer
##orium
##rnal
##rops
Committees
endings
##MM
##yung
ejected
grams
##relli
Birch
Hilary
Stadion
orphan
clawed
##kner
Motown
Wilkins
ballads
outspoken
##ancipation
##bankment
##cheng
Advances
harvested
novelty
ineligible
oversees
##´s
obeyed
inevitably
Kingdoms
burying
Fabian
relevance
Tatiana
##MCA
sarcastic
##onda
Akron
229
sandwiches
Adobe
Maddox
##azar
Hunting
##onized
Smiling
##tology
Juventus
Leroy
Poets
attach
lo
##rly
##film
Structure
##igate
olds
projections
SMS
outnumbered
##tase
judiciary
paramilitary
playfully
##rsing
##tras
Chico
Vin
informally
abandonment
##russ
Baroness
injuring
octagonal
deciduous
##nea
##olm
Hz
Norwood
poses
Marissa
alerted
willed
##KS
Dino
##ddler
##vani
Barbie
Thankfully
625
bicycles
shimmering
##tinuum
##wolf
Chesterfield
##idy
##urgency
Knowles
sweetly
Ventures
##ponents
##valence
Darryl
Powerplant
RAAF
##pec
Kingsley
Parramatta
penetrating
spectacle
##inia
Marlborough
residual
compatibility
hike
Underwood
depleted
ministries
##odus
##ropriation
rotting
Faso
##inn
Happiness
Lille
Suns
cookie
rift
warmly
##lvin
Bugs
Gotham
Gothenburg
Properties
##seller
##ubi
Created
MAC
Noelle
Requiem
Ulysses
##ails
franchises
##icious
##rwick
celestial
kinetic
720
STS
transmissions
amplitude
forums
freeing
reptiles
tumbling
##continent
##rising
##tropy
physiology
##uster
Loves
bodied
neutrality
Neumann
assessments
Vicky
##hom
hampered
##uku
Custom
timed
##eville
##xious
elastic
##section
rig
stilled
shipment
243
artworks
boulders
Bournemouth
##hly
##LF
##linary
rumored
##bino
##drum
Chun
Freiburg
##dges
Equality
252
Guadalajara
##sors
##taire
Roach
cramped
##ultural
Logistics
Punch
fines
Lai
caravan
##55
lame
Collector
pausing
315
migrant
hawk
signalling
##erham
##oughs
Demons
surfing
Rana
insisting
Wien
adolescent
##jong
##rera
##umba
Regis
brushes
##iman
residues
storytelling
Consider
contrasting
regeneration
##elling
##hlete
afforded
reactors
costing
##biotics
##gat
##евич
chanting
secondly
confesses
##ikos
##uang
##ronological
##−
Giacomo
##eca
vaudeville
weeds
rejecting
revoked
affluent
fullback
progresses
geologic
proprietor
replication
gliding
recounted
##bah
##igma
Flow
ii
newcomer
##lasp
##miya
Candace
fractured
interiors
confidential
Inverness
footing
##robe
Coordinator
Westphalia
jumper
##chism
dormitory
##gno
281
acknowledging
leveled
##éra
Algiers
migrate
Frog
Rare
##iovascular
##urous
DSO
nomadic
##iera
woken
lifeless
##graphical
##ifications
Dot
Sachs
crow
nmi
Tacoma
Weight
mushroom
RS
conditioned
##zine
Tunisian
altering
##mizing
Handicap
Patti
Monsieur
clicking
gorge
interrupting
##powerment
drawers
Serra
##icides
Specialist
##itte
connector
worshipped
##ask
consoles
tags
##iler
glued
##zac
fences
Bratislava
honeymoon
313
A2
disposition
Gentleman
Gilmore
glaciers
##scribed
Calhoun
convergence
Aleppo
shortages
##43
##orax
##worm
##codes
##rmal
neutron
##ossa
Bloomberg
Salford
periodicals
##ryan
Slayer
##ynasties
credentials
##tista
surveyor
File
stinging
unnoticed
Medici
ecstasy
espionage
Jett
Leary
circulating
bargaining
concerto
serviced
37th
HK
##fueling
Delilah
Marcia
graded
##join
Kaplan
feasible
##nale
##yt
Burnley
dreadful
ministerial
Brewster
Judah
##ngled
##rrey
recycled
Iroquois
backstage
parchment
##numbered
Kern
Motorsports
Organizations
##mini
Seems
Warrington
Dunbar
Ezio
##eor
paralyzed
Ara
yeast
##olis
cheated
reappeared
banged
##ymph
##dick
Lyndon
glide
Mat
##natch
Hotels
Household
parasite
irrelevant
youthful
##smic
##tero
##anti
2d
Ignacio
squash
##nets
shale
##اد
Abrams
##oese
assaults
##dier
##otte
Swamp
287
Spurs
##economic
Fargo
auditioned
##mé
Haas
une
abbreviation
Turkic
##tisfaction
favorites
specials
##lial
Enlightenment
Burkina
##vir
Comparative
Lacrosse
elves
##lerical
##pear
Borders
controllers
##villa
excelled
##acher
##varo
camouflage
perpetual
##ffles
devoid
schooner
##bered
##oris
Gibbons
Lia
discouraged
sue
##gnition
Excellent
Layton
noir
smack
##ivable
##evity
##lone
Myra
weaken
weaponry
##azza
Shake
backbone
Certified
clown
occupational
caller
enslaved
soaking
Wexford
perceive
shortlisted
##pid
feminism
Bari
Indie
##avelin
##ldo
Hellenic
Hundreds
Savings
comedies
Honors
Mohawk
Told
coded
Incorporated
hideous
trusts
hose
Calais
Forster
Gabon
Internationale
AK
Colour
##UM
##heist
McGregor
localized
##tronomy
Darrell
##iara
squirrel
freaked
##eking
##manned
##ungen
radiated
##dua
commence
Donaldson
##iddle
MR
SAS
Tavern
Teenage
admissions
Instruments
##ilizer
Konrad
contemplated
##ductor
Jing
Reacher
recalling
Dhabi
emphasizing
illumination
##tony
legitimacy
Goethe
Ritter
McDonnell
Polar
Seconds
aspiring
derby
tunic
##rmed
outlines
Changing
distortion
##cter
Mechanics
##urly
##vana
Egg
Wolverine
Stupid
centralized
knit
##Ms
Saratoga
Ogden
storylines
##vres
lavish
beverages
##grarian
Kyrgyzstan
forcefully
superb
Elm
Thessaloniki
follower
Plants
slang
trajectory
Nowadays
Bengals
Ingram
perch
coloring
carvings
doubtful
##aph
##gratulations
##41
Curse
253
nightstand
Campo
Meiji
decomposition
##giri
McCormick
Yours
##amon
##bang
Texans
injunction
organise
periodical
##peculative
oceans
##aley
Success
Lehigh
##guin
1730
Davy
allowance
obituary
##tov
treasury
##wayne
euros
readiness
systematically
##stered
##igor
##xen
##cliff
##lya
Send
##umatic
Celtics
Judiciary
425
propagation
rebellious
##ims
##lut
Dal
##ayman
##cloth
Boise
pairing
Waltz
torment
Hatch
aspirations
diaspora
##hame
Rank
237
Including
Muir
chained
toxicity
Université
##aroo
Mathews
meadows
##bio
Editing
Khorasan
##them
##ahn
##bari
##umes
evacuate
##sium
gram
kidnap
pinning
##diation
##orms
beacon
organising
McGrath
##ogist
Qur
Tango
##ceptor
##rud
##cend
##cie
##jas
##sided
Tuscany
Venture
creations
exhibiting
##rcerer
##tten
Butcher
Divinity
Pet
Whitehead
falsely
perished
handy
Moines
cyclists
synthesizers
Mortal
notoriety
##ronic
Dialogue
expressive
uk
Nightingale
grimly
vineyards
Driving
relentless
compiler
##district
##tuated
Hades
medicines
objection
Answer
Soap
Chattanooga
##gogue
Haryana
Parties
Turtle
##ferred
explorers
stakeholders
##aar
##rbonne
tempered
conjecture
##tee
##hur
Reeve
bumper
stew
##church
##generate
##ilitating
##chanized
##elier
##enne
translucent
##lows
Publisher
evangelical
inherit
##rted
247
SmackDown
bitterness
lesions
##worked
mosques
wed
##lashes
Ng
Rebels
booking
##nail
Incident
Sailing
yo
confirms
Chaplin
baths
##kled
modernist
pulsing
Cicero
slaughtered
boasted
##losure
zipper
##hales
aristocracy
halftime
jolt
unlawful
Marching
sustaining
Yerevan
bracket
ram
Markus
##zef
butcher
massage
##quisite
Leisure
Pizza
collapsing
##lante
commentaries
scripted
##disciplinary
##sused
eroded
alleging
vase
Chichester
Peacock
commencement
dice
hotter
poisonous
executions
##occo
frost
fielding
vendor
Counts
Troops
maize
Divisional
analogue
shadowy
Nuevo
Ville
radiating
worthless
Adriatic
Buy
blaze
brutally
horizontally
longed
##matical
federally
Rolf
Root
exclude
rag
agitation
Lounge
astonished
##wirl
Impossible
transformations
##IVE
##ceded
##slav
downloaded
fucked
Egyptians
Welles
##ffington
U2
befriended
radios
##jid
archaic
compares
##ccelerator
##imated
##tosis
Hung
Scientists
Thousands
geographically
##LR
Macintosh
fluorescent
##ipur
Wehrmacht
##BR
##firmary
Chao
##ague
Boyer
##grounds
##hism
##mento
##taining
infancy
##cton
510
Boca
##loy
1644
ben
dong
stresses
Sweat
expressway
graders
ochreous
nets
Lawn
thirst
Uruguayan
satisfactory
##tracts
baroque
rusty
##ław
Shen
Gdańsk
chickens
##graving
Hodge
Papal
SAT
bearer
##ogo
##rger
merits
Calendar
Highest
Skills
##ortex
Roberta
paradigm
recounts
frigates
swamps
unitary
##oker
balloons
Hawthorne
Muse
spurred
advisors
reclaimed
stimulate
fibre
pat
repeal
##dgson
##iar
##rana
anthropologist
descends
flinch
reared
##chang
##eric
##lithic
commissioning
##cumenical
##lume
##rchen
Wolff
##tsky
Eurasian
Nepali
Nightmare
ZIP
playback
##latz
##vington
Warm
##75
Martina
Rollins
Saetan
Variations
sorting
##م
530
Joaquin
Ptolemy
thinner
##iator
##pticism
Cebu
Highlanders
Linden
Vanguard
##SV
##mor
##ulge
ISSN
cartridges
repression
Étienne
311
Lauderdale
commodities
null
##rb
1720
gearbox
##reator
Ang
Forgotten
dubious
##rls
##dicative
##phate
Groove
Herrera
##çais
Collections
Maximus
##published
Fell
Qualification
filtering
##tized
Roe
hazards
##37
##lative
##tröm
Guadalupe
Tajikistan
Preliminary
fronted
glands
##paper
##iche
##iding
Cairns
rallies
Location
seduce
##mple
BYU
##itic
##FT
Carmichael
Prentice
songwriters
forefront
Physicians
##rille
##zee
Preparatory
##cherous
UV
##dized
Navarro
misses
##nney
Inland
resisting
##sect
Hurt
##lino
galaxies
##raze
Institutions
devote
##lamp
##ciating
baron
##bracing
Hess
operatic
##CL
##ος
Chevalier
Guiana
##lattered
Fed
##cuted
##smo
Skull
denies
236
Waller
##mah
Sakura
mole
nominate
sermons
##bering
widowed
##röm
Cavendish
##struction
Nehru
Revelation
doom
Gala
baking
Nr
Yourself
banning
Individuals
Sykes
orchestrated
630
Phone
steered
620
specialising
starvation
##AV
##alet
##upation
seductive
##jects
##zure
Tolkien
Benito
Wizards
Submarine
dictator
Duo
Caden
approx
basins
##nc
shrink
##icles
##sponsible
249
mit
outpost
##bayashi
##rouse
##tl
Jana
Lombard
RBIs
finalized
humanities
##function
Honorable
tomato
##iot
Pie
tee
##pect
Beaufort
Ferris
bucks
##graduate
##ocytes
Directory
anxiously
##nating
flanks
##Ds
virtues
##believable
Grades
criterion
manufactures
sourced
##balt
##dance
##tano
Ying
##BF
##sett
adequately
blacksmith
totaled
trapping
expanse
Historia
Worker
Sense
ascending
housekeeper
##oos
Crafts
Resurrection
##verty
encryption
##aris
##vat
##pox
##runk
##iability
gazes
spying
##ths
helmets
wired
##zophrenia
Cheung
WR
downloads
stereotypes
239
Lucknow
bleak
Bragg
hauling
##haft
prohibit
##ermined
##castle
barony
##hta
Typhoon
antibodies
##ascism
Hawthorn
Kurdistan
Minority
Gorge
Herr
appliances
disrupt
Drugs
Lazarus
##ilia
##ryo
##tany
Gotta
Masovian
Roxy
choreographed
##rissa
turbulent
##listed
Anatomy
exiting
##det
##isław
580
Kaufman
sage
##apa
Symposium
##rolls
Kaye
##ptera
##rocław
jerking
##menclature
Guo
M1
resurrected
trophies
##lard
Gathering
nestled
serpent
Dow
reservoirs
Claremont
arbitration
chronicle
eki
##arded
##zers
##mmoth
Congregational
Astronomical
NE
RA
Robson
Scotch
modelled
slashed
##imus
exceeds
##roper
##utile
Laughing
vascular
superficial
##arians
Barclay
Caucasian
classmate
sibling
Kimberly
Shreveport
##ilde
##liche
Cheney
Deportivo
Veracruz
berries
##lase
Bed
MI
Anatolia
Mindanao
broadband
##olia
##arte
##wab
darts
##immer
##uze
believers
ordinance
violate
##wheel
##ynth
Alongside
Coupe
Hobbs
arrondissement
earl
townland
##dote
##lihood
##sla
Ghosts
midfield
pulmonary
##eno
cues
##gol
##zda
322
Siena
Sultanate
Bradshaw
Pieter
##thical
Raceway
bared
competence
##ssent
Bet
##urer
##ła
Alistair
Göttingen
appropriately
forge
##osterone
##ugen
DL
345
convoys
inventions
##resses
##cturnal
Fay
Integration
slash
##roats
Widow
barking
##fant
1A
Hooper
##cona
##runched
unreliable
##emont
##esign
##stabulary
##stop
Journalists
bony
##iba
##trata
##ège
horrific
##bish
Jocelyn
##rmon
##apon
##cier
trainers
##ulatory
1753
BR
corpus
synthesized
##bidden
##rafford
Elgin
##entry
Doherty
clockwise
##played
spins
##ample
##bley
Cope
constructions
seater
warlord
Voyager
documenting
fairies
##viator
Lviv
jewellery
suites
##gold
Maia
NME
##eavor
##kus
Eugène
furnishings
##risto
MCC
Metropolis
Older
Telangana
##mpus
amplifier
supervising
1710
buffalo
cushion
terminating
##powering
steak
Quickly
contracting
dem
sarcastically
Elsa
##hein
bastards
narratives
Takes
304
composure
typing
variance
##ifice
Softball
##rations
McLaughlin
gaped
shrines
##hogany
Glamorgan
##icle
##nai
##ntin
Fleetwood
Woodland
##uxe
fictitious
shrugs
##iper
BWV
conform
##uckled
Launch
##ductory
##mized
Tad
##stituted
##free
Bel
Chávez
messing
quartz
##iculate
##folia
##lynn
ushered
##29
##ailing
dictated
Pony
##opsis
precinct
802
Plastic
##ughter
##uno
##porated
Denton
Matters
SPD
hating
##rogen
Essential
Deck
Dortmund
obscured
##maging
Earle
##bred
##ittle
##ropolis
saturated
##fiction
##ression
Pereira
Vinci
mute
warehouses
##ún
biographies
##icking
sealing
##dered
executing
pendant
##wives
murmurs
##oko
substrates
symmetrical
Susie
##mare
Yusuf
analogy
##urage
Lesley
limitation
##rby
##ío
disagreements
##mise
embroidered
nape
unarmed
Sumner
Stores
dwell
Wilcox
creditors
##rivatization
##shes
##amia
directs
recaptured
scouting
McGuire
cradle
##onnell
Sato
insulin
mercenary
tolerant
Macquarie
transitions
cradled
##berto
##ivism
##yotes
FF
Ke
Reach
##dbury
680
##bill
##oja
##sui
prairie
##ogan
reactive
##icient
##rits
Cyclone
Sirius
Survival
Pak
##coach
##trar
halves
Agatha
Opus
contrasts
##jection
ominous
##iden
Baylor
Woodrow
duct
fortification
intercourse
##rois
Colbert
envy
##isi
Afterward
geared
##flections
accelerate
##lenching
Witness
##rrer
Angelina
Material
assertion
misconduct
Nix
cringed
tingling
##eti
##gned
Everest
disturb
sturdy
##keepers
##vied
Profile
heavenly
##kova
##victed
translating
##sses
316
Invitational
Mention
martyr
##uristic
Barron
hardness
Nakamura
405
Genevieve
reflections
##falls
jurist
##LT
Pyramid
##yme
Shoot
heck
linguist
##tower
Ives
superiors
##leo
Achilles
##phological
Christophe
Padma
precedence
grassy
Oral
resurrection
##itting
clumsy
##lten
##rue
huts
##stars
Equal
##queduct
Devin
Gaga
diocesan
##plating
##upe
##graphers
Patch
Scream
hail
moaning
tracts
##hdi
Examination
outsider
##ergic
##oter
Archipelago
Havilland
greenish
tilting
Aleksandr
Konstantin
warship
##emann
##gelist
##ought
billionaire
##blivion
321
Hungarians
transplant
##jured
##fters
Corbin
autism
pitchers
Garner
thence
Scientology
transitioned
integrating
repetitive
##dant
Rene
vomit
##burne
1661
Researchers
Wallis
insulted
wavy
##wati
Ewing
excitedly
##kor
frescoes
injustice
##achal
##lumber
##úl
novella
##sca
Liv
##enstein
##river
monstrous
topping
downfall
looming
sinks
trillion
##pont
Effect
##phi
##urley
Sites
catchment
##H1
Hopper
##raiser
1642
Maccabi
lance
##chia
##sboro
NSA
branching
retorted
tensor
Immaculate
drumming
feeder
##mony
Dyer
homicide
Temeraire
fishes
protruding
skins
orchards
##nso
inlet
ventral
##finder
Asiatic
Sul
1688
Melinda
assigns
paranormal
gardening
Tau
calming
##inge
##crow
regimental
Nik
fastened
correlated
##gene
##rieve
Sick
##minster
##politan
hardwood
hurled
##ssler
Cinematography
rhyme
Montenegrin
Packard
debating
##itution
Helens
Trick
Museums
defiance
encompassed
##EE
##TU
##nees
##uben
##ünster
##nosis
435
Hagen
cinemas
Corbett
commended
##fines
##oman
bosses
ripe
scraping
##loc
filly
Saddam
pointless
Faust
Orléans
Syriac
##♭
longitude
##ropic
Alfa
bliss
gangster
##ckling
SL
blending
##eptide
##nner
bends
escorting
##bloid
##quis
burials
##sle
##è
Ambulance
insults
##gth
Antrim
unfolded
##missible
splendid
Cure
warily
Saigon
Waste
astonishment
boroughs
##VS
##dalgo
##reshing
##usage
rue
marital
versatile
unpaid
allotted
bacterium
##coil
##cue
Dorothea
IDF
##location
##yke
RPG
##tropical
devotees
liter
##pree
Johnstone
astronaut
attends
pollen
periphery
doctrines
meta
showered
##tyn
GO
Huh
laude
244
Amar
Christensen
Ping
Pontifical
Austen
raiding
realities
##dric
urges
##dek
Cambridgeshire
##otype
Cascade
Greenberg
Pact
##cognition
##aran
##urion
Riot
mimic
Eastwood
##imating
reversal
##blast
##henian
Pitchfork
##sunderstanding
Staten
WCW
lieu
##bard
##sang
experimenting
Aquino
##lums
TNT
Hannibal
catastrophic
##lsive
272
308
##otypic
41st
Highways
aggregator
##fluenza
Featured
Reece
dispatch
simulated
##BE
Communion
Vinnie
hardcover
inexpensive
til
##adores
groundwater
kicker
blogs
frenzy
##wala
dealings
erase
Anglia
##umour
Hapoel
Marquette
##raphic
##tives
consult
atrocities
concussion
##érard
Decree
ethanol
##aen
Rooney
##chemist
##hoot
1620
menacing
Schuster
##bearable
laborers
sultan
Juliana
erased
onstage
##ync
Eastman
##tick
hushed
##yrinth
Lexie
Wharton
Lev
##PL
Testing
Bangladeshi
##bba
##usions
communicated
integers
internship
societal
##odles
Loki
ET
Ghent
broadcasters
Unix
##auer
Kildare
Yamaha
##quencing
##zman
chilled
##rapped
##uant
Duval
sentiments
Oliveira
packets
Horne
##rient
Harlan
Mirage
invariant
##anger
##tensive
flexed
sweetness
##wson
alleviate
insulting
limo
Hahn
##llars
##hesia
##lapping
buys
##oaming
mocked
pursuits
scooted
##conscious
##ilian
Ballad
jackets
##kra
hilly
##cane
Scenic
McGraw
silhouette
whipping
##roduced
##wark
##chess
##rump
Lemon
calculus
demonic
##latine
Bharatiya
Govt
Que
Trilogy
Ducks
Suit
stairway
##ceipt
Isa
regulator
Automobile
flatly
##buster
##lank
Spartans
topography
Tavi
usable
Chartered
Fairchild
##sance
##vyn
Digest
nuclei
typhoon
##llon
Alvarez
DJs
Grimm
authoritative
firearm
##chschule
Origins
lair
unmistakable
##xial
##cribing
Mouth
##genesis
##shū
##gaon
##ulter
Jaya
Neck
##UN
##oing
##static
relativity
##mott
##utive
##esan
##uveau
BT
salts
##roa
Dustin
preoccupied
Novgorod
##asus
Magnum
tempting
##histling
##ilated
Musa
##ghty
Ashland
pubs
routines
##etto
Soto
257
Featuring
Augsburg
##alaya
Bit
loomed
expects
##abby
##ooby
Auschwitz
Pendleton
vodka
##sent
rescuing
systemic
##inet
##leg
Yun
applicant
revered
##nacht
##ndas
Muller
characterization
##patient
##roft
Carole
##asperated
Amiga
disconnected
gel
##cologist
Patriotic
rallied
assign
veterinary
installing
##cedural
258
Jang
Parisian
incarcerated
stalk
##iment
Jamal
McPherson
Palma
##oken
##viation
512
Rourke
irrational
##rippled
Devlin
erratic
##NI
##payers
Ni
engages
Portal
aesthetics
##rrogance
Milne
assassins
##rots
335
385
Cambodian
Females
fellows
si
##block
##otes
Jayne
Toro
flutter
##eera
Burr
##lanche
relaxation
##fra
Fitzroy
##undy
1751
261
comb
conglomerate
ribbons
veto
##Es
casts
##ege
1748
Ares
spears
spirituality
comet
##nado
##yeh
Veterinary
aquarium
yer
Councils
##oked
##ynamic
Malmö
remorse
auditions
drilled
Hoffmann
Moe
Nagoya
Yacht
##hakti
##race
##rrick
Talmud
coordinating
##EI
##bul
##his
##itors
##ligent
##uerra
Narayan
goaltender
taxa
##asures
Det
##mage
Infinite
Maid
bean
intriguing
##cription
gasps
socket
##mentary
##reus
sewing
transmitting
##different
##furbishment
##traction
Grimsby
sprawling
Shipyard
##destine
##hropic
##icked
trolley
##agi
##lesh
Josiah
invasions
Content
firefighters
intro
Lucifer
subunit
Sahib
Myrtle
inhibitor
maneuvers
##teca
Wrath
slippery
##versing
Shoes
##dial
##illiers
##luded
##mmal
##pack
handkerchief
##edestal
##stones
Fusion
cumulative
##mell
##cacia
##rudge
##utz
foe
storing
swiped
##meister
##orra
batter
strung
##venting
##kker
Doo
Taste
immensely
Fairbanks
Jarrett
Boogie
1746
mage
Kick
legislators
medial
##ilon
##logies
##ranton
Hybrid
##uters
Tide
deportation
Metz
##secration
##virus
UFO
##fell
##orage
##raction
##rrigan
1747
fabricated
##BM
##GR
##rter
muttering
theorist
##tamine
BMG
Kincaid
solvent
##azed
Thin
adorable
Wendell
ta
##viour
pulses
##pologies
counters
exposition
sewer
Luciano
Clancy
##angelo
##riars
Showtime
observes
frankly
##oppy
Bergman
lobes
timetable
##bri
##uest
FX
##dust
##genus
Glad
Helmut
Meridian
##besity
##ontaine
Revue
miracles
##titis
PP
bluff
syrup
307
Messiah
##erne
interfering
picturesque
unconventional
dipping
hurriedly
Kerman
248
Ethnic
Toward
acidic
Harrisburg
##65
intimidating
##aal
Jed
Pontiac
munitions
##nchen
growling
mausoleum
##ération
##wami
Cy
aerospace
caucus
Doing
##around
##miring
Cuthbert
##poradic
##rovisation
##wth
evaluating
##scraper
Belinda
owes
##sitic
##thermal
##fast
economists
##lishing
##uerre
##ân
credible
##koto
Fourteen
cones
##ebrates
bookstore
towels
##phony
Appearance
newscasts
##olin
Karin
Bingham
##elves
1680
306
disks
##lston
##secutor
Levant
##vout
Micro
snuck
##ogel
##racker
Exploration
drastic
##kening
Elsie
endowment
##utnant
Blaze
##rrosion
leaking
45th
##rug
##uernsey
760
Shapiro
cakes
##ehan
##mei
##ité
##kla
repetition
successively
Friendly
Île
Koreans
Au
Tirana
flourish
Spirits
Yao
reasoned
##leam
Consort
cater
marred
ordeal
supremacy
##ritable
Paisley
euro
healer
portico
wetland
##kman
restart
##habilitation
##zuka
##Script
emptiness
communion
##CF
##inhabited
##wamy
Casablanca
pulsed
##rrible
##safe
395
Dual
Terrorism
##urge
##found
##gnolia
Courage
patriarch
segregated
intrinsic
##liography
##phe
PD
convection
##icidal
Dharma
Jimmie
texted
constituents
twitch
##calated
##mitage
##ringing
415
milling
##geons
Armagh
Geometridae
evergreen
needy
reflex
template
##pina
Schubert
##bruck
##icted
##scher
##wildered
1749
Joanne
clearer
##narl
278
Print
automation
consciously
flashback
occupations
##ests
Casimir
differentiated
policing
repay
##aks
##gnesium
Evaluation
commotion
##CM
##smopolitan
Clapton
mitochondrial
Kobe
1752
Ignoring
Vincenzo
Wet
bandage
##rassed
##unate
Maris
##eted
##hetical
figuring
##eit
##nap
leopard
strategically
##reer
Fen
Iain
##ggins
##pipe
Matteo
McIntyre
##chord
##feng
Romani
asshole
flopped
reassure
Founding
Styles
Torino
patrolling
##erging
##ibrating
##ructural
sincerity
##ät
##teacher
Juliette
##cé
##hog
##idated
##span
Winfield
##fender
##nast
##pliant
1690
Bai
Je
Saharan
expands
Bolshevik
rotate
##root
Britannia
Severn
##cini
##gering
##say
sly
Steps
insertion
rooftop
Piece
cuffs
plausible
##zai
Provost
semantic
##data
##vade
##cimal
IPA
indictment
Libraries
flaming
highlands
liberties
##pio
Elders
aggressively
##pecific
Decision
pigeon
nominally
descriptive
adjustments
equestrian
heaving
##mour
##dives
##fty
##yton
intermittent
##naming
##sets
Calvert
Casper
Tarzan
##kot
Ramírez
##IB
##erus
Gustavo
Roller
vaulted
##solation
##formatics
##tip
Hunger
colloquially
handwriting
hearth
launcher
##idian
##ilities
##lind
##locating
Magdalena
Soo
clubhouse
##kushima
##ruit
Bogotá
Organic
Worship
##Vs
##wold
upbringing
##kick
groundbreaking
##urable
##ván
repulsed
##dira
##ditional
##ici
melancholy
##bodied
##cchi
404
concurrency
H₂O
bouts
##gami
288
Leto
troll
##lak
advising
bundled
##nden
lipstick
littered
##leading
##mogeneous
Experiment
Nikola
grove
##ogram
Mace
##jure
cheat
Annabelle
Tori
lurking
Emery
Walden
##riz
paints
Markets
brutality
overrun
##agu
##sat
din
ostensibly
Fielding
flees
##eron
Pound
ornaments
tornadoes
##nikov
##organisation
##reen
##Works
##ldred
##olten
##stillery
soluble
Mata
Grimes
Léon
##NF
coldly
permitting
##inga
##reaked
Agents
hostess
##dl
Dyke
Kota
avail
orderly
##saur
##sities
Arroyo
##ceps
##egro
Hawke
Noctuidae
html
seminar
##ggles
##wasaki
Clube
recited
##sace
Ascension
Fitness
dough
##ixel
Nationale
##solidate
pulpit
vassal
570
Annapolis
bladder
phylogenetic
##iname
convertible
##ppan
Comet
paler
##definite
Spot
##dices
frequented
Apostles
slalom
##ivision
##mana
##runcated
Trojan
##agger
##iq
##league
Concept
Controller
##barian
##curate
##spersed
##tring
engulfed
inquired
##hmann
286
##dict
##osy
##raw
MacKenzie
su
##ienced
##iggs
##quitaine
bisexual
##noon
runways
subsp
##!
##"
###
##$
##%
##&
##'
##(
##)
##*
##+
##,
##-
##.
##/
##:
##;
##<
##=
##>
##?
##@
##[
##\
##]
##^
##_
##`
##{
##|
##}
##~
##¡
##¢
##£
##¥
##§
##¨
##©
##ª
##«
##¬
##®
##±
##´
##µ
##¶
##·
##¹
##º
##»
##¼
##¾
##¿
##À
##Á
##Â
##Ä
##Å
##Æ
##Ç
##È
##É
##Í
##Î
##Ñ
##Ó
##Ö
##×
##Ø
##Ú
##Ü
##Þ
##â
##ã
##æ
##ç
##î
##ï
##ð
##ñ
##ô
##õ
##÷
##û
##þ
##ÿ
##Ā
##ą
##Ć
##Č
##ď
##Đ
##đ
##ē
##ė
##ę
##ě
##ğ
##ġ
##Ħ
##ħ
##ĩ
##Ī
##İ
##ļ
##Ľ
##ľ
##Ł
##ņ
##ň
##ŋ
##Ō
##ŏ
##ő
##Œ
##œ
##ř
##Ś
##ś
##Ş
##Š
##Ţ
##ţ
##ť
##ũ
##ŭ
##ů
##ű
##ų
##ŵ
##ŷ
##ź
##Ż
##ż
##Ž
##ž
##Ə
##ƒ
##ơ
##ư
##ǎ
##ǐ
##ǒ
##ǔ
##ǫ
##Ș
##Ț
##ț
##ɐ
##ɑ
##ɔ
##ɕ
##ə
##ɛ
##ɡ
##ɣ
##ɨ
##ɪ
##ɲ
##ɾ
##ʀ
##ʁ
##ʂ
##ʃ
##ʊ
##ʋ
##ʌ
##ʐ
##ʑ
##ʒ
##ʔ
##ʰ
##ʲ
##ʳ
##ʷ
##ʻ
##ʼ
##ʾ
##ʿ
##ˈ
##ː
##ˡ
##ˢ
##ˣ
##́
##̃
##̍
##̯
##͡
##Α
##Β
##Γ
##Δ
##Ε
##Η
##Θ
##Ι
##Κ
##Λ
##Μ
##Ν
##Ο
##Π
##Σ
##Τ
##Φ
##Χ
##Ψ
##Ω
##ά
##έ
##ή
##ί
##β
##γ
##δ
##ε
##ζ
##η
##θ
##ι
##κ
##λ
##μ
##ξ
##ο
##π
##ρ
##σ
##τ
##υ
##φ
##χ
##ψ
##ω
##ό
##ύ
##ώ
##І
##Ј
##А
##Б
##В
##Г
##Д
##Е
##Ж
##З
##И
##К
##Л
##М
##Н
##О
##П
##Р
##С
##Т
##У
##Ф
##Х
##Ц
##Ч
##Ш
##Э
##Ю
##Я
##б
##в
##г
##д
##ж
##з
##к
##л
##м
##п
##с
##т
##у
##ф
##х
##ц
##ч
##ш
##щ
##ъ
##ы
##ь
##э
##ю
##ё
##і
##ї
##ј
##њ
##ћ
##Ա
##Հ
##ա
##ե
##ի
##կ
##մ
##յ
##ն
##ո
##ս
##տ
##ր
##ւ
##ְ
##ִ
##ֵ
##ֶ
##ַ
##ָ
##ֹ
##ּ
##א
##ב
##ג
##ד
##ה
##ו
##ז
##ח
##ט
##י
##כ
##ל
##ם
##מ
##ן
##נ
##ס
##ע
##פ
##צ
##ק
##ר
##ש
##ת
##،
##ء
##آ
##أ
##إ
##ئ
##ا
##ب
##ت
##ث
##ج
##ح
##خ
##ذ
##ز
##س
##ش
##ص
##ض
##ط
##ظ
##ع
##غ
##ف
##ق
##ك
##ل
##و
##ى
##َ
##ِ
##ٹ
##پ
##چ
##ک
##گ
##ہ
##ی
##ے
##ं
##आ
##क
##ग
##च
##ज
##ण
##त
##द
##ध
##न
##प
##ब
##भ
##म
##य
##र
##ल
##व
##श
##ष
##स
##ह
##ा
##ि
##ी
##ु
##े
##ो
##्
##।
##॥
##আ
##ই
##এ
##ও
##ক
##খ
##গ
##চ
##ছ
##জ
##ট
##ত
##থ
##দ
##ধ
##ন
##প
##ব
##ম
##য
##র
##ল
##শ
##স
##হ
##়
##া
##ি
##ী
##ু
##ে
##ো
##্
##য়
##க
##த
##ப
##ம
##ய
##ர
##ல
##வ
##ா
##ி
##ு
##்
##ร
##་
##ག
##ང
##ད
##ན
##བ
##མ
##ར
##ལ
##ས
##ི
##ུ
##ེ
##ོ
##ა
##ე
##ი
##ლ
##ნ
##ო
##რ
##ს
##ᴬ
##ᴵ
##ᵀ
##ᵃ
##ᵇ
##ᵈ
##ᵉ
##ᵍ
##ᵏ
##ᵐ
##ᵒ
##ᵖ
##ᵗ
##ᵘ
##ᵣ
##ᵤ
##ᵥ
##ᶜ
##ᶠ
##ḍ
##Ḥ
##ḥ
##Ḩ
##ḩ
##ḳ
##ṃ
##ṅ
##ṇ
##ṛ
##ṣ
##ṭ
##ạ
##ả
##ấ
##ầ
##ẩ
##ậ
##ắ
##ế
##ề
##ể
##ễ
##ệ
##ị
##ọ
##ố
##ồ
##ổ
##ộ
##ớ
##ờ
##ợ
##ụ
##ủ
##ứ
##ừ
##ử
##ữ
##ự
##ỳ
##ỹ
##ἀ
##ἐ
##ὁ
##ὐ
##ὰ
##ὶ
##ὸ
##ῆ
##ῖ
##ῦ
##ῶ
##‐
##‑
##‒
##–
##—
##―
##‖
##‘
##’
##‚
##“
##”
##„
##†
##‡
##•
##…
##‰
##′
##″
##⁄
##⁰
##ⁱ
##⁴
##⁵
##⁶
##⁷
##⁸
##⁹
##⁻
##ⁿ
##₅
##₆
##₇
##₈
##₉
##₊
##₍
##₎
##ₐ
##ₑ
##ₒ
##ₓ
##ₕ
##ₖ
##ₘ
##ₚ
##ₛ
##ₜ
##₤
##€
##₱
##₹
##ℓ
##№
##ℝ
##⅓
##←
##↑
##→
##↔
##⇌
##⇒
##∂
##∈
##∗
##∘
##√
##∞
##∧
##∨
##∩
##∪
##≈
##≠
##≡
##≤
##≥
##⊂
##⊆
##⊕
##⋅
##─
##│
##■
##●
##★
##☆
##☉
##♠
##♣
##♥
##♦
##♯
##⟨
##⟩
##ⱼ
##、
##。
##《
##》
##「
##」
##『
##』
##〜
##い
##う
##え
##お
##か
##き
##く
##け
##こ
##さ
##し
##す
##せ
##そ
##た
##ち
##つ
##て
##と
##な
##に
##の
##は
##ひ
##ま
##み
##む
##め
##も
##や
##ゆ
##よ
##ら
##り
##る
##れ
##ん
##ア
##ィ
##イ
##ウ
##エ
##オ
##カ
##ガ
##キ
##ク
##グ
##コ
##サ
##シ
##ジ
##ス
##ズ
##タ
##ダ
##ッ
##テ
##デ
##ト
##ド
##ナ
##ニ
##ハ
##バ
##パ
##フ
##ブ
##プ
##マ
##ミ
##ム
##ャ
##ュ
##ラ
##リ
##ル
##レ
##ロ
##ン
##・
##ー
##一
##三
##上
##下
##中
##事
##二
##井
##京
##人
##亻
##仁
##佐
##侍
##光
##公
##力
##北
##十
##南
##原
##口
##史
##司
##吉
##同
##和
##囗
##国
##國
##土
##城
##士
##大
##天
##太
##夫
##女
##子
##宀
##安
##宮
##宿
##小
##尚
##山
##島
##川
##州
##平
##年
##心
##愛
##戸
##文
##新
##方
##日
##明
##星
##書
##月
##木
##本
##李
##村
##東
##松
##林
##正
##武
##氏
##水
##氵
##江
##河
##海
##版
##犬
##王
##生
##田
##白
##皇
##省
##真
##石
##社
##神
##竹
##美
##義
##花
##藤
##西
##谷
##車
##辶
##道
##郎
##郡
##部
##野
##金
##長
##門
##陽
##青
##食
##馬
##高
##龍
##龸
##사
##씨
##의
##이
##한
##fi
##fl
##!
##(
##)
##,
##-
##/
##:
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/modeling/roi_heads/mask_head | mask_head | mask_head | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import torch
from torch import nn

from maskrcnn_benchmark.structures.bounding_box import BoxList

from .roi_mask_feature_extractors import make_roi_mask_feature_extractor
from .roi_mask_predictors import make_roi_mask_predictor
from .inference import make_roi_mask_post_processor
from .loss import make_roi_mask_loss_evaluator


def keep_only_positive_boxes(boxes):
    """
    Given a set of BoxList containing the `labels` field,
    return a set of BoxList for which `labels > 0`.

    Arguments:
        boxes (list of BoxList)
    """
    assert isinstance(boxes, (list, tuple))
    assert isinstance(boxes[0], BoxList)
    assert boxes[0].has_field("labels")
    positive_boxes = []
    positive_inds = []
    num_boxes = 0
    for boxes_per_image in boxes:
        labels = boxes_per_image.get_field("labels")
        inds_mask = labels > 0
        inds = inds_mask.nonzero().squeeze(1)
        positive_boxes.append(boxes_per_image[inds])
        positive_inds.append(inds_mask)
    return positive_boxes, positive_inds


class ROIMaskHead(torch.nn.Module):
    def __init__(self, cfg):
        super(ROIMaskHead, self).__init__()
        self.cfg = cfg.clone()
        self.feature_extractor = make_roi_mask_feature_extractor(cfg)
        self.predictor = make_roi_mask_predictor(cfg)
        self.post_processor = make_roi_mask_post_processor(cfg)
        self.loss_evaluator = make_roi_mask_loss_evaluator(cfg)

    def forward(self, features, proposals, targets=None):
        """
        Arguments:
            features (list[Tensor]): feature-maps from possibly several levels
            proposals (list[BoxList]): proposal boxes
            targets (list[BoxList], optional): the ground-truth targets.

        Returns:
            x (Tensor): the result of the feature extractor
            proposals (list[BoxList]): during training, the original proposals
                are returned. During testing, the predicted boxlists are returned
                with the `mask` field set
            losses (dict[Tensor]): During training, returns the losses for the
                head. During testing, returns an empty dict.
        """
        if self.training:
            # during training, only focus on positive boxes
            all_proposals = proposals
            proposals, positive_inds = keep_only_positive_boxes(proposals)
        if self.training and self.cfg.MODEL.ROI_MASK_HEAD.SHARE_BOX_FEATURE_EXTRACTOR:
            x = features
            x = x[torch.cat(positive_inds, dim=0)]
        else:
            x = self.feature_extractor(features, proposals)
        mask_logits = self.predictor(x)

        if not self.training:
            result = self.post_processor(mask_logits, proposals)
            return x, result, {}

        loss_mask = self.loss_evaluator(proposals, mask_logits, targets)

        return x, all_proposals, dict(loss_mask=loss_mask)


def build_roi_mask_head(cfg):
    return ROIMaskHead(cfg)
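
# Example usage (for illustration only): the mask head is typically built from a
# config and invoked inside the combined ROI heads, roughly as:
#   mask_head = build_roi_mask_head(cfg)
#   x, proposals, losses = mask_head(features, proposals, targets)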
|
PyTorch/SpeechSynthesis/FastPitch/platform | platform | DGX1_FastPitch_AMP_8GPU | #!/bin/bash
set -a
: ${NUM_GPUS:=8}
: ${BATCH_SIZE:=16}
: ${GRAD_ACCUMULATION:=2}
: ${AMP:=true}
bash scripts/train.sh "$@"
|
TensorFlow/Recommendation/WideAndDeep/utils | utils | metrics | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf

from trainer import features

# rough approximation for MAP metric for measuring ad quality
# roughness comes from batch sizes falling between groups of
# display ids
# hack because of name clashes. Probably makes sense to rename features
DISPLAY_ID_COLUMN = features.DISPLAY_ID_COLUMN


def map_custom_metric(features, labels, predictions):
    display_ids = tf.reshape(features[DISPLAY_ID_COLUMN], [-1])
    predictions = predictions['probabilities'][:, 1]
    labels = labels[:, 0]

    # Processing unique display_ids, indexes and counts
    # Sorting needed in case the same display_id occurs in two different places
    sorted_ids = tf.argsort(display_ids)
    display_ids = tf.gather(display_ids, indices=sorted_ids)
    predictions = tf.gather(predictions, indices=sorted_ids)
    labels = tf.gather(labels, indices=sorted_ids)

    _, display_ids_idx, display_ids_ads_count = tf.unique_with_counts(
        display_ids, out_idx=tf.int64)
    pad_length = 30 - tf.reduce_max(display_ids_ads_count)
    pad_fn = lambda x: tf.pad(x, [(0, 0), (0, pad_length)])

    preds = tf.RaggedTensor.from_value_rowids(
        predictions, display_ids_idx).to_tensor()
    labels = tf.RaggedTensor.from_value_rowids(
        labels, display_ids_idx).to_tensor()

    labels = tf.argmax(labels, axis=1)

    return {
        'map': tf.compat.v1.metrics.average_precision_at_k(
            predictions=pad_fn(preds),
            labels=labels,
            k=12,
            name="streaming_map")}


IS_LEAK_COLUMN = features.IS_LEAK_COLUMN


def map_custom_metric_with_leak(features, labels, predictions):
    display_ids = features[DISPLAY_ID_COLUMN]
    display_ids = tf.reshape(display_ids, [-1])
    is_leak_tf = features[IS_LEAK_COLUMN]
    is_leak_tf = tf.reshape(is_leak_tf, [-1])

    predictions = predictions['probabilities'][:, 1]
    predictions = predictions + tf.cast(is_leak_tf, tf.float32)
    labels = labels[:, 0]

    # Processing unique display_ids, indexes and counts
    # Sorting needed in case the same display_id occurs in two different places
    sorted_ids = tf.argsort(display_ids)
    display_ids = tf.gather(display_ids, indices=sorted_ids)
    predictions = tf.gather(predictions, indices=sorted_ids)
    labels = tf.gather(labels, indices=sorted_ids)

    _, display_ids_idx, display_ids_ads_count = tf.unique_with_counts(
        display_ids, out_idx=tf.int64)
    pad_length = 30 - tf.reduce_max(display_ids_ads_count)
    pad_fn = lambda x: tf.pad(x, [(0, 0), (0, pad_length)])

    preds = tf.RaggedTensor.from_value_rowids(predictions, display_ids_idx).to_tensor()
    labels = tf.RaggedTensor.from_value_rowids(labels, display_ids_idx).to_tensor()
    labels = tf.argmax(labels, axis=1)

    return {
        'map_with_leak': tf.compat.v1.metrics.average_precision_at_k(
            predictions=pad_fn(preds),
            labels=labels,
            k=12,
            name="streaming_map_with_leak")}
|
CUDA-Optimized/FastSpeech | FastSpeech | README | # FastSpeech For PyTorch and TensorRT
This repository provides a script and recipe to train the FastSpeech model to achieve state-of-the-art accuracy and is tested and maintained by NVIDIA.
It also provides an optimization in TensorRT to accelerate inference performance without loss of accuracy.
For more details, see this [talk](https://developer.nvidia.com/gtc/2020/video/s21420) and [slides](https://drive.google.com/file/d/1V-h5wBWAZpIpwg-qjwOuxZuOk4CLDRxy/view?usp=sharing) presented in GTC 2020.
## Table Of Contents
- [Model overview](#model-overview)
* [Model architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
- [Setup](#setup)
* [Requirements](#requirements)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
* [Parameters](#parameters)
* [Command-line options](#command-line-options)
* [Getting the data](#getting-the-data)
* [Dataset guidelines](#dataset-guidelines)
* [Training process](#training-process)
* [Inference process](#inference-process)
- [Performance](#performance)
* [Benchmarking](#benchmarking)
* [Training performance benchmark](#training-performance-benchmark)
* [Inference performance benchmark](#inference-performance-benchmark)
* [Results](#results)
* [Training performance results](#training-performance-results)
* [Inference performance results](#inference-performance-results)
* [Inference performance: NVIDIA DGX-1 (1x V100 16GB)](#inference-performance-nvidia-dgx-1-1x-v100-16gb)
* [Inference performance: NVIDIA T4](#inference-performance-nvidia-t4)
- [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
The [FastSpeech](https://arxiv.org/pdf/1905.09263.pdf) model is one of the state-of-the-art Text-to-Mel models. It was developed at Microsoft, and its paper was published at NeurIPS 2019. This model uses the WaveGlow vocoder to generate waveforms.
One of the main strengths of this model is that inference is extremely fast. What makes this possible is that it requires only a single feed-forward pass; no recurrence or auto-regression is needed in the model. Another benefit is that it is robust to errors, meaning it does not produce repeated or skipped words.
Our implementation of the FastSpeech model differs from the model described in the paper: it uses Tacotron2 instead of Transformer TTS as the teacher model for obtaining alignments between texts and mel-spectrograms.
This FastSpeech model is trained with mixed precision using Tensor Cores on NVIDIA Volta and Turing GPUs. Therefore, researchers can get results up to 2x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model also accelerates inference by running on TensorRT, up to 3x faster than running on the PyTorch framework on NVIDIA Volta and Turing GPUs. The models are tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
### Model architecture
FastSpeech is a Text-to-Mel model that does not rely on recurrent blocks or autoregressive logic. It consists of three parts - Phoneme-Side blocks, the Length Regulator, and Mel-Side blocks. The Phoneme-Side blocks contain an embedding layer, six Feed-Forward Transformer (FFT) blocks, and a layer that adds positional encodings. The Length Regulator contains a nested neural model, the Duration Predictor. The Mel-Side blocks are almost identical to the Phoneme-Side blocks, except for a linear layer at the tail.
The FFT block is a variant of the Transformer block. It contains a multi-head attention layer with a residual connection, two 1D-convolutional layers with residual connections, and two layer normalization layers.
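For illustration only, the following is a minimal PyTorch sketch of such an FFT block. The layer names, default sizes, and tensor layout here are assumptions chosen for readability and do not mirror the exact implementation in ./fastspeech.
```python
import torch
import torch.nn as nn

class FFTBlock(nn.Module):
    """Illustrative FFT block: self-attention and 1D convolutions, each with a residual connection."""
    def __init__(self, hidden=384, heads=2, conv_channels=1536, kernel_size=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads)
        self.norm1 = nn.LayerNorm(hidden)
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, conv_channels, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(conv_channels, hidden, kernel_size, padding=kernel_size // 2),
        )
        self.norm2 = nn.LayerNorm(hidden)

    def forward(self, x):                               # x: (batch, time, hidden)
        t = x.transpose(0, 1)                           # nn.MultiheadAttention expects (time, batch, hidden)
        attn_out, _ = self.attn(t, t, t)
        x = self.norm1(x + attn_out.transpose(0, 1))    # residual connection + layer norm
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)  # Conv1d runs on (batch, hidden, time)
        return self.norm2(x + conv_out)                 # residual connection + layer norm
```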
The Length Regulator is the key block in the FastSpeech model. One of the biggest difficulties in TTS is handling variable-length data, which is why most recent deep neural TTS models rely on recurrent blocks or autoregressive logic. The Length Regulator handles variable lengths in a completely different way: it controls the output length by repeating elements of the sequence according to their predicted durations. The Duration Predictor inside the Length Regulator predicts each phoneme's duration; it is itself a neural model consisting of two 1D-convolution layers and a fully connected layer. Finally, the Length Regulator expands each element of the sequence by its predicted duration (see the sketch after Figure 1).

Figure 1. Architecture of the FastSpeech model. Taken from the
[FastSpeech](https://arxiv.org/pdf/1905.09263.pdf) paper.
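The length regulation step itself can be sketched in a few lines. This is a simplified, illustrative example that assumes integer durations are already available; the actual implementation additionally handles masking, alignment loading, and other batching details.
```python
import torch

def length_regulate(phoneme_hiddens, durations):
    """Expand each phoneme's hidden state by its predicted duration (illustrative only).

    phoneme_hiddens: (batch, phoneme_len, hidden)
    durations:       (batch, phoneme_len) integer repeat counts
    """
    expanded = []
    for hiddens, durs in zip(phoneme_hiddens, durations):
        # repeat each time step `dur` times along the time axis
        expanded.append(torch.repeat_interleave(hiddens, durs, dim=0))
    # pad every expanded sequence to the longest one in the batch
    return torch.nn.utils.rnn.pad_sequence(expanded, batch_first=True)
```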
### Default configuration
This FastSpeech model supports multi-GPU and mixed precision training with dynamic loss scaling (see the Apex code [here](https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/loss_scaler.py)), as well as mixed precision inference. To speed up FastSpeech training, reference mel-spectrograms and alignments between texts and mel-spectrograms are generated during a preprocessing step and read directly from disk during training, instead of being computed on the fly. This model also utilizes the fused layer normalization supported by Apex (see [here](https://nvidia.github.io/apex/layernorm.html)) to get extra speed-up during training and inference.
This model is accelerated during inference by our implementation using the TensorRT Python API (see [here](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/index.html)). Custom CUDA/C++ plugins are provided for some layers to implement complex operations of the model in TensorRT and to obtain better inference performance. We also provide multi-engine inference as an experimental feature that further improves inference performance for variable input lengths. For more details, refer to [running on TensorRT](fastspeech/trt/README.md).
In summary, the following features were implemented in this model:
* Data-parallel multi-GPU training
* Dynamic loss scaling with backoff for Tensor Cores (mixed precision) training
* Accelerated inference on TensorRT using custom plugins and multi-engines approach
### Feature support matrix
The following features are supported by this model:
| Feature | FastSpeech
|----------------------------------|--------------------------
|Automatic mixed precision (AMP) | Yes
|TensorRT inferencing | Yes
#### Features
Automatic Mixed Precision (AMP) - AMP is a tool that enables Tensor Core-accelerated training. For more information, refer to [APEX AMP docs](https://nvidia.github.io/apex/amp.html).
TensorRT - a library for high-performance inference on NVIDIA GPUs, improving latency, throughput, power efficiency, and memory consumption. It builds optimized runtime engines by selecting the most performant kernels & algorithms, fusing layers, and using mixed precision. For more information, refer to [github.com/NVIDIA/TensorRT](https://github.com/NVIDIA/TensorRT).
## Setup
The following section lists the requirements that you need to meet in order to start training the FastSpeech model.
### Requirements
This repository contains Dockerfile which extends the PyTorch NGC container
and encapsulates some dependencies. Aside from these dependencies, ensure you
have the following components:
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [PyTorch 20.10-py3 NGC container](https://ngc.nvidia.com/registry/nvidia-pytorch)
or newer
* [NVIDIA Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/), [Turing](https://www.nvidia.com/en-us/geforce/turing/)<!--, or [Ampere](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/) based GPU-->
For more information about how to get started with NGC containers, see the
following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning
Documentation:
* [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
* [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#accessing_registry)
* [Running PyTorch](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/running.html#running)
For those unable to use the PyTorch NGC container, to set up the required
environment or create your own container, see the versioned
[NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html).
## Quick Start Guide
To train your model using mixed precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the FastSpeech model on the [LJSpeech](https://keithito.com/LJ-Speech-Dataset) dataset. For the specifics concerning training and inference, see the [Advanced](#advanced) section.
1. Clone the repository,
```
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples/CUDA-Optimized/FastSpeech
```
2. Download and preprocess the dataset. Data is downloaded to the ./LJSpeech-1.1 directory (on the host). The ./LJSpeech-1.1 directory is mounted to the /workspace/fastspeech/LJSpeech-1.1 location in the NGC container.
```
bash scripts/prepare_dataset.sh
```
3. Build the FastSpeech PyTorch NGC container.
```
bash scripts/docker/build.sh
```
4. Start an interactive session in the NGC container to run training/inference. After you build the container image, you can start an interactive CLI session with:
```
bash scripts/docker/interactive.sh
```
5. Start training. To preprocess mel-spectrograms for faster training, first run:
```
python fastspeech/dataset/ljspeech_dataset.py --dataset_path="./LJSpeech-1.1" --mels_path="./mels_ljspeech1.1"
```
The preprocessed mel-spectrograms are stored in the ./mels_ljspeech1.1 directory.
Next, preprocess the alignments on the LJSpeech dataset with feed-forward passes through the teacher model. Download the NVIDIA [pretrained Tacotron2 checkpoint](https://drive.google.com/file/d/1c5ZTuT7J08wLUoVZ2KkUs_VdZuJ86ZqA/view) to get a pretrained teacher model, set --tacotron2_path to the Tacotron2 checkpoint file path, and the resulting alignments are stored in --aligns_path.
```
python fastspeech/align_tacotron2.py --dataset_path="./LJSpeech-1.1" --tacotron2_path="tacotron2_statedict.pt" --aligns_path="aligns_ljspeech1.1"
```
The preprocessed alignments are stored in the ./aligns_ljspeech1.1 directory. For more information, refer to the [training process section](#training-process).
Finally, run the training script:
```
python fastspeech/train.py --dataset_path="./LJSpeech-1.1" --mels_path="./mels_ljspeech1.1" --aligns_path="./aligns_ljspeech1.1" --log_path="./logs" --checkpoint_path="./checkpoints"
```
The checkpoints and Tensorboard log files are stored in the ./checkpoints and ./logs, respectively.
Additionally, to accelerate the training using AMP, run with --use_amp:
```
python fastspeech/train.py --dataset_path="./LJSpeech-1.1" --mels_path="./mels_ljspeech1.1" --aligns_path="./aligns_ljspeech1.1" --log_path="./logs" --checkpoint_path="./checkpoints" --use_amp
```
6. Start generation. To generate waveforms with the WaveGlow vocoder, download the [pretrained WaveGlow model](https://ngc.nvidia.com/catalog/models/nvidia:waveglow_ckpt_amp_256/files?version=19.10.0) from NGC into the home directory, for example, ./nvidia_waveglow256pyt_fp16.
After you have trained the FastSpeech model, you can perform generation using the checkpoint stored in ./checkpoints. Then run:
```
python generate.py --waveglow_path="./nvidia_waveglow256pyt_fp16" --checkpoint_path="./checkpoints" --text="./test_sentences.txt"
```
The script automatically loads the latest checkpoint (if any exists); alternatively, you can pass a checkpoint file through --ckpt_file. It reads the input texts from ./test_sentences.txt and stores the results in the ./results directory. You can also set the result directory path with --results_path.
You can also run with a sample text:
```
python generate.py --waveglow_path="./nvidia_waveglow256pyt_fp16" --checkpoint_path="./checkpoints" --text="The more you buy, the more you save."
```
7. Accelerate generation (inference of FastSpeech and WaveGlow) with TensorRT. Set the parameters config file with --hparam=trt.yaml to enable TensorRT inference mode. To prepare for running WaveGlow on TensorRT, first get an ONNX file via [DeepLearningExamples/PyTorch/SpeechSynthesis/Tacotron2/tensorrt](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2/tensorrt), convert it to a TensorRT engine using scripts/waveglow/convert_onnx2trt.py, and copy it into the home directory, for example, ./waveglow.fp16.trt. Then run with --waveglow_engine_path:
```
python generate.py --hparam=trt.yaml --waveglow_path="./nvidia_waveglow256pyt_fp16" --checkpoint_path="./checkpoints" --text="./test_sentences.txt" --waveglow_engine_path="waveglow.fp16.trt"
```
## Advanced
The following sections provide greater details of the dataset, running training and inference, and the training results.
### Scripts and sample code
The ./fastspeech directory contains models and scripts for data processing, training/inference, and estimating performance.
* train.py: the FastSpeech model training script.
* infer.py: the FastSpeech model inference script.
* perf_infer.py: the script for estimating inference performance.
* align_tacotron2.py: the script for preprocessing alignments.
The ./fastspeech/trt directory contains the FastSpeech TensorRT model, inferencer and plugins for TensorRT.
And, ./generate.py is the script for generating waveforms with a vocoder.
### Parameters
All parameters of the FastSpeech model and for training/inference are defined in parameters config files in ./fastspeech/hparams.
The default config file, base.yaml, contains the most common parameters including paths, audio processing, and model hyperparams. The default config file for training, train.yaml, contains parameters used during training such as learning rate, batch size, and number of steps. And the default config file for inference, infer.yaml, contains parameters required for inference including batch size and usage of half precision. For more details, refer to the config files, i.e., base.yaml, train.yaml, and infer.yaml in ./fastspeech/hparams.
You can also define a new config file by overriding the default config, and set the config file via a command-line option --hparam, for example:
```yaml
# File name: ./fastspeech/hparams/my_train.yaml
# Inherit all parameters from train.yaml.
parent_yaml: "train.yaml"
# Override the learning rate.
learning_rate: 0.0005
```
```
python fastspeech/train.py --hparam=my_train.yaml ...
```
### Command-line options
To see the full list of available options and their descriptions, use the `-- -h` or `-- --help` command-line option, for example:
```
python fastspeech/train.py -- -h
```
Although it will not display all parameters defined in the config files, you can override any parameters in the config files, for example:
```
python fastspeech/train.py ... --batch_size=8 --final_steps=64000
```
### Getting the data
The FastSpeech model was trained on the LJSpeech-1.1 dataset. This repository contains the ./scripts/prepare_dataset.sh script which will automatically download and extract the whole dataset. By default, data will be extracted to the ./LJSpeech-1.1 directory. The dataset directory contains a README file, a wavs directory with all audio samples, and a file metadata.csv that contains audio file names and the corresponding transcripts.
#### Dataset guidelines
The LJSpeech dataset has 13,100 clips that amount to about 24 hours of speech. Since the original dataset has all transcripts in the metadata.csv file, the ./scripts/prepare_dataset.sh script partitions the metadata.csv into sub-meta files for training/test set - metadata_train.csv and metadata_test.csv containing 13,000 and 100 transcripts respectively.
### Training process
To accelerate the training performance, preprocessing of alignments between texts and mel-spectrograms is performed prior to the training iterations.
The FastSpeech model requires reference alignments between texts and mel-spectrograms extracted from an auto-regressive TTS teacher model. As Tacotron2 is used as the teacher in our implementation, download the NVIDIA [pretrained Tacotron2 checkpoint](https://drive.google.com/file/d/1c5ZTuT7J08wLUoVZ2KkUs_VdZuJ86ZqA/view) to use for the preprocessing of the alignments.
Run ```align_tacotron2.py``` to get alignments on the LJSpeech dataset with feed-forward passes through the teacher model. --tacotron2_path sets the Tacotron2 checkpoint file path, and the resulting alignments are stored in --aligns_path. The alignments are then loaded during training.
```
python fastspeech/align_tacotron2.py --dataset_path="./LJSpeech-1.1" --tacotron2_path="tacotron2_statedict.pt" --aligns_path="aligns_ljspeech1.1"
```
You can also preprocess mel-spectrograms for faster training. The result mel-spectrograms are stored in --mels_path and loaded during training. If --mels_path is not set, mel-spectrograms are processed during training.
Run ```ljspeech_dataset.py```
```
python fastspeech/dataset/ljspeech_dataset.py --dataset_path="./LJSpeech-1.1" --mels_path="mels_ljspeech1.1"
```
#### Accelerated training
NVIDIA [APEX](https://github.com/NVIDIA/apex) library supports a simple method to obtain up to 2x speed-up during training. The library provides easy-to-use APIs for using AMP and layer fusions.
To use AMP during training, run with --use_amp
```
python fastspeech/train.py ... --use_amp
```
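For reference, the following is a minimal, self-contained sketch of what Apex AMP does under the hood when --use_amp is set. The tiny model, optimizer, and tensors below are placeholders for illustration, not the actual objects used by fastspeech/train.py.
```python
import torch
from apex import amp

model = torch.nn.Linear(80, 80).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "O1" enables mixed precision with dynamic loss scaling.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

inputs = torch.randn(16, 80, device="cuda")
targets = torch.randn(16, 80, device="cuda")

loss = torch.nn.functional.mse_loss(model(inputs), targets)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()   # backward pass runs on the dynamically scaled loss
optimizer.step()
```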
Another approach for extra speed-up during training is fusing operations. To use fused layer normalization, set --fused_layernorm.
```
python fastspeech/train.py ... --use_amp --fused_layernorm
```
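The fused layer normalization mentioned above is provided by Apex as a drop-in replacement for torch.nn.LayerNorm. A minimal sketch follows; the hidden size is only an example value.
```python
import torch
from apex.normalization import FusedLayerNorm

hidden_size = 384                      # example value only
norm = FusedLayerNorm(hidden_size).cuda()

x = torch.randn(16, 100, hidden_size, device="cuda")
y = norm(x)                            # same semantics as torch.nn.LayerNorm, but a fused CUDA kernel
```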
### Inference process
```infer.py``` is provided to test the FastSpeech model on the LJSpeech dataset. --n_iters is the number of batches to infer. To run in FP16, run with --use_fp16.
```
python fastspeech/infer.py --dataset_path="./LJSpeech-1.1" --checkpoint_path="./checkpoints" --n_iters=10 --use_fp16
```
#### Accelerated inference
To accelerate inference with TensorRT, set --hparam=trt.yaml.
```
python fastspeech/infer.py --hparam=trt.yaml --dataset_path="./LJSpeech-1.1" --checkpoint_path="./checkpoints" --n_iters=10 --use_fp16
```
For more details, refer to [accelerating inference with TensorRT](fastspeech/trt/README.md).
#### Generation
To generate waveforms with WaveGlow Vocoder, get [pretrained WaveGlow model](https://ngc.nvidia.com/catalog/models/nvidia:waveglow_ckpt_amp_256/files?version=19.10.0) from NGC into the home directory, for example, ./nvidia_waveglow256pyt_fp16.
Run generate.py with:
* --text - an input text or the text file path.
* --results_path - result waveforms directory path. (default=./results).
* --ckpt_file - checkpoint file path. (default checkpoint file is the latest file in --checkpoint_path)
```
python generate.py --waveglow_path="./nvidia_waveglow256pyt_fp16" --text="The more you buy, the more you save."
```
or
```
python generate.py --waveglow_path="./nvidia_waveglow256pyt_fp16" --text=test_sentences.txt
```
Sample result waveforms are [here](samples).
To generate waveforms with the whole pipeline of FastSpeech and WaveGlow with TensorRT, extract a WaveGlow TRT engine file through https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2/tensorrt and run generate.py with --hparam=trt.yaml and --waveglow_engine_path.
```
python generate.py --hparam=trt.yaml --waveglow_path="./nvidia_waveglow256pyt_fp16" --waveglow_engine_path="waveglow.fp16.trt" --text="The more you buy, the more you save."
```
Sample result waveforms are [FP32](fastspeech/trt/samples) and [FP16](fastspeech/trt/samples_fp16).
## Performance
The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to [NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference).
### Benchmarking
The following section shows how to run benchmarks measuring the model performance in training and inference modes.
#### Training performance benchmark
To benchmark the training performance, set CUDA_VISIBLE_DEVICES, depending on GPU count:
* for 1 GPU,
```
export CUDA_VISIBLE_DEVICES=0
```
* for 4 GPUs,
```
export CUDA_VISIBLE_DEVICES=0,1,2,3
```
and run on a specific batch size:
* in FP32
```
python fastspeech/train.py --batch_size=BATCH_SIZE
```
* in mixed precision
```
python fastspeech/train.py --batch_size=BATCH_SIZE --use_amp
```
#### Inference performance benchmark
Set CUDA_VISIBLE_DEVICES=0 to use single GPU,
```
export CUDA_VISIBLE_DEVICES=0
```
and run on a specific batch size:
* in FP32
```
python fastspeech/perf_infer.py --batch_size=BATCH_SIZE
```
* in FP16
```
python fastspeech/perf_infer.py --batch_size=BATCH_SIZE --use_fp16
```
To benchmark the inference performance with vocoder,
```
python fastspeech/perf_infer.py --batch_size=BATCH_SIZE --with_vocoder --waveglow_path=WAVEGLOW_PATH
```
Finally, to benchmark the inference performance on TensorRT,
```
python fastspeech/perf_infer.py --hparam=trt.yaml --batch_size=BATCH_SIZE
```
* with vocoder
```
python fastspeech/perf_infer.py --hparam=trt.yaml --batch_size=BATCH_SIZE --with_vocoder --waveglow_path=WAVEGLOW_PATH
```
### Results
The following sections provide details on how we achieved our performance and accuracy in training and inference.
#### Training performance results
Our results were obtained by running the script in [training performance benchmark](#training-performance-benchmark) on <!--NVIDIA DGX A100 with 8x A100 40G GPUs and -->NVIDIA DGX-1 with 8x V100 16G GPUs. Performance numbers (in number of mels per second) were averaged over an entire training epoch.
<!-- ##### Training performance: NVIDIA DGX A100 (8x A100 40GB)
| GPUs | Batch size / GPU | Throughput(mels/s) - FP32 | Throughput(mels/s) - mixed precision | Throughput speedup (FP32 - mixed precision) | Multi-GPU Weak scaling - FP32 | Multi-GPU Weak scaling - mixed precision
|---|----|--------|--------|------|-----|------|
| 1 | 32 | | | | | 1 |
| 4 | 32 | | | | | |
| 8 | 32 | | | | | | -->
##### Training performance: NVIDIA DGX-1 (8x V100 16GB)
| GPUs | Batch size / GPU | Throughput(mels/s) - FP32 | Throughput(mels/s) - mixed precision | Throughput speedup (FP32 - mixed precision) | Multi-GPU Weak scaling - FP32 | Multi-GPU Weak scaling - mixed precision
|---|----|--------|--------|------|-----|------|
| 1 | 32 | 31674 | 63431 | 2.00 | 1 | 1 |
| 4 | 32 | 101115 | 162847 | 1.61 | 3.19| 2.57 |
| 8 | 32 | 167650 | 188251 | 1.12 | 5.29| 2.97 |
#### Inference performance results
Our results were obtained by running the script in [inference performance benchmark](#inference-performance-benchmark) on NVIDIA DGX-1 with 1x V100 16GB GPU and on NVIDIA T4. The following tables show inference statistics for the FastSpeech and WaveGlow text-to-speech system on PyTorch and comparisons by framework with batch size 1 in FP16, gathered from 1000 inference runs. Latency is measured from the start of FastSpeech inference to the end of WaveGlow inference. The tables include average latency, latency standard deviation, and latency confidence intervals. Throughput is measured as the number of generated audio samples per second. RTF is the real-time factor, which tells how many seconds of speech are generated in 1 second of compute; for example, at the 22,050 Hz sampling rate of the generated audio, a throughput of 681,773 samples/s corresponds to an RTF of about 31. The WaveGlow model used is a 256-channel model. The numbers reported below were taken with a moderate input length of 128 characters.
##### Inference performance: NVIDIA DGX-1 (1x V100 16GB)
| Batch size | Precision | Avg latency (s) | Std latency(s) | Latency tolerance interval 90% (s) | Latency tolerance interval 95% (s) | Latency tolerance interval 99% (s) | Throughput (samples/s) | Avg RTF | Speed-up with mixed precision |
|------------|-----------|-----------------|----------------|------------------------------------|------------------------------------|--------------------|---------------------|---------|-------------------------------|
| 1 | FP16 | 0.2287 | 0.001 | 0.2295 | 0.2297 | 0.2303 | 681,773 | 30.92 | 1.50 |
| 4 | FP16 | 0.5003 | 0.0016 | 0.502 | 0.5023 | 0.5032 | 1,244,466 | 14.11 | 2.57 |
| 8 | FP16 | 0.9695 | 0.0023 | 0.9722 | 0.9732 | 0.9748 | 1,284,339 | 7.28 | 2.73 |
| 1 | FP32 | 0.3428 | 0.0016 | 0.3445 | 0.3449 | 0.3458 | 454,833 | 20.63 | 1.00 |
| 4 | FP32 | 1.287 | 0.0039 | 1.2916 | 1.2927 | 1.2954 | 484,558 | 5.50 | 1.00 |
| 8 | FP32 | 2.6481 | 0.0041 | 2.6535 | 2.6549 | 2.657 | 470,992 | 2.67 | 1.00 |
| Framework | Batch size | Precision | Avg latency (s) | Std latency(s) | Latency tolerance interval 90% (s) | Latency tolerance interval 95% (s) | Latency tolerance interval 99% (s) | Throughput (samples/s) | Avg RTF | Speed-up (PyT - PyT+TRT) |
|-----------|------------|-----------|-----------------|----------------|------------------------------------|------------------------------------|--------------------|---------------------|---------|-------------------------------|
| PyT | 1 | FP16 | 0.2287 | 0.001 | 0.2295 | 0.2297 | 0.2303 | 681,773 | 30.92 | 1 |
| PyT+TRT | 1 | FP16 | 0.1115 | 0.0007 | 0.1122 | 0.1124 | 0.1135 | 1,398,343 | 63.42 | 2.05 |
| PyT | 4 | FP16 | 0.5003 | 0.0016 | 0.502 | 0.5023 | 0.5032 | 1,244,466 | 14.11 | 1 |
| PyT+TRT | 4 | FP16 | 0.3894 | 0.0019 | 0.3917 | 0.3925 | 0.3961 | 1,599,005 | 18.13 | 1.28 |
##### Inference performance: NVIDIA T4
| Batch size | Precision | Avg latency (s) | Std latency(s) | Latency tolerance interval 90% (s) | Latency tolerance interval 95% (s) | Latency tolerance interval 99% (s) | Throughput (samples/s) | Avg RTF | Speed-up with mixed precision |
|------------|-----------|-----------------|----------------|------------------------------------|------------------------------------|--------------------|---------------------|---------|-------------------------------|
| 1 | FP16 | 0.9345 | 0.0294 | 0.9662 | 0.9723 | 0.9806 | 167,003 | 7.57 | 1.28 |
| 4 | FP16 | 3.7815 | 0.0877 | 3.9078 | 3.9393 | 3.9632 | 164,730 | 1.87 | 1.28 |
| 8 | FP16 | 7.5722 | 0.1764 | 7.8273 | 7.8829 | 7.9286 | 164,530 | 0.93 | 1.21 |
| 1 | FP32 | 1.1952 | 0.0368 | 1.2438 | 1.2487 | 1.2589 | 130,572 | 5.92 | 1.00 |
| 4 | FP32 | 4.8578 | 0.1215 | 5.0343 | 5.0651 | 5.1027 | 128,453 | 1.46 | 1.00 |
| 8 | FP32 | 9.1563 | 0.4114 | 9.4049 | 9.4571 | 9.5194 | 136,367 | 0.77 | 1.00 |
| Framework | Batch size | Precision | Avg latency (s) | Std latency(s) | Latency tolerance interval 90% (s) | Latency tolerance interval 95% (s) | Latency tolerance interval 99% (s) | Throughput (samples/s) | Avg RTF | Speed-up (PyT - PyT+TRT) |
|-----------|------------|-----------|-----------------|----------------|------------------------------------|------------------------------------|--------------------|---------------------|---------|-------------------------------|
| PyT | 1 | FP16 | 0.9345 | 0.0294 | 0.9662 | 0.9723 | 0.9806 | 167,003 | 7.57 | 1 |
| PyT+TRT | 1 | FP16 | 0.3234 | 0.0058 | 0.3304 | 0.3326 | 0.3358 | 482,286 | 21.87 | 2.89 |
## Release notes
### Changelog
Oct 2020
- PyTorch 1.7, TensorRT 7.2 support <!--and Nvidia Ampere architecture support-->
July 2020
- Initial release
### Known issues
There are no known issues in this release. |
PyTorch/Forecasting/TFT | TFT | inference | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pandas as pd
import numpy as np
import pickle
import argparse
import torch
from torch.utils.data import DataLoader
from torch.cuda import amp
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm
from modeling import TemporalFusionTransformer
from configuration import ElectricityConfig
from data_utils import TFTDataset
from utils import PerformanceMeter
from criterions import qrisk
import dllogger
from log_helper import setup_logger
def _unscale_per_id(config, values, ids, scalers):
num_horizons = config.example_length - config.encoder_length + 1
flat_values = pd.DataFrame(
values,
columns=[f't{j}' for j in range(num_horizons - values.shape[1], num_horizons)]
)
flat_values['id'] = ids
df_list = []
for idx, group in flat_values.groupby('id'):
scaler = scalers[idx]
group_copy = group.copy()
for col in group_copy.columns:
if not 'id' in col:
_col = np.expand_dims(group_copy[col].values, -1)
_t_col = scaler.inverse_transform(_col)[:,-1]
group_copy[col] = _t_col
df_list.append(group_copy)
flat_values = pd.concat(df_list, axis=0)
flat_values = flat_values[[col for col in flat_values if not 'id' in col]]
return flat_values.values
def _unscale(config, values, scaler):
num_horizons = config.example_length - config.encoder_length + 1
flat_values = pd.DataFrame(
values,
columns=[f't{j}' for j in range(num_horizons - values.shape[1], num_horizons)]
)
for col in flat_values.columns:
if not 'id' in col:
_col = np.expand_dims(flat_values[col].values, -1)
_t_col = scaler.inverse_transform(_col)[:,-1]
flat_values[col] = _t_col
flat_values = flat_values[[col for col in flat_values if not 'id' in col]]
return flat_values.values
def predict(args, config, model, data_loader, scalers, cat_encodings, extend_targets=False):
model.eval()
predictions = []
targets = []
ids = []
perf_meter = PerformanceMeter(benchmark_mode=not args.disable_benchmark)
n_workers = args.distributed_world_size if hasattr(args, 'distributed_world_size') else 1
with torch.jit.fuser("fuser2"):
for step, batch in enumerate(data_loader):
perf_meter.reset_current_lap()
with torch.no_grad():
batch = {key: tensor.cuda() if tensor.numel() else None for key, tensor in batch.items()}
ids.append(batch['id'][:,0,:])
targets.append(batch['target'])
predictions.append(model(batch).float())
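            # the first iterations (warm-up) and the last, possibly partial, batch are excluded from the averaged timings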
perf_meter.update(args.batch_size * n_workers,
exclude_from_total=step in [0, 1, 2, len(data_loader)-1])
targets = torch.cat(targets, dim=0).cpu().numpy()
if not extend_targets:
targets = targets[:,config.encoder_length:,:]
predictions = torch.cat(predictions, dim=0).cpu().numpy()
if config.scale_per_id:
ids = torch.cat(ids, dim=0).cpu().numpy()
unscaled_predictions = np.stack(
[_unscale_per_id(config, predictions[:,:,i], ids, scalers) for i in range(len(config.quantiles))],
axis=-1)
unscaled_targets = np.expand_dims(_unscale_per_id(config, targets[:,:,0], ids, scalers), axis=-1)
else:
ids = None
unscaled_predictions = np.stack(
[_unscale(config, predictions[:,:,i], scalers['']) for i in range(len(config.quantiles))],
axis=-1)
unscaled_targets = np.expand_dims(_unscale(config, targets[:,:,0], scalers['']), axis=-1)
return unscaled_predictions, unscaled_targets, ids, perf_meter
def visualize_v2(args, config, model, data_loader, scalers, cat_encodings):
    unscaled_predictions, unscaled_targets, ids, _ = predict(args, config, model, data_loader, scalers, cat_encodings, extend_targets=True)
    # predict() returns numpy arrays; convert to tensors for the padding and concatenation below
    unscaled_predictions = torch.from_numpy(unscaled_predictions)
    unscaled_targets = torch.from_numpy(unscaled_targets)
    num_horizons = config.example_length - config.encoder_length + 1
    pad = unscaled_predictions.new_full((unscaled_targets.shape[0], unscaled_targets.shape[1] - unscaled_predictions.shape[1], unscaled_predictions.shape[2]), fill_value=float('nan'))
pad[:,-1,:] = unscaled_targets[:,-num_horizons,:]
unscaled_predictions = torch.cat((pad, unscaled_predictions), dim=1)
ids = torch.from_numpy(ids.squeeze())
joint_graphs = torch.cat([unscaled_targets, unscaled_predictions], dim=2)
graphs = {i:joint_graphs[ids == i, :, :] for i in set(ids.tolist())}
for key, g in graphs.items():
for i, ex in enumerate(g):
df = pd.DataFrame(ex.numpy(),
index=range(num_horizons - ex.shape[0], num_horizons),
columns=['target'] + [f'P{int(q*100)}' for q in config.quantiles])
fig = df.plot().get_figure()
ax = fig.get_axes()[0]
_values = df.values[config.encoder_length-1:,:]
ax.fill_between(range(num_horizons), _values[:,1], _values[:,-1], alpha=0.2, color='green')
os.makedirs(os.path.join(args.results, 'single_example_vis', str(key)), exist_ok=True)
fig.savefig(os.path.join(args.results, 'single_example_vis', str(key), f'{i}.pdf'))
def inference(args, config, model, data_loader, scalers, cat_encodings):
unscaled_predictions, unscaled_targets, ids, perf_meter = predict(args, config, model, data_loader, scalers, cat_encodings)
    if args.joint_visualization or args.save_predictions:
        ids = torch.from_numpy(ids.squeeze())
        #ids = torch.cat([x['id'][0] for x in data_loader.dataset])
        # predict() returns numpy arrays; convert to tensors before concatenating
        joint_graphs = torch.cat([torch.from_numpy(unscaled_targets), torch.from_numpy(unscaled_predictions)], dim=2)
graphs = {i:joint_graphs[ids == i, :, :] for i in set(ids.tolist())}
for key, g in graphs.items(): #timeseries id, joint targets and predictions
_g = {'targets': g[:,:,0]}
_g.update({f'P{int(q*100)}':g[:,:,i+1] for i, q in enumerate(config.quantiles)})
if args.joint_visualization:
summary_writer = SummaryWriter(log_dir=os.path.join(args.results, 'predictions_vis', str(key)))
for q, t in _g.items(): # target and quantiles, timehorizon values
if q == 'targets':
targets = torch.cat([t[:,0], t[-1,1:]]) # WIP
# We want to plot targets on the same graph as predictions. Probably could be written better.
for i, val in enumerate(targets):
summary_writer.add_scalars(str(key), {f'{q}':val}, i)
continue
# Tensor t contains different time horizons which are shifted in phase
# Next lines realign them
y = t.new_full((t.shape[0] + t.shape[1] -1, t.shape[1]), float('nan'))
for i in range(y.shape[1]):
y[i:i+t.shape[0], i] = t[:,i]
for i, vals in enumerate(y): # timestep, timehorizon values value
summary_writer.add_scalars(str(key), {f'{q}_t+{j+1}':v for j,v in enumerate(vals) if v == v}, i)
summary_writer.close()
if args.save_predictions:
for q, t in _g.items():
df = pd.DataFrame(t.tolist())
df.columns = [f't+{i+1}' for i in range(len(df.columns))]
os.makedirs(os.path.join(args.results, 'predictions', str(key)), exist_ok=True)
df.to_csv(os.path.join(args.results, 'predictions', str(key), q+'.csv'))
#losses = QuantileLoss(config)(torch.from_numpy(unscaled_predictions).contiguous(),
# torch.from_numpy(unscaled_targets).contiguous()).numpy()
#normalizer = np.mean(np.abs(unscaled_targets))
#q_risk = 2 * losses / normalizer
risk = qrisk(unscaled_predictions, unscaled_targets, np.array(config.quantiles))
perf_dict = {
'throughput': perf_meter.avg,
'latency_avg': perf_meter.total_time/len(perf_meter.intervals),
'latency_p90': perf_meter.p(90),
'latency_p95': perf_meter.p(95),
'latency_p99': perf_meter.p(99),
'total_infernece_time': perf_meter.total_time,
}
return risk, perf_dict
def main(args):
setup_logger(args)
# Set up model
state_dict = torch.load(args.checkpoint)
config = state_dict['config']
model = TemporalFusionTransformer(config).cuda()
model.load_state_dict(state_dict['model'])
model.eval()
model.cuda()
# Set up dataset
test_split = TFTDataset(args.data, config)
data_loader = DataLoader(test_split, batch_size=args.batch_size, num_workers=4)
scalers = pickle.load(open(args.tgt_scalers, 'rb'))
cat_encodings = pickle.load(open(args.cat_encodings, 'rb'))
if args.visualize:
# TODO: abstract away all forms of visualization.
visualize_v2(args, config, model, data_loader, scalers, cat_encodings)
quantiles, perf_dict = inference(args, config, model, data_loader, scalers, cat_encodings)
quantiles = {'test_p10': quantiles[0].item(), 'test_p50': quantiles[1].item(), 'test_p90': quantiles[2].item(), 'sum':sum(quantiles).item()}
finish_log = {**quantiles, **perf_dict}
dllogger.log(step=(), data=finish_log, verbosity=1)
print('Test q-risk: P10 {test_p10} | P50 {test_p50} | P90 {test_p90}'.format(**quantiles))
print('Latency:\n\tAverage {:.3f}s\n\tp90 {:.3f}s\n\tp95 {:.3f}s\n\tp99 {:.3f}s'.format(
perf_dict['latency_avg'], perf_dict['latency_p90'], perf_dict['latency_p95'], perf_dict['latency_p99']))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--checkpoint', type=str,
help='Path to the checkpoint')
parser.add_argument('--data', type=str,
help='Path to the test split of the dataset')
parser.add_argument('--tgt_scalers', type=str,
help='Path to the tgt_scalers.bin file produced by the preprocessing')
parser.add_argument('--cat_encodings', type=str,
help='Path to the cat_encodings.bin file produced by the preprocessing')
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--visualize', action='store_true', help='Visualize predictions - each example on the separate plot')
parser.add_argument('--joint_visualization', action='store_true', help='Visualize predictions - each timeseries on separate plot. Projections will be concatenated.')
parser.add_argument('--save_predictions', action='store_true')
parser.add_argument('--results', type=str, default='/results')
parser.add_argument('--log_file', type=str, default='dllogger.json')
parser.add_argument("--disable_benchmark", action='store_true', help='Disable benchmarking mode')
ARGS = parser.parse_args()
main(ARGS)
|
Tools/PyTorch/TimeSeriesPredictionPlatform | TimeSeriesPredictionPlatform | launch_triton_configure | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
import hydra
warnings.filterwarnings("ignore")
@hydra.main(config_path="conf/", config_name="converter_config")
def main(cfg):
print(cfg)
cfg.deployment.config.checkpoint=cfg.checkpoint
hydra.utils.call(cfg, _recursive_=False)
if __name__ == "__main__":
main()
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/tests | tests | test_fspecs | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
NAMES=${1:-'*.yaml'}
COMMON_OPTS="--xla --amp"
bash test_with_opts.sh "${NAMES}" "${COMMON_OPTS}"
#
# usage:
# docker build . -t nvidia_dlrm_tf
# docker run --security-opt seccomp=unconfined --runtime=nvidia -it --rm --ipc=host -v ${PWD}/data:/data nvidia_dlrm_tf bash
# cd tests
# bash test_fspecs.sh
|
PyTorch/Segmentation/nnUNet/triton/deployment_toolkit/bermuda | bermuda | utils | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import Counter
from typing import Callable, Dict, List
import networkx as nx
from ..core import ShapeSpec
def infer_precision(
nx_graph: nx.Graph,
input_names: List[str],
output_names: List[str],
get_node_dtype_fn: Callable,
):
node_dtypes = [nx_graph.nodes[node_name].get("dtype", None) for node_name in nx_graph.nodes]
node_dtypes = [dt for dt in node_dtypes if dt is None or dt.kind not in ["i", "b"]]
dtypes_counter = Counter(node_dtypes)
return dtypes_counter.most_common()[0][0]
def get_shapes_with_dynamic_axes(dataloader, batch_size_dim=0):
def _set_dynamic_shapes(t, shapes):
for k, v in t.items():
shape = list(v.shape)
for dim, s in enumerate(shape):
if shapes[k][dim] != -1 and shapes[k][dim] != s:
shapes[k][dim] = -1
## get all shapes from input and output tensors
input_shapes = {}
output_shapes = {}
for batch in dataloader:
_, x, y = batch
for k, v in x.items():
input_shapes[k] = list(v.shape)
for k, v in y.items():
output_shapes[k] = list(v.shape)
break
# based on max <max_num_iters> iterations, check which
# dimensions differ to determine dynamic_axes
max_num_iters = 100
for idx, batch in enumerate(dataloader):
if idx >= max_num_iters:
break
_, x, y = batch
_set_dynamic_shapes(x, input_shapes)
_set_dynamic_shapes(y, output_shapes)
return input_shapes, output_shapes
def get_dynamic_axes(dataloader, batch_size_dim=0):
input_shapes, output_shapes = get_shapes_with_dynamic_axes(dataloader, batch_size_dim)
all_shapes = {**input_shapes, **output_shapes}
dynamic_axes = {}
    for k, shape in all_shapes.items():
        for idx, s in enumerate(shape):
            if s == -1:
                # accumulate every dynamic dimension instead of overwriting the dict on each hit
                dynamic_axes.setdefault(k, {})[idx] = k + "_" + str(idx)
for k, v in all_shapes.items():
if k in dynamic_axes:
dynamic_axes[k].update({batch_size_dim: "batch_size_" + str(batch_size_dim)})
else:
dynamic_axes[k] = {batch_size_dim: "batch_size_" + str(batch_size_dim)}
return dynamic_axes
def get_input_shapes(dataloader, max_batch_size=1) -> Dict[str, ShapeSpec]:
def init_counters_and_shapes(x, counters, min_shapes, max_shapes):
for k, v in x.items():
counters[k] = Counter()
min_shapes[k] = [float("inf")] * v.ndim
max_shapes[k] = [float("-inf")] * v.ndim
counters = {}
min_shapes: Dict[str, tuple] = {}
max_shapes: Dict[str, tuple] = {}
for idx, batch in enumerate(dataloader):
ids, x, y = batch
if idx == 0:
init_counters_and_shapes(x, counters, min_shapes, max_shapes)
for k, v in x.items():
shape = v.shape
counters[k][shape] += 1
min_shapes[k] = tuple([min(a, b) for a, b in zip(min_shapes[k], shape)])
max_shapes[k] = tuple([max(a, b) for a, b in zip(max_shapes[k], shape)])
opt_shapes: Dict[str, tuple] = {}
for k, v in counters.items():
opt_shapes[k] = v.most_common(1)[0][0]
shapes = {}
for k in opt_shapes.keys(): # same keys in min_shapes and max_shapes
shapes[k] = ShapeSpec(
min=(1,) + min_shapes[k][1:],
max=(max_batch_size,) + max_shapes[k][1:],
opt=(max_batch_size,) + opt_shapes[k][1:],
)
return shapes
|
PyTorch/Forecasting/TFT/scripts | scripts | run_electricity | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
: ${SEED:=1}
: ${LR:=1e-3}
: ${NGPU:=8}
: ${BATCH_SIZE:=1024}
: ${EPOCHS:=30}
python -m torch.distributed.run --nproc_per_node=${NGPU} train.py \
--dataset electricity \
--data_path /data/processed/electricity_bin \
--batch_size=${BATCH_SIZE} \
--sample 450000 50000 \
--lr ${LR} \
--epochs ${EPOCHS} \
--seed ${SEED} \
--use_amp \
--results /results/TFT_electricity_bs${NGPU}x${BATCH_SIZE}_lr${LR}/seed_${SEED}
|
PyTorch/Detection/SSD | SSD | README | # SSD300 v1.1 For PyTorch
This repository provides a script and recipe to train the SSD300 v1.1 model to achieve state of the art accuracy, and is tested and maintained by NVIDIA.
## Table Of Contents
- [Model overview](#model-overview)
* [Model architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
* [Mixed precision training](#mixed-precision-training)
* [Enabling mixed precision](#enabling-mixed-precision)
* [Enabling TF32](#enabling-tf32)
- [Setup](#setup)
* [Requirements](#requirements)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
* [Parameters](#parameters)
* [Command-line options](#command-line-options)
* [Getting the data](#getting-the-data)
* [Dataset guidelines](#dataset-guidelines)
* [Data preprocessing](#data-preprocessing)
* [Data augmentation](#data-augmentation)
* [Training process](#training-process)
* [Evaluation process](#evaluation-process)
* [Inference process](#inference-process)
- [Performance](#performance)
* [Benchmarking](#benchmarking)
* [Training performance benchmark](#training-performance-benchmark)
* [Inference performance benchmark](#inference-performance-benchmark)
* [Results](#results)
* [Training accuracy results](#training-accuracy-results)
* [Training accuracy: NVIDIA DGX A100 (8x A100 80GB)](#training-accuracy-nvidia-dgx-a100-8x-a100-80gb)
* [Training accuracy: NVIDIA DGX-1 (8x V100 16GB)](#training-accuracy-nvidia-dgx-1-8x-v100-16gb)
* [Training loss plot](#training-loss-plot)
* [Training stability test](#training-stability-test)
* [Training performance results](#training-performance-results)
* [Training performance: NVIDIA DGX A100 (8x A100 80GB)](#training-performance-nvidia-dgx-a100-8x-a100-80gb)
* [Training performance: NVIDIA DGX-1 (8x V100 16G)](#training-performance-nvidia-dgx-1-8x-v100-16gb)
* [Inference performance results](#inference-performance-results)
* [Inference performance: NVIDIA DGX A100 (1x A100 80GB)](#inference-performance-nvidia-dgx-a100-1x-a100-80gb)
* [Inference performance: NVIDIA DGX-1 (1x V100 16GB)](#inference-performance-nvidia-dgx-1-1x-v100-16gb)
- [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
The SSD300 v1.1 model is based on the
[SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325) paper, which
describes SSD as "a method for detecting objects in images using a single deep neural network".
The input size is fixed to 300x300.
The main difference between this model and the one described in the paper is in the backbone.
Specifically, the VGG model is obsolete and is replaced by the ResNet-50 model.
From the
[Speed/accuracy trade-offs for modern convolutional object detectors](https://arxiv.org/abs/1611.10012)
paper, the following enhancements were made to the backbone:
* The conv5_x, avgpool, fc and softmax layers were removed from the original classification model.
* All strides in conv4_x are set to 1x1.
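A minimal, illustrative sketch of deriving such a backbone with torchvision is shown below; the actual implementation lives in `ssd/model.py` and may differ in details.
```
import torch.nn as nn
import torchvision.models as models
# Illustrative sketch only; see ssd/model.py for the actual implementation.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
# keep layers up to and including conv4_x (layer3); drop conv5_x, avgpool and fc
feature_extractor = nn.Sequential(*list(backbone.children())[:7])
# set every 2x2 stride inside conv4_x to 1x1 so the feature map stays larger
for module in feature_extractor[6].modules():
    if isinstance(module, nn.Conv2d) and module.stride == (2, 2):
        module.stride = (1, 1)
```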
Detector heads are similar to the ones referenced in the paper, however,
they are enhanced by additional BatchNorm layers after each convolution.
Additionally, we removed weight decay on every bias parameter and
all the BatchNorm layer parameters as described in the
[Highly Scalable Deep Learning Training System with Mixed-Precision:
Training ImageNet in Four Minutes](https://arxiv.org/abs/1807.11205) paper.
Training of SSD requires computationally costly augmentations.
To fully utilize GPUs during training we are using the
[NVIDIA DALI](https://github.com/NVIDIA/DALI) library
to accelerate data preparation pipelines.
This model is trained with mixed precision using Tensor Cores on Volta, Turing,
and the NVIDIA Ampere GPU architectures. Therefore, researchers can get results
2x faster than training without Tensor Cores, while experiencing the benefits of
mixed precision training. This model is tested against each NGC monthly
container release to ensure consistent accuracy and performance over time.
### Model architecture
Despite the changes described in the previous section,
the overall architecture, as described in the following diagram, has not changed.
<p align="center">
<img width="90%" src="./img/ssd_diagram.png" />
<br>
Figure 1. The architecture of a Single Shot MultiBox Detector model. Image has been taken from the <a href="https://arxiv.org/abs/1512.02325">Single Shot MultiBox Detector paper</a>.
</p>
The backbone is followed by 5 additional convolutional layers.
In addition to the convolutional layers, we attached 6 detection heads:
* The first detection head is attached to the last conv4_x layer.
* The other five detection heads are attached to the corresponding 5 additional layers.
### Default configuration
We trained the model for 65 epochs with the following setup:
* SGD with momentum (0.9)
* Learning rate = 2.6e-3 * number of GPUs * (batch_size / 32)
* Learning rate decay – multiply by 0.1 before 43 and 54 epochs
* We use linear warmup of the learning rate during the first epoch.
For more information, see the
[Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour](https://arxiv.org/abs/1706.02677) paper.
To enable warmup, provide the argument `--warmup 300`
* Weight decay:
* 0 for BatchNorms and biases
* 5e-4 for other layers
**Note**: The learning rate is automatically scaled (in other words, multiplied
by the number of GPUs and multiplied by the batch size divided by 32).
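For example, with 8 GPUs and a per-GPU batch size of 64, the initial learning rate becomes 2.6e-3 * 8 * (64 / 32) = 0.0416. A minimal sketch of this rule, assuming `batch_size` denotes the per-GPU batch size:
```
def scaled_lr(base_lr, num_gpus, batch_size_per_gpu):
    # learning-rate scaling rule described above
    return base_lr * num_gpus * (batch_size_per_gpu / 32)
print(scaled_lr(2.6e-3, num_gpus=8, batch_size_per_gpu=64))  # ~0.0416
```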
### Feature support matrix
The following features are supported by this model.
| **Feature** | **SSD300 v1.1 PyTorch** |
|:---------:|:----------:|
|[AMP](https://pytorch.org/docs/stable/amp.html) | Yes |
|[APEX DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) | Yes |
|[NVIDIA DALI](https://docs.nvidia.com/deeplearning/sdk/dali-release-notes/index.html) | Yes |
#### Features
[AMP](https://pytorch.org/docs/stable/amp.html) is an abbreviation used for automatic mixed precision training.
[DDP](https://nvidia.github.io/apex/parallel.html) stands for DistributedDataParallel and is used for multi-GPU training.
[NVIDIA DALI](https://docs.nvidia.com/deeplearning/sdk/dali-release-notes/index.html) - DALI is a library accelerating data preparation pipeline.
To accelerate your input pipeline, you only need to define your data loader
with the DALI library.
For details, see example sources in this repo or see
the [DALI documentation](https://docs.nvidia.com/deeplearning/sdk/dali-developer-guide/docs/index.html)
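As a rough illustration of the idea (not the actual pipeline from `ssd/coco_pipeline.py`, which additionally performs SSD random crop, flipping, color jitter, and box encoding), a minimal DALI pipeline could look like the sketch below; the paths are placeholders.
```
from nvidia.dali import pipeline_def, fn
@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def coco_pipeline(file_root, annotations_file):
    jpegs, bboxes, labels = fn.readers.coco(
        file_root=file_root, annotations_file=annotations_file, random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")      # GPU-accelerated JPEG decoding
    images = fn.resize(images, resize_x=300, resize_y=300)
    return images, bboxes, labels
pipe = coco_pipeline(file_root="/coco/train2017",
                     annotations_file="/coco/annotations/instances_train2017.json")
pipe.build()
```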
### Mixed precision training
Mixed precision is the combined use of different numerical precisions in
a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740)
training offers significant computational speedup by performing operations
in half-precision format, while storing minimal information in single-precision
to retain as much information as possible in critical parts of the network.
Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores)
in Volta, and following with both the Turing and Ampere architectures, significant training speedups are
experienced by switching to mixed precision -- up to 3x overall speedup
on the most arithmetically intense model architectures. Using mixed precision
training requires two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
The ability to train deep learning networks with lower precision was introduced
in the Pascal architecture and first supported in [CUDA 8](https://devblogs.nvidia.com/parallelforall/tag/fp16/)
in the NVIDIA Deep Learning SDK.
For information about:
- How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740)
paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html)
documentation.
- Techniques used for mixed precision training, see the [Mixed-Precision
Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/)
blog.
- PyTorch AMP, see the [PyTorch Automatic Mixed Precision package](https://pytorch.org/docs/stable/amp.html).
#### Enabling mixed precision
Mixed precision is enabled in PyTorch by using the Automatic Mixed Precision (AMP)
autocast [torch.cuda.amp.autocast](https://pytorch.org/docs/stable/amp.html#autocasting) which casts variables
to half-precision upon retrieval, while storing variables in single-precision format.
Furthermore, to preserve small gradient magnitudes in backpropagation,
a [gradient scaling](https://pytorch.org/docs/stable/amp.html#gradient-scaling)
step must be included.
For an in-depth walk through on AMP, check out sample usage
[here](https://pytorch.org/docs/stable/amp.html).
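A minimal sketch of this pattern follows; it is illustrative only, `model`, `optimizer`, and `loader` are assumed to exist, and the actual loop in `main.py` may differ.
```
import torch
scaler = torch.cuda.amp.GradScaler()
for images, targets in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(images, targets)   # forward pass runs in mixed precision
    scaler.scale(loss).backward()       # scale the loss to preserve small gradients
    scaler.step(optimizer)
    scaler.update()
```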
#### Enabling TF32
TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
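If you need to control TF32 explicitly (for example, to compare against strict FP32), PyTorch exposes global switches for it; note that this is a generic PyTorch setting rather than a flag specific to this repository.
```
import torch
# TF32 is enabled by default on Ampere GPUs; set these to False to force strict FP32
torch.backends.cuda.matmul.allow_tf32 = True   # matrix multiplications
torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions
```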
### Glossary
backbone
: a part of many object detection architectures, usually pre-trained for a different,
simpler task, such as classification.
input pipeline
: set of operations performed for every item in input data before feeding the neural
network. Especially for object detection task, the input pipeline can be complex
and computationally significant. For that reason, solutions like NVIDIA DALI emerged.
object detection
: a subset of Computer Vision problem. The task of object detection is to localize
possibly multiple objects on the image and classify them. The difference between
Object Detection, Image Classification, and Localization are clearly explained in the
video published as a part of the [C4W3L01 course](https://www.youtube.com/watch?v=GSwYGkTfOKk).
SSD (Single Shot MultiBox Detector)
: a name for the detection model described in a [paper authored by Liu et al.](https://arxiv.org/abs/1512.02325)
ResNet (ResNet-50)
: a name for the classification model described in a [paper authored by He et al.](https://arxiv.org/abs/1512.03385)
In this repo, it is used as a backbone for SSD.
## Setup
The following section lists the requirements in order to start training the SSD300 v1.1 model.
### Requirements
This repository contains `Dockerfile` which extends the PyTorch 22.10 NGC container
and encapsulates some dependencies. Aside from these dependencies,
ensure you have the following software:
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [PyTorch 22.10 NGC container](https://ngc.nvidia.com/registry/nvidia-pytorch)
* GPU-based architecture:
* [NVIDIA Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
* [NVIDIA Turing](https://www.nvidia.com/en-us/geforce/turing/)
* [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, see the
following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning
Documentation:
* [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
* [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/dgx/user-guide/index.html#accessing_registry)
* [Running PyTorch](https://docs.nvidia.com/deeplearning/dgx/pytorch-release-notes/running.html#running)
For those unable to use the [PyTorch 22.10 NGC container](https://ngc.nvidia.com/registry/nvidia-pytorch),
to set up the required environment or create your own container,
see the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html).
## Quick Start Guide
To train your model using mixed or TF32 precision with Tensor Cores or using FP32,
perform the following steps using the default parameters of the SSD v1.1 model
on the [COCO 2017](http://cocodataset.org/#download) dataset.
For the specifics concerning training and inference,
see the [Advanced](#advanced) section.
1. Clone the repository.
```
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/Detection/SSD
```
2. Download and preprocess the dataset.
Extract the COCO 2017 dataset with `download_dataset.sh $COCO_DIR`.
Data will be downloaded to the `$COCO_DIR` directory (on the host).
3. Build the SSD300 v1.1 PyTorch NGC container.
```
docker build . -t nvidia_ssd
```
4. Start an interactive session in the NGC container to run training/inference.
```
docker run --rm -it --gpus=all --ipc=host -v $COCO_DIR:/coco nvidia_ssd
```
**Note**: the default mount point in the container is `/coco`.
5. Start training.
The `./examples` directory provides several sample scripts for various GPU settings
that act as wrappers around the `main.py` script.
The example scripts need two arguments:
- A path to the root SSD directory.
- A path to the COCO 2017 dataset.
Remaining arguments are passed to the `main.py` script.
The `--save save_dir` flag, saves the model after each epoch in `save_dir` directory.
The checkpoints are stored as `<save_dir>/epoch_*.pt`.
Use `python main.py -h` to obtain the list of available options in the `main.py` script.
For example, if you want to run 8 GPU training with Tensor Core acceleration and
save checkpoints after each epoch, run:
```
bash ./examples/SSD300_FP16_8GPU.sh . /coco --save $SSD_CHECKPOINT_PATH
```
6. Start validation/evaluation.
The `main.py` training script automatically runs validation during training.
The results from the validation are printed to `stdout`.
To evaluate a checkpointed model saved in the previous point, run:
```
python ./main.py --backbone resnet50 --mode evaluation --checkpoint ./models/epoch_*.pt --data /coco
```
7. Optionally, resume training from a checkpointed model.
```
python ./main.py --backbone resnet50 --checkpoint ./models/epoch_*.pt --data /coco
```
8. Start inference/predictions.
You can check your trained model with a Jupyter notebook provided in the examples directory.
Start by running a Docker container with a Jupyter notebook server:
```
docker run --rm -it --gpus=all --ipc=host -v $SSD_CHECKPOINT_PATH:/checkpoints/SSD300v1.1.pt -v $COCO_PATH:/datasets/coco2017 -p 8888:8888 nvidia_ssd jupyter-notebook --ip 0.0.0.0 --allow-root
```
## Advanced
The following sections provide greater details of the dataset,
running training and inference, and the training results.
### Scripts and sample code
In the root directory, the most important files are:
- `main.py`: the script that controls the logic of training and validation of the SSD300 v1.1 model;
- `Dockerfile`: Instructions for docker to build a container with the basic set of dependencies to run SSD300 v1.1;
- `requirements.txt`: a set of extra Python requirements for running SSD300 v1.1;
- `download_dataset.sh`: automatically downloads the COCO dataset for training.
The `ssd/` directory contains modules used to train and evaluate the SSD300 v1.1 model
- `model.py`: the definition of SSD300 v1.1 model
- `data.py`: definition of input pipelines used in training and evaluation
- `train.py`: functions used to train the SSD300 v1.1 model
- `evaluate.py`: functions used to evaluate the SSD300 v1.1 model
- `coco_pipeline.py`: definition of input pipeline using NVIDIA DALI
- `coco.py`: code specific for the COCO dataset
- `logger.py`: utilities for logging
- `utils.py`: extra utility functions
The `examples/` directory contains scripts wrapping common scenarios.
### Parameters
#### The script `main.py`
The script for training and evaluating the SSD300 v1.1 model has a variety
of parameters that control these processes.
##### Common parameters
`--data`
: use it to specify, where your dataset is. By default, the script will look for it
under the `/coco` directory.
`--checkpoint`
: allows you to specify the path to the pre-trained model.
`--save`
: when the flag is turned on, the script will save the trained model checkpoints in the specified directory
`--seed`
: Use it to specify the seed for RNGs.
`--amp`
: when the flag is turned on, the AMP features will be enabled.
##### Training related
`--epochs`
: a number of times the model will see every example from the training dataset.
`--evaluation`
: after this parameter, list the number of epochs after which evaluation should
be performed.
`--learning-rate`
: initial learning rate.
`--multistep`
: after this parameter, list the epochs after which learning rate should be decayed.
`--warmup`
: allows you to specify the number of iterations for which a linear learning-rate
warmup will be performed.
`--momentum`
: momentum argument for SGD optimizer.
`--weight-decay`
: weight decay argument for SGD optimizer.
`--batch-size`
: a number of inputs processed at once for each iteration.
`--backbone-path`
: the path to the checkpointed backbone. When it is not provided, a pre-trained model from torchvision
will be downloaded.
##### Evaluation related
`--eval-batch-size`
: a number of inputs processed at once for each iteration.
##### Utility parameters
`--help`
: displays a short description of all parameters accepted by the script.
### Command-line options
All these parameters can be controlled by passing command-line arguments
to the `main.py` script. To get a complete list of all command-line arguments
with descriptions and default values you can run:
```
python main.py --help
```
### Getting the data
The SSD model was trained on the COCO 2017 dataset. The [val2017](http://cocodataset.org/#download) validation set
was used as a validation dataset. PyTorch can work directly on JPEGs,
therefore, preprocessing/augmentation is not needed.
This repository contains the `download_dataset.sh` download script which will automatically
download and preprocess the training, validation and test datasets. By default,
data will be downloaded to the `/coco` directory.
#### Dataset guidelines
Our model expects input data aligned in a way a COCO dataset is aligned by the `download_dataset.sh` script.
`train2017` and `val2017` directories should contain images in JPEG format.
Annotation format is described in [the COCO documentation](http://cocodataset.org/#format-data).
The preprocessing of the data is defined in the `ssd/coco_pipeline.py` module.
##### Data preprocessing
Before we feed data to the model, both during training and inference, we perform:
* JPEG decoding
* normalization with mean = `[0.485, 0.456, 0.406]` and std dev = `[0.229, 0.224, 0.225]`
* encoding bounding boxes
* resizing to 300x300
Additionally, during training, data is:
* randomly shuffled
* samples without annotations are skipped
##### Data augmentation
During training we perform the following augmentation techniques:
* Random crop using the algorithm described in the [SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325) paper
* Random horizontal flip
* Color jitter
### Training process
Training the SSD model is implemented in the `main.py` script.
By default, training runs for 65 epochs. Because evaluation is relatively time consuming,
it does not run after every epoch. With default settings, evaluation is executed after epochs:
21, 31, 37, 42, 48, 53, 59, 64. The model is evaluated using pycocotools distributed with
the COCO dataset.
The set of epochs after which evaluation runs can be reconfigured with the `--evaluation` argument.
To run training with Tensor Cores, use the `--amp` flag when running the `main.py` script.
The `--save ./models` flag enables storing checkpoints after each epoch under `./models/epoch_*.pt`.
### Evaluation process
Pycocotools' open-sourced scripts provide a consistent way
to evaluate models on the COCO dataset. We use these scripts
during validation to measure a model's performance in the AP metric.
Metrics below are evaluated using pycocotools’ methodology, in the following format:
```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.27205
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.45869
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.27884
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.08275
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.29840
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.42722
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.25092
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.36528
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.38262
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.13577
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.42287
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.57277
```
The metric reported in our results is present in the first row.
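For reference, the snippet below is a minimal sketch of producing such a summary with pycocotools; the repository wraps this logic in `ssd/evaluate.py`, and the file names here are placeholders.
```
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
coco_gt = COCO("/coco/annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")                # detections in COCO results format
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()        # prints the table shown above
print(coco_eval.stats[0])    # AP at IoU=0.50:0.95 -- the metric we report
```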
### Inference process
Our scripts for SSD300 v1.1 present two ways to run inference.
To get meaningful results, you need a pre-trained model checkpoint.
One way is to run an interactive session in a Jupyter notebook, as described in step 8 of the [Quick Start Guide](#quick-start-guide).
The container prints Jupyter notebook logs like this:
```
[I 16:17:58.935 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 16:17:59.769 NotebookApp] JupyterLab extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
[I 16:17:59.769 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 16:17:59.770 NotebookApp] Serving notebooks from local directory: /workspace
[I 16:17:59.770 NotebookApp] The Jupyter Notebook is running at:
[I 16:17:59.770 NotebookApp] http://(65935d756c71 or 127.0.0.1):8888/?token=04c78049c67f45a4d759c8f6ddd0b2c28ac4eab60d81be4e
[I 16:17:59.770 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 16:17:59.774 NotebookApp] No web browser found: could not locate runnable browser.
[C 16:17:59.774 NotebookApp]
To access the notebook, open this file in a browser:
file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
Or copy and paste one of these URLs:
http://(65935d756c71 or 127.0.0.1):8888/?token=04c78049c67f45a4d759c8f6ddd0b2c28ac4eab60d81be4e
```
Use the token printed in the last line to start your notebook session.
The notebook is in `examples/inference.ipynb`, for example:
http://127.0.0.1:8888/notebooks/examples/inference.ipynb?token=04c78049c67f45a4d759c8f6ddd0b2c28ac4eab60d81be4e
Another way is to run a script `examples/SSD300_inference.py`. It contains the logic from the notebook, wrapped into a Python script. The script contains sample usage.
To use the inference example script in your own code, you can call the `main` function, providing input image URIs as an argument. The result will be a list of detections for each input image.
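A hypothetical sketch of such a call is shown below; the exact signature of `main` may differ, so check the sample usage at the bottom of `examples/SSD300_inference.py`.
```
# hypothetical usage sketch -- verify the real signature in examples/SSD300_inference.py
from SSD300_inference import main
image_uris = ["http://images.cocodataset.org/val2017/000000397133.jpg"]
detections = main(image_uris)   # one list of detections per input image
print(detections[0])
```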
## Performance
The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to [NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference).
### Benchmarking
The following section shows how to run benchmarks measuring the model performance in training and inference modes.
#### Training performance benchmark
The training benchmark was run in various scenarios on A100 80GB and V100 16G GPUs. The benchmark does not require a checkpoint from a fully trained model.
To benchmark training, run:
```
torchrun --nproc_per_node={NGPU} \
main.py --batch-size {bs} \
--mode benchmark-training \
--benchmark-warmup 100 \
--benchmark-iterations 200 \
{AMP} \
--data {data}
```
Where the `{NGPU}` selects number of GPUs used in benchmark, the `{bs}` is the desired
batch size, the `{AMP}` is set to `--amp` if you want to benchmark training with
Tensor Cores, and the `{data}` is the location of the COCO 2017 dataset.
`--benchmark-warmup` is specified to omit the first iteration of the first epoch.
`--benchmark-iterations` is a number of iterations used to measure performance.
#### Inference performance benchmark
Inference benchmark was run on 1x A100 80GB GPU and 1x V100 16G GPU. To benchmark inference, run:
```
python main.py --eval-batch-size {bs} \
--mode benchmark-inference \
--benchmark-warmup 100 \
--benchmark-iterations 200 \
{AMP} \
--data {data}
```
Where the `{bs}` is the desired batch size, the `{AMP}` is set to `--amp` if you want to benchmark inference with Tensor Cores, and the `{data}` is the location of the COCO 2017 dataset.
`--benchmark-warmup` is specified to omit the first iterations of the first epoch. `--benchmark-iterations` is a number of iterations used to measure performance.
### Results
The following sections provide details on how we achieved our performance and accuracy in training and inference.
#### Training accuracy results
##### Training accuracy: NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `./examples/SSD300_A100_{FP16,TF32}_{1,4,8}GPU.sh`
script in the `pytorch-22.10-py3` NGC container on NVIDIA DGX A100 (8x A100 80GB) GPUs.
|GPUs |Batch size / GPU|Accuracy - TF32|Accuracy - mixed precision|Time to train - TF32|Time to train - mixed precision|Time to train speedup (TF32 to mixed precision)|
|-----------|----------------|---------------|---------------------------|--------------------|--------------------------------|------------------------------------------------|
|1 |64 |0.271 |0.272 |03:19:59 |03:18:35 |100% |
|4 |64 |0.270 |0.270 |00:51:22 |00:51:31 | 99% |
|8 |64 |0.270 |0.269 |00:26:10 |00:26:10 | 99% |
|1 |128 |0.274 |0.271 |03:03:56 |03:03:50 |100% |
|4 |128 |0.272 |0.270 |00:46:51 |00:47:01 | 99% |
|8 |128 |0.267 |0.267 |00:23:44 |00:23:46 | 99% |
|1 |256 |0.272 |0.272 |02:56:37 |02:56:44 | 99% |
|4 |256 |0.271 |0.267 |00:45:05 |00:45:07 | 99% |
|8 |256 |0.260 |0.258 |00:22:49 |00:22:56 |100% |
##### Training accuracy: NVIDIA DGX-1 (8x V100 16GB)
Our results were obtained by running the `./examples/SSD300_FP{16,32}_{1,4,8}GPU.sh`
script in the `pytorch-22.10-py3` NGC container on NVIDIA DGX-1 with 8x
V100 16GB GPUs.
|GPUs |Batch size / GPU|Accuracy - FP32|Accuracy - mixed precision|Time to train - FP32|Time to train - mixed precision|Time to train speedup (FP32 to mixed precision)|
|-----------|----------------|---------------|---------------------------|--------------------|--------------------------------|------------------------------------------------|
|1 |32 |0.269 |0.271 |20:04:48 |07:25:27 |270% |
|4 |32 |0.270 |0.269 |05:08:56 |01:58:41 |260% |
|8 |32 |0.271 |0.269 |02:35:00 |01:00:27 |256% |
|1 |64 |<N/A> |0.272 |<N/A> |06:47:58 |<N/A> |
|4 |64 |<N/A> |0.270 |<N/A> |01:46:34 |<N/A> |
|8 |64 |<N/A> |0.269 |<N/A> |00:53:52 |<N/A> |
Because mixed precision models use less memory, they can be trained with larger batches. In such cases, the mixed precision speedup is calculated against FP32 training with the maximum batch size for that precision.
##### Training loss plot
Here are example graphs of FP32, TF32 and AMP training on 8 GPU configuration:

##### Training stability test
The SSD300 v1.1 model was trained for 65 epochs, starting
from 15 different initial random seeds. The training was performed in the `pytorch-22.10-py3` NGC container on
NVIDIA DGX A100 8x A100 80GB GPUs with batch size per GPU = 128.
After training, the models were evaluated on the test dataset. The following
table summarizes the final mAP on the test set.
|**Precision**|**Average mAP**|**Standard deviation**|**Minimum**|**Maximum**|**Median**|
|------------:|--------------:|---------------------:|----------:|----------:|---------:|
| AMP | 0.2679503039 | 0.001360494012 | 0.26201 | 0.27013 | 0.26529 |
| TF32 | 0.2670691823 | 0.001639394102 | 0.26181 | 0.27274 | 0.26492 |
#### Training performance results
##### Training performance: NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `main.py` script with the `--mode
benchmark-training` flag in the `pytorch-22.10-py3` NGC container on NVIDIA
DGX A100 (8x A100 80GB) GPUs. Performance numbers (in items/images per second)
were averaged over an entire training epoch.
|GPUs |Batch size / GPU|Throughput - TF32|Throughput - mixed precision|Throughput speedup (TF32 - mixed precision)|Weak scaling - TF32 |Weak scaling - mixed precision |
|-----------|----------------|-----------------|-----------------------------|-------------------------------------------|--------------------------------|------------------------------------------------|
|1 |64 | 364.27 | 662.91 |181% |100% |100% |
|4 |64 |1432.73 |2581.24 |180% |393% |389% |
|8 |64 |2838.76 |5252.84 |185% |779% |792% |
|1 |128 | 377.18 | 724.41 |192% |100% |100% |
|4 |128 |1493.13 |2885.55 |193% |395% |398% |
|8 |128 |2967.23 |5733.98 |193% |786% |791% |
To achieve these same results, follow the [Quick Start Guide](#quick-start-guide) outlined above.
##### Training performance: NVIDIA DGX-1 (8x V100 16GB)
Our results were obtained by running the `main.py` script with the `--mode
benchmark-training` flag in the `pytorch-22.10-py3` NGC container on NVIDIA
DGX-1 with 8x V100 16GB GPUs. Performance numbers (in items/images per second)
were averaged over an entire training epoch.
|GPUs |Batch size / GPU|Throughput - FP32|Throughput - mixed precision|Throughput speedup (FP32 - mixed precision)|Weak scaling - FP32 |Weak scaling - mixed precision |
|-----------|----------------|-----------------|-----------------------------|-------------------------------------------|--------------------------------|------------------------------------------------|
|1 |32 |107.22 | 296.80 |276% |100% |100% |
|4 |32 |419.54 |1115.59 |265% |391% |375% |
|8 |32 |840.35 |2153.96 |256% |783% |725% |
|1 |64 |<N/A> | 322.81 |<N/A> |<N/A> |100% |
|4 |64 |<N/A> |1238.27 |<N/A> |<N/A> |383% |
|8 |64 |<N/A> |2520.50 |<N/A> |<N/A> |780% |
Because mixed precision models use less memory, they can be trained with larger batches. In such cases, the mixed precision speedup is calculated against FP32 training with the maximum batch size for that precision.
To achieve these same results, follow the [Quick Start Guide](#quick-start-guide) outlined above.
#### Inference performance results
##### Inference performance: NVIDIA DGX A100 (1x A100 80GB)
Our results were obtained by running the `main.py` script with `--mode
benchmark-inference` flag in the pytorch-22.10-py3 NGC container on NVIDIA
DGX A100 (1x A100 80GB) GPU.
|Batch size |Throughput - TF32|Throughput - mixed precision|Throughput speedup (TF32 - mixed precision)|Weak scaling - TF32 |Weak scaling - mixed precision |
|-----------|-----------------|-----------------------------|-------------------------------------------|--------------------|--------------------------------|
|1 |158.83 | 142.67 | 89% |100% |100% |
|2 |308.31 | 261.21 | 84% |194% |183% |
|4 |481.69 | 454.95 | 94% |303% |318% |
|8 |597.72 | 742.05 |124% |376% |520% |
|16 |590.44 | 887.01 |150% |371% |621% |
|32 |708.97 | 970.27 |136% |446% |680% |
|64 |798.16 |1057.51 |132% |502% |741% |
To achieve these same results, follow the [Quick Start Guide](#quick-start-guide) outlined above.
##### Inference performance: NVIDIA DGX-1 (1x V100 16GB)
Our results were obtained by running the `main.py` script with `--mode
benchmark-inference` flag in the pytorch-22.10-py3 NGC container on NVIDIA
DGX-1 with (1x V100 16GB) GPU.
|Batch size |Throughput - FP32|Throughput - mixed precision|Throughput speedup (FP32 - mixed precision)|Weak scaling - FP32 |Weak scaling - mixed precision |
|-----------|-----------------|-----------------------------|-------------------------------------------|--------------------|--------------------------------|
|1 | 93.21 | 84.59 | 90% |100% |100% |
|2 |148.61 |165.30 |111% |159% |195% |
|4 |206.82 |304.77 |147% |221% |360% |
|8 |242.55 |447.25 |184% |260% |528% |
|16 |292.44 |541.05 |185% |313% |639% |
|32 |311.61 |605.30 |194% |334% |715% |
To achieve these same results, follow the [Quick Start Guide](#quick-start-guide) outlined above.
## Release notes
### Changelog
October 2022
* upgrade the PyTorch container to 22.10
* switched to using torchvision IMAGENET1K_V2 backbone weights
* added a flag to control for torchvision weight enums
* added a flag to control TF32 computations
* fixed various deprecation warnings
* set `TORCH_CUDNN_V8_API_ENABLED` environment variable which replaces `CUDNN_V8_API_ENABLED` from older containers
* updated [nv-cocoapi](https://github.com/NVIDIA/cocoapi/) from 0.6.0 to 0.7.3
* updated python dependencies
June 2022
* upgrade the PyTorch container to 22.05
* fixed DALI deprecation warnings
January 2022
* upgrade the PyTorch container to 22.01
* made AMP the default data precision
* added --data-layout option (channels_first is the recommended layout with --no-amp)
* updated README with new performance numbers
November 2021
* upgrade the PyTorch container to 21.11
* switched data layout from NCHW (channels first) to NHWC (channels last)
* replaced `torch.distributed.launch` with `torchrun`
* updated README with new performance numbers
May 2021
* upgrade the PyTorch container to 21.05
* replaced APEX AMP with native PyTorch AMP
* updated [nv-cocoapi](https://github.com/NVIDIA/cocoapi/) from 0.4.0 to 0.6.0
* code updated to use DALI 1.2.0
April 2021
* upgrade the PyTorch container to 21.04
* changed python package naming
March 2021
* upgrade the PyTorch container to 21.03
* code updated to use DALI 0.30.0
* use DALI [BoxEncoder](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/supported_ops.html#nvidia.dali.ops.BoxEncoder) instead of a CUDA extension
* replaced [cocoapi](https://github.com/cocodataset/cocoapi) with [nv-cocoapi](https://github.com/NVIDIA/cocoapi/)
June 2020
* upgrade the PyTorch container to 20.06
* update performance tables to include A100 results
* update examples with A100 configs
August 2019
* upgrade the PyTorch container to 19.08
* update Results section in the README
* code updated to use DALI 0.12.0
* checkpoint loading fix
* fixed links in the README
July 2019
* script and notebook for inference
* use AMP instead of hand-crafted FP16 support
* README update
* introduced a parameter with a path to the custom backbone checkpoint
* minor enhancements of `examples/*` scripts
* alignment to changes in PyTorch 19.06
March 2019
* Initial release
## Known issues
There are no known issues with this model.
|
TensorFlow/Detection/SSD | SSD | requirements | cython==0.29.24
pycocotools==2.0.2
contextlib2==21.6.0
|
PyTorch/SpeechRecognition/QuartzNet/scripts/docker | docker | launch | #!/bin/bash
SCRIPT_DIR=$(cd $(dirname $0); pwd)
QN_REPO=${QN_REPO:-"${SCRIPT_DIR}/../.."}
DATA_DIR=${1:-${DATA_DIR-${QN_REPO}"/datasets"}}
RESULT_DIR=${2:-${RESULT_DIR:-${QN_REPO}"/results"}}
SCRIPT=${3:-${SCRIPT:-""}}
MOUNTS=""
MOUNTS+=" -v $DATA_DIR:/datasets"
MOUNTS+=" -v $RESULT_DIR:/results"
MOUNTS+=" -v ${QN_REPO}:/quartznet"
docker run -it --rm --gpus all\
--env PYTHONDONTWRITEBYTECODE=1 \
--shm-size=4g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
$MOUNTS \
-w /quartznet \
quartznet:latest bash $SCRIPT
|
TensorFlow2/Classification/ConvNets/efficientnet_v1/B4/training/AMP | AMP | convergence_8xV100-32G | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
horovodrun -np 8 bash ./scripts/bind.sh --cpu=exclusive --ib=single -- python3 main.py \
--cfg config/efficientnet_v1/b4_cfg.py \
--mode train_and_eval \
--use_amp \
--use_xla \
--model_dir ./output \
--data_dir /data \
--log_steps 100 \
--max_epochs 500 \
--save_checkpoint_freq 5 \
--train_batch_size 64 \
--eval_batch_size 64 \
--train_img_size 380 \
--eval_img_size 380 \
--augmenter_name autoaugment \
--lr_decay cosine \
--mixup_alpha 0.2 \
--defer_img_mixing \
--moving_average_decay 0.9999 \
--lr_init 0.005
|
PyTorch/Forecasting/TFT | TFT | requirements | git+https://github.com/NVIDIA/[email protected]#egg=dllogger
pandas==1.3.4
pynvml==11.0.0
|
PyTorch/SpeechSynthesis/Tacotron2/scripts | scripts | train_tacotron2 | mkdir -p output
python -m multiproc train.py -m Tacotron2 -o ./output/ -lr 1e-3 --epochs 1501 -bs 48 --weight-decay 1e-6 --grad-clip-thresh 1.0 --cudnn-enabled --log-file nvlog.json --anneal-steps 500 1000 1500 --anneal-factor 0.1
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/deployment/hps | hps | triton_ensemble_wrapper | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# author: Tomasz Grel ([email protected])
import tritonclient.utils
import tritonclient.http
import numpy as np
import deployment.hps.constants as c
class NumpyToHpsInputConverter:
def __init__(self, categorical_sizes, fused_embedding=True):
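        # keys of each table are shifted by the cumulative size of the preceding tables,
        # so a single fused embedding table can serve all categorical features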
self.offsets = np.cumsum([0] + categorical_sizes)[:-1]
self.fused_embedding = fused_embedding
def __call__(self, numerical_features, cat_features):
batch_size = cat_features[0].shape[0]
cat_features = [f.numpy().flatten() for f in cat_features]
# add the offsets
if self.fused_embedding:
cat_features = [f + o for f, o in zip(cat_features, self.offsets)]
key_tensor = np.concatenate(cat_features, axis=0).astype(np.int64).reshape([1, -1])
if self.fused_embedding:
nkey_tensor = np.full(shape=(1, 1), fill_value=batch_size * len(cat_features), dtype=np.int32)
else:
nkey_tensor = np.full(shape=(1, len(cat_features)), fill_value=batch_size, dtype=np.int32)
numerical_features = numerical_features.numpy().astype(np.float32).reshape([1, -1])
return key_tensor, nkey_tensor, numerical_features
class RecsysTritonEnsemble:
def __init__(self, model_name, num_tables, verbose, categorical_sizes, fused_embedding=True):
self.input_converter = NumpyToHpsInputConverter(categorical_sizes, fused_embedding)
self.model_name = model_name
self.triton_client = tritonclient.http.InferenceServerClient(url="localhost:8000", verbose=verbose)
if not self.triton_client.is_server_live():
raise ValueError('Triton server is not live!')
print('triton model repo: ', self.triton_client.get_model_repository_index())
def __call__(self, inputs, sigmoid=False, training=False):
numerical_features, cat_features = list(inputs.values())
batch_size = cat_features[0].shape[0]
key_tensor, nkey_tensor, numerical_features = self.input_converter(numerical_features, cat_features)
inputs = [
tritonclient.http.InferInput(c.key_global_prefix,
key_tensor.shape,
tritonclient.utils.np_to_triton_dtype(np.int64)),
tritonclient.http.InferInput(c.numkey_global_prefix,
nkey_tensor.shape,
tritonclient.utils.np_to_triton_dtype(np.int32)),
tritonclient.http.InferInput(c.ens_numerical_features_name,
numerical_features.shape,
tritonclient.utils.np_to_triton_dtype(np.float32)),
]
inputs[0].set_data_from_numpy(key_tensor)
inputs[1].set_data_from_numpy(nkey_tensor)
inputs[2].set_data_from_numpy(numerical_features)
outputs = [tritonclient.http.InferRequestedOutput(c.ens_output_name)]
response = self.triton_client.infer(self.model_name, inputs, outputs=outputs)
result_np = response.as_numpy(c.ens_output_name)
result_np = result_np.reshape([batch_size])
return result_np
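# Minimal usage sketch (assumes a Triton server already serving the ensemble at
# localhost:8000; the model name, feature shapes and tensors below are
# illustrative only, not defined in this module):
#
#     import tensorflow as tf
#     categorical_sizes = [1000, 987, 105]          # one entry per embedding table
#     ensemble = RecsysTritonEnsemble(
#         model_name="dlrm_ensemble", num_tables=len(categorical_sizes),
#         verbose=False, categorical_sizes=categorical_sizes)
#     batch = {
#         "numerical_features": tf.zeros([64, 13], dtype=tf.float32),
#         "categorical_features": [tf.zeros([64, 1], dtype=tf.int32)
#                                  for _ in categorical_sizes],
#     }
#     probabilities = ensemble(batch)               # numpy array of shape [64]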
|
TensorFlow/Detection/SSD/models/research/slim/nets | nets | vgg | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains model definitions for versions of the Oxford VGG network.
These model definitions were introduced in the following technical report:
Very Deep Convolutional Networks For Large-Scale Image Recognition
Karen Simonyan and Andrew Zisserman
arXiv technical report, 2015
PDF: http://arxiv.org/pdf/1409.1556.pdf
ILSVRC 2014 Slides: http://www.robots.ox.ac.uk/~karen/pdf/ILSVRC_2014.pdf
CC-BY-4.0
More information can be obtained from the VGG website:
www.robots.ox.ac.uk/~vgg/research/very_deep/
Usage:
with slim.arg_scope(vgg.vgg_arg_scope()):
outputs, end_points = vgg.vgg_a(inputs)
with slim.arg_scope(vgg.vgg_arg_scope()):
outputs, end_points = vgg.vgg_16(inputs)
@@vgg_a
@@vgg_16
@@vgg_19
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
slim = tf.contrib.slim
def vgg_arg_scope(weight_decay=0.0005):
"""Defines the VGG arg scope.
Args:
weight_decay: The l2 regularization coefficient.
Returns:
An arg_scope.
"""
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_initializer=tf.zeros_initializer()):
with slim.arg_scope([slim.conv2d], padding='SAME') as arg_sc:
return arg_sc
def vgg_a(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='vgg_a',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 11-Layers version A Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
spatial_squeeze: whether or not should squeeze the spatial dimensions of the
outputs. Useful to remove unnecessary dimensions for classification.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the input to the logits layer (if num_classes is 0 or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'vgg_a', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 1, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 1, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 2, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 2, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 2, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(net, [1, 2], keep_dims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_a.default_image_size = 224
def vgg_16(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='vgg_16',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 16-Layers version D Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
spatial_squeeze: whether or not should squeeze the spatial dimensions of the
outputs. Useful to remove unnecessary dimensions for classification.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the input to the logits layer (if num_classes is 0 or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'vgg_16', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(net, [1, 2], keep_dims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_16.default_image_size = 224
def vgg_19(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='vgg_19',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 19-Layers version E Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
spatial_squeeze: whether or not should squeeze the spatial dimensions of the
outputs. Useful to remove unnecessary dimensions for classification.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'vgg_19', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 4, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(net, [1, 2], keep_dims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_19.default_image_size = 224
# Alias
vgg_d = vgg_16
vgg_e = vgg_19
|
PyTorch/Detection/Efficientdet/effdet | effdet | efficientnet_test | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from efficientnet import EfficientNet, efficientnet_configs
def test_feature_type(net, images):
output, features = net(images, features_only=True)
print("[ ... Test Type ... ] Type of output {} features {}".format(type(output), type(features)))
def test_feature_dimensions(net, images):
output, features = net(images, features_only=True)
print("[ ... Test dimension ... ] Dim of output {} features {}".format(output.size(), len(features)))
for i, x in enumerate(features):
print("[ ... Test dimension ... ] Index {} features size {}".format(i, features[i].size()))
def test_feature_info(net, images):
feature_info = net.feature_info
for i, f in enumerate(feature_info):
print("[ ... Test Feature Info ... ] Index {} features info {}".format(i, f))
def main():
global_config = efficientnet_configs['fanout']
net = EfficientNet(width_coeff=1, depth_coeff=1, dropout=0.2, num_classes=1000, global_config=global_config, out_indices=[2,3,4])
images = torch.rand((2, 3, 512, 512))
test_feature_type(net, images)
test_feature_dimensions(net, images)
test_feature_info(net, images)
print("Model Layer Names")
for n, m in net.named_modules():
print(n)
if __name__ == '__main__':
main() |
TensorFlow/Segmentation/UNet_Industrial/utils | utils | cmdline_helper | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
import argparse
from datasets import known_datasets
from model.unet import UNet_v1
from model.blocks.activation_blck import authorized_activation_fn
def _add_bool_argument(parser, name=None, default=False, required=False, help=None):
if not isinstance(default, bool):
raise ValueError()
feature_parser = parser.add_mutually_exclusive_group(required=required)
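    # Expose both `--<name>` and `--no<name>` flags sharing one destination, so
    # a boolean option can be explicitly enabled or disabled on the command line.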
feature_parser.add_argument('--' + name, dest=name, action='store_true', help=help, default=default)
feature_parser.add_argument('--no' + name, dest=name, action='store_false')
feature_parser.set_defaults(**{name: default})
def parse_cmdline():
p = argparse.ArgumentParser(description="JoC-UNet_v1-TF")
p.add_argument(
'--unet_variant',
default="tinyUNet",
choices=UNet_v1.authorized_models_variants,
type=str,
required=False,
help="""Which model size is used. This parameter control directly the size and the number of parameters"""
)
p.add_argument(
'--activation_fn',
choices=authorized_activation_fn,
type=str,
default="relu",
required=False,
help="""Which activation function is used after the convolution layers"""
)
p.add_argument(
'--exec_mode',
choices=['train', 'train_and_evaluate', 'evaluate', 'training_benchmark', 'inference_benchmark'],
type=str,
required=True,
help="""Which execution mode to run the model into"""
)
p.add_argument(
'--iter_unit',
choices=['epoch', 'batch'],
type=str,
required=True,
help="""Will the model be run for X batches or X epochs ?"""
)
p.add_argument('--num_iter', type=int, required=True, help="""Number of iterations to run.""")
p.add_argument('--batch_size', type=int, required=True, help="""Size of each minibatch per GPU.""")
p.add_argument(
'--warmup_step',
default=200,
type=int,
required=False,
help="""Number of steps considered as warmup and not taken into account for performance measurements."""
)
p.add_argument(
'--results_dir',
type=str,
required=True,
help="""Directory in which to write training logs, summaries and checkpoints."""
)
p.add_argument(
'--log_dir',
type=str,
required=False,
default="dlloger_out.json",
help="""Directory in which to write logs."""
)
_add_bool_argument(
parser=p,
name="save_eval_results_to_json",
default=False,
required=False,
help="Whether to save evaluation results in JSON format."
)
p.add_argument('--data_dir', required=False, default=None, type=str, help="Path to dataset directory")
p.add_argument(
'--dataset_name',
choices=list(known_datasets.keys()),
type=str,
required=True,
help="""Name of the dataset used in this run (only DAGM2007 is supported atm.)"""
)
p.add_argument(
'--dataset_classID',
default=None,
type=int,
required=False,
help="""ClassID to consider to train or evaluate the network (used for DAGM)."""
)
p.add_argument(
'--data_format',
choices=['NHWC', 'NCHW'],
type=str,
default="NCHW",
required=False,
help="""Which Tensor format is used for computation inside the mode"""
)
_add_bool_argument(
parser=p,
name="amp",
default=False,
required=False,
help="Enable Automatic Mixed Precision to speedup FP32 computation using tensor cores"
)
_add_bool_argument(
parser=p, name="xla", default=False, required=False, help="Enable Tensorflow XLA to maximise performance."
)
p.add_argument(
'--weight_init_method',
choices=UNet_v1.authorized_weight_init_methods,
default="he_normal",
type=str,
required=False,
help="""Which initialisation method is used to randomly intialize the model during training"""
)
p.add_argument('--learning_rate', default=1e-4, type=float, required=False, help="""Learning rate value.""")
p.add_argument(
'--learning_rate_decay_factor',
default=0.8,
type=float,
required=False,
help="""Decay factor to decrease the learning rate."""
)
p.add_argument(
'--learning_rate_decay_steps',
default=500,
type=int,
required=False,
help="""Decay factor to decrease the learning rate."""
)
p.add_argument('--rmsprop_decay', default=0.9, type=float, required=False, help="""RMSProp - Decay value.""")
p.add_argument('--rmsprop_momentum', default=0.8, type=float, required=False, help="""RMSProp - Momentum value.""")
p.add_argument('--weight_decay', default=1e-5, type=float, required=False, help="""Weight Decay scale factor""")
_add_bool_argument(
parser=p, name="use_auto_loss_scaling", default=False, required=False, help="Use AutoLossScaling with TF-AMP"
)
p.add_argument(
'--loss_fn_name',
type=str,
default="adaptive_loss",
required=False,
help="""Loss function Name to use to train the network"""
)
_add_bool_argument(
parser=p, name="augment_data", default=True, required=False, help="Choose whether to use data augmentation"
)
p.add_argument(
'--display_every',
type=int,
default=50,
required=False,
help="""How often (in batches) to print out debug information."""
)
p.add_argument(
'--debug_verbosity',
choices=[0, 1, 2],
default=0,
type=int,
required=False,
help="""Verbosity Level: 0 minimum, 1 with layer creation debug info, 2 with layer + var creation debug info."""
)
p.add_argument('--seed', type=int, default=None, help="""Random seed.""")
FLAGS, unknown_args = p.parse_known_args()
if len(unknown_args) > 0:
for bad_arg in unknown_args:
print("ERROR: Unknown command line arg: %s" % bad_arg)
raise ValueError("Invalid command line arg(s)")
return FLAGS
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/generator/tabular | tabular | gaussian_generator | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pickle
from typing import Optional, List
import cupy as cp
import numpy as np
import pandas as pd
from tqdm import tqdm
from pandas.api.types import is_integer_dtype
from sklearn.preprocessing import OrdinalEncoder
from syngen.generator.tabular.chunked_tabular_generator import ChunkedBaseTabularGenerator
from syngen.generator.utils import cuda_repeat
class GaussianGenerator(ChunkedBaseTabularGenerator):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def ordinal_encoder(self, cat_col):
encoder = OrdinalEncoder()
encoder.fit(cat_col)
return encoder
def fit(
self,
data,
categorical_columns=(),
columns: Optional[List[str]] = None,
verbose: bool = False,
):
self.column_order = columns or list(data.columns)
self.cat_fit = {}
self.categorical_columns = set(categorical_columns)
self.continuous_columns = set(self.column_order) - self.categorical_columns
num_samples = len(data)
# - multinomial distribution
cat_cols = tqdm(self.categorical_columns) if verbose else self.categorical_columns
for column in cat_cols:
enc = self.ordinal_encoder(data[column].values.reshape(-1, 1))
pvals = data[column].value_counts() / num_samples
pvals = pvals.values
self.cat_fit[column] = {
"encoder": enc,
"pvals": pvals,
'dtype': data[column].dtype,
}
self.cont_fit = {}
self.integer_continuous_columns = []
# - gaussian distribution
cont_cols = tqdm(self.continuous_columns) if verbose else self.continuous_columns
for column in cont_cols:
mean, std = data[column].mean(), data[column].std()
self.cont_fit[column] = {
"mean": mean,
"std": std,
'dtype': data[column].dtype,
}
if is_integer_dtype(data[column].dtype):
self.integer_continuous_columns.append(column)
self.fits = {**self.cat_fit, **self.cont_fit}
def sample(self, n, gpu=False, memmap_kwargs=None, start_idx=0, end_idx=None, **kwargs):
use_memmap = memmap_kwargs is not None
if use_memmap:
memmap_outfile = np.load(memmap_kwargs['filename'], mmap_mode='r+')
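            # With memmap_kwargs, samples are written straight into rows
            # start_idx:end_idx of a pre-allocated .npy file (and None is
            # returned), so chunked generation never holds all rows in memory.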
if gpu:
cont_means = []
cont_stds = []
for column in self.continuous_columns:
cont_means.append(self.fits[column]['mean'])
cont_stds.append(self.fits[column]['std'])
cont_data = cp.random.normal(
cp.array(cont_means),
cp.array(cont_stds),
size=(n, len(self.continuous_columns)),
dtype=cp.float32
)
cont_data = cp.asnumpy(cont_data)
df = pd.DataFrame(cont_data, columns=list(self.continuous_columns))
if self.integer_continuous_columns:
df[self.integer_continuous_columns] = \
df[self.integer_continuous_columns].astype(np.int32)
for column in self.categorical_columns:
sampled_data = cp.random.multinomial(n, self.fits[column]["pvals"])
sampled_data = cuda_repeat(sampled_data)
cp.random.shuffle(sampled_data)
sampled_data = cp.asnumpy(sampled_data.reshape(-1, 1))
encoder = self.fits[column]["encoder"]
sampled_data = encoder.inverse_transform(sampled_data)
df[column] = sampled_data.reshape(-1).astype(self.fits[column]["dtype"])
else:
df = pd.DataFrame()
for column in self.column_order:
if column in self.categorical_columns:
sampled_data = np.random.multinomial(n,
self.fits[column]["pvals"])
sampled_data = np.repeat(np.arange(len(sampled_data)), sampled_data)
np.random.shuffle(sampled_data)
sampled_data = sampled_data.reshape(-1, 1)
encoder = self.fits[column]["encoder"]
sampled_data = encoder.inverse_transform(sampled_data)
else:
sampled_data = np.random.normal(
self.fits[column]['mean'],
self.fits[column]['std'], n)
df[column] = sampled_data.reshape(-1).astype(self.fits[column]["dtype"])
df = df[self.column_order]
if use_memmap:
memmap_outfile[start_idx:end_idx] = df.values
return None
return df
def save(self, path):
with open(path, 'wb') as file_handler:
pickle.dump(self, file_handler, protocol=pickle.HIGHEST_PROTOCOL)
@classmethod
def load(cls, path):
with open(path, 'rb') as file_handler:
model = pickle.load(file_handler)
return model
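# Minimal usage sketch (hypothetical DataFrame and column names; CPU path):
#
#     df = pd.DataFrame({
#         "age": np.random.randint(18, 90, size=1000),
#         "country": np.random.choice(["US", "DE", "PL"], size=1000),
#     })
#     gen = GaussianGenerator()
#     gen.fit(df, categorical_columns=["country"])
#     synthetic = gen.sample(500)   # DataFrame with the same columns as `df`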
|
PyTorch/SpeechSynthesis/FastPitch/hifigan | hifigan | metrics | import time
from collections import defaultdict
class Metrics(defaultdict):
# TODO Where to measure - gpu:0 or all gpus?
def __init__(self, tb_keys=[], benchmark_epochs=10):
super().__init__(float)
# dll_tb_keys=['loss_gen', 'loss_discrim', 'loss_mel', 'took']:
self.tb_keys = tb_keys #_ = {'dll': dll_keys, 'tb': tb_keys, 'dll+tb': dll_tb_keys}
self.iter_start_time = None
self.iter_metrics = defaultdict(float)
self.epoch_start_time = None
self.epoch_metrics = defaultdict(float)
self.benchmark_epochs = benchmark_epochs
def start_epoch(self, epoch, start_timer=True):
self.epoch = epoch
if start_timer:
self.epoch_start_time = time.time()
    def start_iter(self, iter, start_timer=True):
        self.iter = iter
        self.accum_steps = 0
        self.iter_metrics.clear()
        if start_timer:
            self.iter_start_time = time.time()

    def accumulate(self, scope='iter'):
        # Move the metrics gathered so far into the per-iteration or
        # per-epoch accumulator and reset the working dict.
        tgt = {'iter': self.iter_metrics, 'epoch': self.epoch_metrics}[scope]
        for k, v in self.items():
            tgt[k] += v
        self.clear()

    def update_iter(self, metrics={}, stop_timer=True):
        if self.iter_start_time is None:
            return
        self.update(metrics)
        self.accumulate(scope='iter')
        if stop_timer:
            self.iter_metrics['took'] = time.time() - self.iter_start_time
        # fold this iteration into the running epoch totals
        for k, v in self.iter_metrics.items():
            self.epoch_metrics[k] += v

    def update_epoch(self, stop_timer=True):
        # Per-epoch summary, e.g. for DLLogger:
        #     subset='train_avg',
        #     data=OrderedDict([
        #         ('loss', epoch_loss[-1]),
        #         ('mel_loss', epoch_mel_loss[-1]),
        #         ('frames/s', epoch_num_frames[-1] / epoch_time[-1]),
        #         ('took', epoch_time[-1])]),
        if stop_timer:
            self.epoch_metrics['took'] = time.time() - self.epoch_start_time
    # --- Draft TensorBoard plumbing, kept as notes. It references trainer-side
    # names (steps, args, loss_gen_all, mel_error, meta, keys_mpd, keys_msd, h,
    # self.sws, self.start_b) and is not wired into this class yet:
    #
    #     if steps % args.stdout_interval == 0:
    #         # with torch.no_grad():
    #         #     mel_error = F.l1_loss(y_mel, y_g_hat_mel).item()
    #         took = time.time() - self.start_b
    #         self.sws['train'].add_scalar("gen_loss_total", loss_gen_all.item(), steps)
    #         self.sws['train'].add_scalar("mel_spec_error", mel_error.item(), steps)
    #         for key, val in meta.items():
    #             sw_name = 'train'
    #             for name_ in keys_mpd + keys_msd:
    #                 if name_ in key:
    #                     sw_name = 'train_' + name_
    #             key = key.replace('loss_', 'loss/')
    #             key = re.sub('mpd\d+', 'mpd-msd', key)
    #             key = re.sub('msd\d+', 'mpd-msd', key)
    #             self.sws[sw_name].add_scalar(key, val / h.batch_size, steps)

    def collect(self, target='dll+tb'):
        # Renamed from `iter_metrics` so it does not shadow the attribute of
        # the same name; `target` is kept for the planned dll/tb split.
        return {k: self.iter_metrics[k] for k in self.tb_keys}

    # Example stdout line:
    #     Steps : 40, Gen Loss Total : 57.993, Mel-Spec. Error : 47.374, s/b : 1.013
    #
    # Example DLLogger call made by the training loop:
    #     logger.log((epoch, epoch_iter, num_iters),
    #                tb_total_steps=total_iter,
    #                subset='train',
    #                data=OrderedDict([
    #                    ('loss', iter_loss),
    #                    ('mel_loss', iter_mel_loss),
    #                    ('frames/s', iter_num_frames / iter_time),
    #                    ('took', iter_time),
    #                    ('lrate', optimizer.param_groups[0]['lr'])]),
    #                )
class Meter:
def __init__(self, sink_type, scope, downstream=None, end_points=None, verbosity=dllogger.Verbosity.DEFAULT):
self.verbosity = verbosity
self.sink_type = sink_type
self.scope = scope
self.downstream = downstream
self.end_points = end_points or []
def start(self):
ds = None if self.downstream is None else self.downstream.sink
end_pt_fn = lambda x: list(map(lambda f: f(x), self.end_points)) # call all endpoint functions
self.sink = self.sink_type(end_pt_fn, ds)
def end(self):
self.sink.close()
def send(self, data):
self.sink.send(data)
def meters(self):
if self.downstream is not None:
downstream_meters = self.downstream.meters()
else:
downstream_meters = []
return [self] + downstream_meters
def add_end_point(self, new_endpoint):
self.end_points.append(new_endpoint)
def __or__(self, other):
"""for easy chaining of meters"""
if self.downstream is None:
self.downstream = other
else:
self.downstream | other
return self
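# Usage sketch: `|` chains meters so that data sent to the head meter is
# forwarded to every downstream sink (CsvSink / TensorBoardSink below are
# illustrative names, not defined in this file):
#
#     pipeline = Meter(CsvSink, scope='iter') | Meter(TensorBoardSink, scope='epoch')
#     for m in reversed(pipeline.meters()):   # start downstream sinks first
#         m.start()
#     pipeline.send({'loss_mel': 0.53, 'took': 1.2})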
|
PyTorch/LanguageModeling/BART/configs | configs | config_xsum | {
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"BartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": 2,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"eos_token_ids": [
2
],
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 62,
"max_position_embeddings": 1024,
"min_length": 11,
"model_type": "bart",
"no_repeat_ngram_size": 3,
"normalize_before": false,
"normalize_embedding": true,
"num_beams": 6,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"prefix": " ",
"replacing_rate": 0,
"scale_embedding": false,
"static_position_embeddings": false,
"student_decoder_layers": null,
"student_encoder_layers": null,
"task_specific_params": {},
"vocab_size": 50265
}
|
TensorFlow/Detection/SSD/models/research/slim/nets/mobilenet | mobilenet | mobilenet_example | #!/usr/bin/env python
# coding: utf-8
# >[Prerequisites (downloading tensorflow_models and checkpoints)](#scrollTo=T_cETKXHDTXu)
#
# >[Checkpoint based inference](#scrollTo=fxMe7_pkk_Vo)
#
# >[Frozen inference](#scrollTo=PlwvpK3ElBk6)
#
#
# # Prerequisites (downloading tensorflow_models and checkpoints)
# In[ ]:
get_ipython().system('git clone https://github.com/tensorflow/models')
# In[ ]:
from __future__ import print_function
from IPython import display
base_name = 'mobilenet_v2_1.0_224' #@param
url = 'https://storage.googleapis.com/mobilenet_v2/checkpoints/' + base_name + '.tgz'
print('Downloading from ', url)
get_ipython().system('wget {url}')
print('Unpacking')
get_ipython().system('tar -xvf {base_name}.tgz')
checkpoint = base_name + '.ckpt'
display.clear_output()
print('Successfully downloaded checkpoint from ', url,
'. It is available as', checkpoint)
# In[ ]:
get_ipython().system('wget https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG -O panda.jpg')
# In[ ]:
# setup path
import sys
sys.path.append('/content/models/research/slim')
# # Checkpoint based inference
# In[ ]:
import tensorflow as tf
from nets.mobilenet import mobilenet_v2
tf.reset_default_graph()
# For simplicity we just decode jpeg inside tensorflow.
# But one can provide any input obviously.
file_input = tf.placeholder(tf.string, ())
image = tf.image.decode_jpeg(tf.read_file(file_input))
images = tf.expand_dims(image, 0)
images = tf.cast(images, tf.float32) / 128. - 1
images.set_shape((None, None, None, 3))
images = tf.image.resize_images(images, (224, 224))
# Note: arg_scope is optional for inference.
with tf.contrib.slim.arg_scope(mobilenet_v2.training_scope(is_training=False)):
logits, endpoints = mobilenet_v2.mobilenet(images)
# Restore using exponential moving average since it produces (1.5-2%) higher
# accuracy
ema = tf.train.ExponentialMovingAverage(0.999)
vars = ema.variables_to_restore()
saver = tf.train.Saver(vars)
# In[ ]:
from IPython import display
import pylab
from datasets import imagenet
import PIL
display.display(display.Image('panda.jpg'))
with tf.Session() as sess:
saver.restore(sess, checkpoint)
x = endpoints['Predictions'].eval(feed_dict={file_input: 'panda.jpg'})
label_map = imagenet.create_readable_names_for_imagenet_labels()
print("Top 1 prediction: ", x.argmax(),label_map[x.argmax()], x.max())
# # Frozen inference
# In[ ]:
import numpy as np
img = np.array(PIL.Image.open('panda.jpg').resize((224, 224))).astype(np.float) / 128 - 1
gd = tf.GraphDef.FromString(open(base_name + '_frozen.pb', 'rb').read())
inp, predictions = tf.import_graph_def(gd, return_elements = ['input:0', 'MobilenetV2/Predictions/Reshape_1:0'])
# In[ ]:
with tf.Session(graph=inp.graph):
x = predictions.eval(feed_dict={inp: img.reshape(1, 224,224, 3)})
label_map = imagenet.create_readable_names_for_imagenet_labels()
print("Top 1 Prediction: ", x.argmax(),label_map[x.argmax()], x.max())
# In[ ]:
|
CUDA-Optimized/FastSpeech/fastspeech | fastspeech | train | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import pprint
import fire
import torch
from torch.optim.lr_scheduler import LambdaLR
from fastspeech import DEFAULT_DEVICE
from fastspeech import hparam as hp
from fastspeech.data_load import PadDataLoader
from fastspeech.dataset.ljspeech_dataset import LJSpeechDataset
from fastspeech.model.fastspeech import Fastspeech
from fastspeech.trainer.fastspeech_trainer import FastspeechTrainer
from fastspeech.utils.logging import tprint
try:
import apex
except ImportError:
    raise ImportError('Required to install apex.')
# import multiprocessing
# multiprocessing.set_start_method('spawn', True)
pp = pprint.PrettyPrinter(indent=4, width=1000)
def train(hparam="train.yaml",
device=DEFAULT_DEVICE,
**kwargs):
""" The FastSpeech model training script.
By default, this script assumes to load parameters in the default config file, fastspeech/hparams/train.yaml.
Besides the flags, you can also set parameters in the config file via the command-line. For examples,
--dataset_path=DATASET_PATH
Path to dataset directory.
--tacotron2_path=TACOTRON2_PATH
Path to tacotron2 checkpoint file.
--mels_path=MELS_PATH
Path to preprocessed mels directory.
--aligns_path=ALIGNS_PATH
Path to preprocessed alignments directory.
--log_path=LOG_PATH
Path to log directory.
--checkpoint_path=CHECKPOINT_PATH
Path to checkpoint directory. The latest checkpoint will be loaded.
--batch_size=BATCH_SIZE
Batch size to use. Defaults to 16.
Refer to fastspeech/hparams/train.yaml to see more parameters.
Args:
hparam (str, optional): Path to default config file. Defaults to "train.yaml".
        device (str, optional): Device to use. Defaults to "cuda" if available, or "cpu".
"""
hp.set_hparam(hparam, kwargs)
tprint("Hparams:\n{}".format(pp.pformat(hp)))
tprint("Device count: {}".format(torch.cuda.device_count()))
# model
model = Fastspeech(
max_seq_len=hp.max_seq_len,
d_model=hp.d_model,
phoneme_side_n_layer=hp.phoneme_side_n_layer,
phoneme_side_head=hp.phoneme_side_head,
phoneme_side_conv1d_filter_size=hp.phoneme_side_conv1d_filter_size,
phoneme_side_output_size=hp.phoneme_side_output_size,
mel_side_n_layer=hp.mel_side_n_layer,
mel_side_head=hp.mel_side_head,
mel_side_conv1d_filter_size=hp.mel_side_conv1d_filter_size,
mel_side_output_size=hp.mel_side_output_size,
duration_predictor_filter_size=hp.duration_predictor_filter_size,
duration_predictor_kernel_size=hp.duration_predictor_kernel_size,
fft_conv1d_kernel=hp.fft_conv1d_kernel,
fft_conv1d_padding=hp.fft_conv1d_padding,
dropout=hp.dropout,
n_mels=hp.num_mels,
fused_layernorm=hp.fused_layernorm
)
# dataset
dataset = LJSpeechDataset(root_path=hp.dataset_path,
meta_file=hp.meta_file,
mels_path=hp.mels_path,
aligns_path=hp.aligns_path,
sr=hp.sr,
n_fft=hp.n_fft,
win_len=hp.win_len,
hop_len=hp.hop_len,
n_mels=hp.num_mels,
mel_fmin=hp.mel_fmin,
mel_fmax=hp.mel_fmax,
)
tprint("Dataset size: {}".format(len(dataset)))
# data loader
data_loader = PadDataLoader(dataset,
batch_size=hp.batch_size,
num_workers=hp.n_workers,
drop_last=True,
)
# optimizer
def get_optimizer(model):
optimizer = torch.optim.Adam(
model.parameters(),
lr=hp.learning_rate,
betas=(0.9, 0.98),
eps=1e-9)
return optimizer
def get_warmup_lr_scheduler(optimizer):
d_model = hp.d_model
warmup_steps = hp.warmup_steps
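        # Transformer-style (Noam) warm-up: the rate grows linearly for
        # `warmup_steps`, then decays as step**-0.5, scaled by d_model**-0.5.
        # Dividing by hp.learning_rate cancels the base LR that LambdaLR
        # multiplies back in, so the lambda defines the absolute rate.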
lr = lambda step: d_model ** -0.5 * min((step + 1) ** -0.5,
(step + 1) * warmup_steps ** -1.5) / hp.learning_rate
scheduler = LambdaLR(optimizer, lr_lambda=[lr])
return scheduler
# trainer
trainer = FastspeechTrainer(data_loader,
'fastspeech',
model,
optimizer_fn=get_optimizer,
final_steps=hp.final_steps,
log_steps=hp.log_step,
ckpt_path=hp.checkpoint_path,
save_steps=hp.save_step,
log_path=hp.log_path,
lr_scheduler_fn=get_warmup_lr_scheduler,
pre_aligns=True if hp.aligns_path else False,
device=device,
use_amp=hp.use_amp,
nvprof_iter_start=hp.nvprof_iter_start,
nvprof_iter_end=hp.nvprof_iter_end,
pyprof_enabled=hp.pyprof_enabled,
)
trainer.train()
if __name__ == '__main__':
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = False
fire.Fire(train)
|
PyTorch/Segmentation/nnUNet/triton/scripts/docker | docker | interactive | #Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
docker run -it --rm \
--gpus "device=all" \
--net=host \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
-e WORKDIR="$(pwd)" \
-e PYTHONPATH=$(pwd) \
-v $(pwd):$(pwd) \
-v /mnt/nvdl/usr/jzarzycki/nnunet_pyt/results:/data \
-v /mnt/nvdl/usr/jzarzycki/nnunet_pyt/results:/results \
-w $(pwd) \
nnunet:latest bash
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/scripts/import_utils | import_utils | waveglow | #!/usr/bin/env python3
##
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# # Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# # Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# # Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import pickle
import torch
from waveglow.model import WaveGlow
def split_cond_layers(model):
for WN in model.WN:
if hasattr(WN, "cond_layer"):
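            # The checkpoint stores one fused Conv1d that conditions all WN
            # layers at once; split its weights and biases into a separate
            # Conv1d per layer so they can be applied (and exported) one by one.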
n_layers = len(WN.res_skip_layers)
conv_weights = WN.cond_layer.weight
conv_bias = WN.cond_layer.bias
conv_stride = WN.cond_layer.stride
conv_dilation = WN.cond_layer.dilation
conv_padding = WN.cond_layer.padding
num_in_channels = conv_weights.size(1)
num_out_channels = conv_weights.size(0)//n_layers
kernel_size = conv_weights.size(2)
WN.cond_layers = []
for i in range(n_layers):
layer = torch.nn.Conv1d(
in_channels=num_in_channels,
out_channels=num_out_channels,
kernel_size=kernel_size,
stride=conv_stride,
padding=conv_padding,
dilation=conv_dilation)
layer.weight.data[:, :, :] = conv_weights.data[
i*num_out_channels:(i+1)*num_out_channels, :, :]
layer.bias.data[:] = conv_bias.data[
i*num_out_channels:(i+1)*num_out_channels]
layer = torch.nn.utils.weight_norm(layer, name='weight')
WN.cond_layers.append(layer)
return model
def load_waveglow(filename, waveglow_config):
class RenamingUnpickler(pickle.Unpickler):
def find_class(self, module, name):
if module == 'glow':
module = 'waveglow.model'
return super().find_class(module, name)
class RenamingPickleModule:
def load(self, f, *args, **kw_args):
return self.Unpickler(f, *args, **kw_args).load()
def Unpickler(self, f, **pickle_load_args):
return RenamingUnpickler(f, **pickle_load_args)
pickle_module = RenamingPickleModule()
blob = torch.load(filename, pickle_module=pickle_module)
if 'state_dict' in blob:
waveglow = WaveGlow(**waveglow_config).cuda()
state_dict = {}
for key, value in blob["state_dict"].items():
newKey = key
if key.startswith("module."):
newKey = key[len("module."):]
state_dict[newKey] = value
waveglow.load_state_dict(state_dict)
else:
waveglow = blob['model']
waveglow = split_cond_layers(waveglow)
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow.cuda().eval()
return waveglow
|
TensorFlow2/Recommendation/WideAndDeep/triton/deployment_toolkit/triton_performance_runner/model_analyzer | model_analyzer | model_analyzer | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import subprocess
from subprocess import CalledProcessError
from .exceptions import ModelAnalyzerException
SERVER_OUTPUT_TIMEOUT_SECS = 5
LOGGER = logging.getLogger(__name__)
class ModelAnalyzerMode:
PROFILE = "profile"
ANALYZE = "analyze"
REPORT = "report"
class ModelAnalyzerReportMode:
OFFLINE = "offline"
ONLINE = "online"
class ModelAnalyzer:
"""
Concrete Implementation of Model Analyzer interface that runs
    analyzer locally as a subprocess.
"""
_analyzer_path = "model-analyzer"
def __init__(self, config, timeout: int = None):
"""
Parameters
----------
config : AnalyzerConfig
the config object containing arguments for this server instance
"""
self._analyzer_process = None
self._analyzer_config = config
self._log = None
self._timeout = timeout
def run(self, mode: str, verbose: bool = False, quiet: bool = False, report_mode: str = None):
"""
Starts the model analyzer locally
"""
if self._analyzer_path:
cmd = []
if self._timeout:
cmd = ["timeout", str(self._timeout)]
cmd += [self._analyzer_path]
if verbose:
cmd += ["--verbose"]
if quiet:
cmd += ["--quiet"]
if report_mode:
cmd += ["-m"]
cmd += [report_mode]
cmd += [mode]
cmd += self._analyzer_config.to_cli_string().split()
LOGGER.debug(f"Model Analyze command: {cmd}")
try:
subprocess.run(cmd, check=True, start_new_session=True)
except CalledProcessError as e:
raise ModelAnalyzerException(
f"Running {self._analyzer_path} with {e.cmd} failed with"
f" exit status {e.returncode} : {e.output}"
)
|
PyTorch/LanguageModeling/BERT/triton/runner | runner | summary | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
import json
import pathlib
from typing import Dict, List, Union
# method from PEP-366 to support relative import in executed modules
import yaml
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ..deployment_toolkit.report import save_results, sort_results
from .logger import LOGGER
def save_summary(result_type: str, results: List, summary_dir: pathlib.Path) -> None:
"""
Create file with summary for results of given type
Args:
result_type: Type of results to dump
results: Results data
summary_dir: Path where results should be stored
Returns:
None
"""
if len(results) == 0:
LOGGER.warning(f"No {result_type} results found.")
return
results = sort_results(results=results)
kind_file = summary_dir / f"{result_type}_summary.csv"
save_results(filename=kind_file.as_posix(), data=results, formatted=True)
LOGGER.info(f"Summary for {result_type} stored in {kind_file}")
def load_results(*, results_path: Union[pathlib.Path, str], result_type: str, parameters: Dict) -> List:
"""
Update results
Args:
results_path: Path to file or directory from which data should be read
result_type: type of results
parameters: Parameters used in experiment which generated results
Returns:
List of result rows
"""
LOGGER.debug(f"Loading {result_type} from {results_path} for summary")
results_path = pathlib.Path(results_path)
if results_path.is_file():
files = [results_path]
elif results_path.is_dir():
files = list(results_path.iterdir())
else:
LOGGER.debug(f"Unable to load file: {results_path}. Generating empty rows.")
data = [{}]
return data
if any([file.name.endswith(".ckpt") for file in files]):
model_analyzer_metrics = results_path / "metrics-model-inference.csv"
files = [model_analyzer_metrics]
else:
files = [file for file in files if file.name.endswith(".csv")]
results = list()
parameters_cpy = {key: value for key, value in parameters.items() if key != "batch"}
for file in files:
if file.suffix == ".csv":
data = _generate_data_from_csv(file=file)
elif file.suffix == ".json":
data = _generate_data_from_json(file=file)
elif file.suffix == ".yaml":
data = _generate_data_from_yaml(file=file)
else:
raise ValueError(f"Unsupported file extension: {file.suffix}")
for item in data:
result = {**parameters_cpy, **item}
results.append(result)
LOGGER.debug(f"Loading done. Collected {len(results)} results.")
return results
def _normalize_key(*, key: str) -> str:
"""
Normalize key
Args:
key: Key to normalize
Returns:
Normalized string
"""
key = "_".join(key.split(sep=" "))
key = key.lower()
return key
def _normalize_keys(*, data: Dict) -> Dict:
"""
Normalize keys in dictionary
Args:
data: Dictionary to normalize
Returns:
Normalized dictionary
"""
keys = {_normalize_key(key=key): value for key, value in data.items()}
return keys
def _generate_data_from_csv(*, file: Union[pathlib.Path, str]) -> List[Dict]:
"""
Generate result rows from CSV file
Args:
file: CSV file path
Returns:
List of rows
"""
LOGGER.debug(f"Reading data from {file}")
filtered_rows: List[Dict] = []
with open(file, "r") as csvfile:
reader = csv.DictReader(csvfile)
for r in reader:
r = _normalize_keys(data=r)
filtered_row = {k: v for k, v in r.items()}
filtered_rows.append(filtered_row)
LOGGER.debug("done")
return filtered_rows
def _generate_data_from_json(file: pathlib.Path) -> List[Dict]:
LOGGER.info(f"Reading data from {file}")
filtered_rows: List[Dict] = list()
with open(file, "r") as json_file:
file_data = json.load(json_file)
if not isinstance(file_data, list):
file_data = [file_data]
for r in file_data:
r = _normalize_keys(data=r)
filtered_row = {k: v for k, v in r.items()}
filtered_rows.append(filtered_row)
LOGGER.info("done")
return filtered_rows
def _generate_data_from_yaml(file: pathlib.Path) -> List[Dict]:
LOGGER.info(f"Reading data from {file}")
filtered_rows: List[Dict] = list()
with open(file, "r") as yaml_file:
file_data = yaml.safe_load(yaml_file)
if not isinstance(file_data, list):
file_data = [file_data]
for r in file_data:
r = _normalize_keys(data=r)
filtered_row = {k: v for k, v in r.items()}
filtered_rows.append(filtered_row)
LOGGER.info("done")
return filtered_rows
|
PyTorch/Classification/GPUNet/triton/scripts/docker | docker | triton_inference_server | #!/usr/bin/env bash
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:=0}
WORKDIR="${WORKDIR:=$(pwd)}"
export WORKSPACE_DIR=${WORKDIR}/runner_workspace
export MODEL_REPOSITORY_PATH=${WORKSPACE_DIR}/model_store
docker run --rm -d \
-p 8000:8000 \
-p 8001:8001 \
-p 8002:8002 \
--runtime=nvidia \
-e NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES} \
-e ORT_TENSORRT_FP16_ENABLE=1 \
-v ${MODEL_REPOSITORY_PATH}:${MODEL_REPOSITORY_PATH} \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
--ipc=host \
nvcr.io/nvidia/tritonserver:21.12-py3 tritonserver \
--model-store=${MODEL_REPOSITORY_PATH} \
--strict-model-config=false \
--exit-on-error=true \
--model-control-mode=explicit |
TensorFlow2/Recommendation/WideAndDeep/triton/runner/maintainer/docker | docker | __init__ | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/benchmark/models | models | __init__ | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# flake8: noqa
from .gat_ec import GATEC
from .gcn_ec import GCNEC
MODELS = {
"gat_ec": GATEC,
"gcn_ec": GCNEC,
}
|
PyTorch/SpeechRecognition/Jasper/triton/scripts | scripts | prepare_model_repository | #!/bin/bash
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Create folder deploy/model_repo that will be used by TRITON
SCRIPT_DIR=$(cd $(dirname $0); pwd)
PROJECT_DIR=${SCRIPT_DIR}/..
DEPLOY_DIR=${PROJECT_DIR}/deploy
HOST_REPO=${DEPLOY_DIR}/model_repo
MODELS_TENSORRT=${MODELS_TENSORRT:-"jasper-tensorrt jasper-tensorrt-ensemble"}
MODELS_TS_TRACE=${MODELS_TS_TRACE:-"jasper-ts-trace jasper-ts-trace-ensemble"}
MODELS_ONNX=${MODELS_ONNX:-"jasper-onnx jasper-onnx-ensemble"}
DECODERS="decoder-ts-script"
EXTRACTORS="feature-extractor-ts-trace"
MODELS=${MODELS:-"${MODELS_ONNX} ${MODELS_TENSORRT} ${MODELS_TS_TRACE}"}
PRECISION=${PRECISION:-"fp16"}  # set PRECISION=fp32 to use the FP32 model configs
# only link working models to install directory
rm -fr ${HOST_REPO} && mkdir -p ${HOST_REPO}
if [ -f /.dockerenv ]; then # inside docker
chmod -R a+w ${HOST_REPO}
fi
echo "Setting up model repo at ${HOST_REPO}, models: ${MODELS} ..."
for m in ${EXTRACTORS} ${DECODERS} ${MODELS}; do
mkdir -p ${HOST_REPO}/$m
cp ${PROJECT_DIR}/model_repo_configs/${PRECISION}/$m/config.pbtxt ${HOST_REPO}/$m/
if [ -d "${PROJECT_DIR}/model_repo/${PRECISION}/$m/1" ]; then
echo "Creating symlink ls -sf /model_repo/${PRECISION}/$m/1 ${HOST_REPO}/$m"
ln -sf /model_repo/${PRECISION}/$m/1 ${HOST_REPO}/$m
else
mkdir -p ${HOST_REPO}/$m/1
fi
if [ -f /.dockerenv ]; then # inside docker
chmod -R a+w ${HOST_REPO}/$m
fi
done
|
PyTorch/Translation/Transformer/scripts | scripts | build_sym_alignment | # Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the LICENSE file in
# the root directory of this source tree. An additional grant of patent rights
# can be found in the PATENTS file in the same directory.
#
"""
Use this script in order to build symmetric alignments for your translation
dataset.
This script depends on fast_align and mosesdecoder tools. You will need to
build those before running the script.
fast_align:
github: http://github.com/clab/fast_align
instructions: follow the instructions in README.md
mosesdecoder:
github: http://github.com/moses-smt/mosesdecoder
instructions: http://www.statmt.org/moses/?n=Development.GetStarted
The script produces the following files under --output_dir:
text.joined - concatenation of lines from the source_file and the
target_file.
align.forward - forward pass of fast_align.
align.backward - backward pass of fast_align.
aligned.sym_heuristic - symmetrized alignment.
"""
import argparse
import os
from itertools import zip_longest
def main():
parser = argparse.ArgumentParser(description='symmetric alignment builder')
parser.add_argument('--fast_align_dir',
help='path to fast_align build directory')
parser.add_argument('--mosesdecoder_dir',
help='path to mosesdecoder root directory')
parser.add_argument('--sym_heuristic',
help='heuristic to use for symmetrization',
default='grow-diag-final-and')
parser.add_argument('--source_file',
help='path to a file with sentences '
'in the source language')
parser.add_argument('--target_file',
help='path to a file with sentences '
'in the target language')
parser.add_argument('--output_dir',
help='output directory')
args = parser.parse_args()
fast_align_bin = os.path.join(args.fast_align_dir, 'fast_align')
symal_bin = os.path.join(args.mosesdecoder_dir, 'bin', 'symal')
sym_fast_align_bin = os.path.join(
args.mosesdecoder_dir, 'scripts', 'ems',
'support', 'symmetrize-fast-align.perl')
# create joined file
joined_file = os.path.join(args.output_dir, 'text.joined')
with open(args.source_file, 'r') as src, open(args.target_file, 'r') as tgt:
with open(joined_file, 'w') as joined:
for s, t in zip_longest(src, tgt):
print('{} ||| {}'.format(s.strip(), t.strip()), file=joined)
bwd_align_file = os.path.join(args.output_dir, 'align.backward')
# run forward alignment
fwd_align_file = os.path.join(args.output_dir, 'align.forward')
fwd_fast_align_cmd = '{FASTALIGN} -i {JOINED} -d -o -v > {FWD}'.format(
FASTALIGN=fast_align_bin,
JOINED=joined_file,
FWD=fwd_align_file)
assert os.system(fwd_fast_align_cmd) == 0
# run backward alignment
bwd_align_file = os.path.join(args.output_dir, 'align.backward')
bwd_fast_align_cmd = '{FASTALIGN} -i {JOINED} -d -o -v -r > {BWD}'.format(
FASTALIGN=fast_align_bin,
JOINED=joined_file,
BWD=bwd_align_file)
assert os.system(bwd_fast_align_cmd) == 0
# run symmetrization
sym_out_file = os.path.join(args.output_dir, 'aligned')
sym_cmd = '{SYMFASTALIGN} {FWD} {BWD} {SRC} {TGT} {OUT} {HEURISTIC} {SYMAL}'.format(
SYMFASTALIGN=sym_fast_align_bin,
FWD=fwd_align_file,
BWD=bwd_align_file,
SRC=args.source_file,
TGT=args.target_file,
OUT=sym_out_file,
HEURISTIC=args.sym_heuristic,
SYMAL=symal_bin
)
assert os.system(sym_cmd) == 0
if __name__ == '__main__':
main()
|
CUDA-Optimized/FastSpeech/tacotron2 | tacotron2 | audio_processing | # BSD 3-Clause License
# Copyright (c) 2018-2020, NVIDIA Corporation
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""https://github.com/NVIDIA/tacotron2"""
import torch
import numpy as np
from scipy.signal import get_window
import librosa.util as librosa_util
def window_sumsquare(window, n_frames, hop_length=200, win_length=800,
n_fft=800, dtype=np.float32, norm=None):
"""
# from librosa 0.6
Compute the sum-square envelope of a window function at a given hop length.
This is used to estimate modulation effects induced by windowing
observations in short-time fourier transforms.
Parameters
----------
window : string, tuple, number, callable, or list-like
Window specification, as in `get_window`
n_frames : int > 0
The number of analysis frames
hop_length : int > 0
The number of samples to advance between frames
win_length : [optional]
The length of the window function. By default, this matches `n_fft`.
n_fft : int > 0
The length of each analysis frame.
dtype : np.dtype
The data type of the output
Returns
-------
wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
The sum-squared envelope of the window function
"""
if win_length is None:
win_length = n_fft
n = n_fft + hop_length * (n_frames - 1)
x = np.zeros(n, dtype=dtype)
# Compute the squared window at the desired length
win_sq = get_window(window, win_length, fftbins=True)
win_sq = librosa_util.normalize(win_sq, norm=norm)**2
win_sq = librosa_util.pad_center(win_sq, n_fft)
# Fill the envelope
for i in range(n_frames):
sample = i * hop_length
x[sample:min(n, sample + n_fft)
] += win_sq[:max(0, min(n_fft, n - sample))]
return x
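# Example (illustrative): with the defaults above (n_fft=800, hop_length=200) and
# n_frames=100, the returned envelope has length 800 + 200 * 99 = 20600 samples.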
def griffin_lim(magnitudes, stft_fn, n_iters=30):
"""
PARAMS
------
magnitudes: spectrogram magnitudes
stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods
"""
angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size())))
angles = angles.astype(np.float32)
angles = torch.autograd.Variable(torch.from_numpy(angles))
signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
for i in range(n_iters):
_, angles = stft_fn.transform(signal)
signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
return signal
def dynamic_range_compression(x, C=1, clip_val=1e-5):
"""
PARAMS
------
C: compression factor
"""
return torch.log(torch.clamp(x, min=clip_val) * C)
def dynamic_range_decompression(x, C=1):
"""
PARAMS
------
C: compression factor used to compress
"""
return torch.exp(x) / C
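# Note: for inputs at or above clip_val, dynamic_range_decompression inverts
# dynamic_range_compression when called with the same compression factor C, i.e.
# dynamic_range_decompression(dynamic_range_compression(x, C), C) equals clamp(x, min=clip_val).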
|
DGLPyTorch/DrugDiscovery/SE3Transformer | SE3Transformer | .gitignore | data/
.DS_Store
*wandb/
*.pt
*.swp
# added by FAFU
.idea/
cache/
downloaded/
*.lprof
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
**/benchmark
**/results
*.pkl
*.log |
TensorFlow2/Classification/ConvNets/efficientnet_v1/B0/training/AMP | AMP | convergence_8xA100-80G | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
horovodrun -np 8 bash ./scripts/bind.sh --cpu=exclusive --ib=single -- python3 main.py \
--cfg config/efficientnet_v1/b0_cfg.py \
--mode train_and_eval \
--use_amp \
--use_xla \
--model_dir ./output \
--data_dir /data \
--log_steps 100 \
--max_epochs 500 \
--save_checkpoint_freq 5 \
--train_batch_size 1024 \
--eval_batch_size 1024 \
--augmenter_name autoaugment \
--lr_decay cosine \
--memory_limit 81000 \
--defer_img_mixing \
--moving_average_decay 0.9999 \
--lr_init 0.005
|
PyTorch/SpeechSynthesis/Tacotron2 | Tacotron2 | config | {
"audio": {
"max-wav-value": 32768.0,
"sampling-rate": 22050,
"filter-length": 1024,
"hop-length": 256,
"win-length": 1024,
"mel-fmin": 0.0,
"mel-fmax": 7000.0
}
}
|
TensorFlow/Detection/SSD/models/research/slim/datasets | datasets | preprocess_imagenet_validation_data | #!/usr/bin/python
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
r"""Process the ImageNet Challenge bounding boxes for TensorFlow model training.
Associate the ImageNet 2012 Challenge validation data set with labels.
The raw ImageNet validation data set is expected to reside in JPEG files
located in the following directory structure.
data_dir/ILSVRC2012_val_00000001.JPEG
data_dir/ILSVRC2012_val_00000002.JPEG
...
data_dir/ILSVRC2012_val_00050000.JPEG
This script moves the files into a directory structure like such:
data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
...
where 'n01440764' is the unique synset label associated with
these images.
This directory reorganization requires a mapping from validation image
number (i.e. suffix of the original file) to the associated label. This
is provided in the ImageNet development kit via a Matlab file.
In order to make life easier and divorce ourselves from Matlab, we instead
supply a custom text file that provides this mapping for us.
Sample usage:
./preprocess_imagenet_validation_data.py ILSVRC2012_img_val \
imagenet_2012_validation_synset_labels.txt
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
from six.moves import xrange # pylint: disable=redefined-builtin
if __name__ == '__main__':
if len(sys.argv) < 3:
print('Invalid usage\n'
'usage: preprocess_imagenet_validation_data.py '
'<validation data dir> <validation labels file>')
sys.exit(-1)
data_dir = sys.argv[1]
validation_labels_file = sys.argv[2]
# Read in the 50000 synsets associated with the validation data set.
labels = [l.strip() for l in open(validation_labels_file).readlines()]
unique_labels = set(labels)
# Make all sub-directories in the validation data dir.
for label in unique_labels:
labeled_data_dir = os.path.join(data_dir, label)
os.makedirs(labeled_data_dir)
# Move all of the images to the appropriate sub-directories.
for i in xrange(len(labels)):
basename = 'ILSVRC2012_val_000%.5d.JPEG' % (i + 1)
original_filename = os.path.join(data_dir, basename)
if not os.path.exists(original_filename):
print('Failed to find: ', original_filename)
sys.exit(-1)
new_filename = os.path.join(data_dir, labels[i], basename)
os.rename(original_filename, new_filename)
|
TensorFlow/Segmentation/UNet_Industrial/utils | utils | __init__ | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
from utils import hooks
from utils import cmdline_helper
from utils import hvd_utils
from utils import image_processing
from utils import logging
from utils import losses
from utils import metrics
|
Tools/PyTorch/TimeSeriesPredictionPlatform/conf/trainer/callbacks/callbacks | callbacks | throughput_benchmark | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_target_: callbacks.ctl_callbacks.ThroughputBenchmark
warmup_epochs: 0
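# Note: _target_ follows the Hydra convention; the trainer is assumed to construct this
# callback via hydra.utils.instantiate on the composed config, e.g. (illustrative):
#   callback = hydra.utils.instantiate(cfg)  # cfg holds _target_ and warmup_epochs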
|
PyTorch/LanguageModeling/BERT/data | data | Downloader | # Copyright (c) 2019-2020 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from GooglePretrainedWeightDownloader import GooglePretrainedWeightDownloader
from NVIDIAPretrainedWeightDownloader import NVIDIAPretrainedWeightDownloader
from WikiDownloader import WikiDownloader
from BooksDownloader import BooksDownloader
from GLUEDownloader import GLUEDownloader
from SquadDownloader import SquadDownloader
class Downloader:
def __init__(self, dataset_name, save_path):
self.dataset_name = dataset_name
self.save_path = save_path
def download(self):
if self.dataset_name == 'bookscorpus':
self.download_bookscorpus()
elif self.dataset_name == 'wikicorpus_en':
self.download_wikicorpus('en')
elif self.dataset_name == 'wikicorpus_zh':
self.download_wikicorpus('zh')
elif self.dataset_name == 'google_pretrained_weights':
self.download_google_pretrained_weights()
elif self.dataset_name == 'nvidia_pretrained_weights':
self.download_nvidia_pretrained_weights()
elif self.dataset_name in {'mrpc', 'sst-2'}:
self.download_glue(self.dataset_name)
elif self.dataset_name == 'squad':
self.download_squad()
elif self.dataset_name == 'all':
self.download_bookscorpus()
self.download_wikicorpus('en')
self.download_wikicorpus('zh')
self.download_google_pretrained_weights()
self.download_nvidia_pretrained_weights()
self.download_glue('mrpc')
self.download_glue('sst-2')
self.download_squad()
else:
print(self.dataset_name)
assert False, 'Unknown dataset_name provided to downloader'
def download_bookscorpus(self):
downloader = BooksDownloader(self.save_path)
downloader.download()
def download_wikicorpus(self, language):
downloader = WikiDownloader(language, self.save_path)
downloader.download()
def download_google_pretrained_weights(self):
downloader = GooglePretrainedWeightDownloader(self.save_path)
downloader.download()
def download_nvidia_pretrained_weights(self):
downloader = NVIDIAPretrainedWeightDownloader(self.save_path)
downloader.download()
def download_glue(self, task_name):
downloader = GLUEDownloader(self.save_path)
downloader.download(task_name)
def download_squad(self):
downloader = SquadDownloader(self.save_path)
downloader.download()
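# Example usage (illustrative; dataset names follow the dispatch in download() above):
#   downloader = Downloader('squad', '/workspace/bert/data')
#   downloader.download()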
|