text (string, 5 to 261k chars) | id (string, 16 to 106 chars) | metadata (dict) | __index_level_0__ (int64, 0 to 266)
---|---|---|---|
if [ "$#" -ne 4 ]; then
echo USAGE: ./process_weights.sh WEIGHTS_PATH OUTPUT_WEIGHTS_PATH MODEL_NAME GCS_PATH
exit 1
fi
WEIGHTS=$1
OUTPUT_WEIGHTS=$2
MODEL=$3
GCS_PATH=$4
python3 remove_top.py --weights_path=$WEIGHTS --output_weights_path=$OUTPUT_WEIGHTS --model_name=$MODEL
echo With top: $GCS_PATH/$WEIGHTS
echo With top checksum: $(shasum -a 256 $WEIGHTS)
echo Without top: $GCS_PATH/$OUTPUT_WEIGHTS
echo Without top checksum: $(shasum -a 256 $OUTPUT_WEIGHTS)
gsutil cp $WEIGHTS $GCS_PATH/
gsutil cp $OUTPUT_WEIGHTS $GCS_PATH/
gsutil acl ch -u AllUsers:R $GCS_PATH/$WEIGHTS
gsutil acl ch -u AllUsers:R $GCS_PATH/$OUTPUT_WEIGHTS
| keras-cv/shell/weights/process_backbone_weights.sh/0 | {
"file_path": "keras-cv/shell/weights/process_backbone_weights.sh",
"repo_id": "keras-cv",
"token_count": 258
} | 76 |
## Usage of activations
Activations can be used either through an `Activation` layer, or through the `activation` argument supported by all forward layers:
```python
from keras.layers.core import Activation, Dense
model.add(Dense(64))
model.add(Activation('tanh'))
```
The code above is equivalent to:
```python
model.add(Dense(64, activation='tanh'))
```
You can also pass an element-wise TensorFlow/Theano/CNTK function as the activation:
```python
from keras import backend as K
def tanh(x):
    return K.tanh(x)
model.add(Dense(64, activation=tanh))
model.add(Activation(tanh))
```
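Parameterized activations such as `PReLU` and `LeakyReLU` (covered under "More advanced activations" below) are added as standalone layers rather than through the `activation` argument. A minimal sketch, assuming the standalone Keras 2 API used throughout this page:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.advanced_activations import LeakyReLU, PReLU

model = Sequential()
model.add(Dense(64, input_shape=(16,)))
# LeakyReLU applies a small fixed slope to negative inputs.
model.add(LeakyReLU(alpha=0.3))
model.add(Dense(64))
# PReLU learns its negative slope during training.
model.add(PReLU())
```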
## Available activations
### softmax
```python
softmax(x, axis=-1)
```
Softmax activation function.
__Arguments__
- __x__: Input tensor.
- __axis__: Integer, the axis along which the softmax normalization is applied.
__Returns__
Tensor, output of the softmax transformation.
__Raises__
- __ValueError__: In case `dim(x) == 1`.
---
### elu
```python
elu(x, alpha=1.0)
```
---
### selu
```python
selu(x)
```
Scaled Exponential Linear Unit. (Klambauer et al., 2017)
__Arguments__
- __x__: A tensor or variable to compute the activation for.
__References__
- [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
---
### softplus
```python
softplus(x)
```
---
### softsign
```python
softsign(x)
```
---
### relu
```python
relu(x, alpha=0.0, max_value=None)
```
---
### tanh
```python
tanh(x)
```
---
### sigmoid
```python
sigmoid(x)
```
---
### hard_sigmoid
```python
hard_sigmoid(x)
```
---
### linear
```python
linear
```
---
## More advanced activations
Activations that are more complex than a simple TensorFlow/Theano/CNTK function (for example, stateful or learnable activations) are available as [Advanced Activation layers](layers/advanced-activations.md). They can be found in the `keras.layers.advanced_activations` module and include `PReLU` and `LeakyReLU`.
| keras-docs-ja/sources/activations.md/0 | {
"file_path": "keras-docs-ja/sources/activations.md",
"repo_id": "keras-docs-ja",
"token_count": 972
} | 77 |
<span style="float:right;">[[source]](https://github.com/keras-team/keras/blob/master/keras/layers/embeddings.py#L11)</span>
### Embedding
```python
keras.layers.embeddings.Embedding(input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None)
```
Turns positive integers (indexes) into dense vectors of a fixed size.
Example: [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
This layer can only be used as the first layer in a model.
__Example__
```python
model = Sequential()
model.add(Embedding(1000, 64, input_length=10))
# the model will take as input an integer matrix of size (batch, input_length).
# the largest integer (i.e. word index) in the input should be no larger than 999 (vocabulary size).
# now model.output_shape == (None, 10, 64), where None is the batch dimension.
input_array = np.random.randint(1000, size=(32, 10))
model.compile('rmsprop', 'mse')
output_array = model.predict(input_array)
assert output_array.shape == (32, 10, 64)
```
__Arguments__
- __input_dim__: int > 0. Size of the vocabulary, i.e. maximum integer index in the input data + 1.
- __output_dim__: int >= 0. Dimension of the dense embedding.
- __embeddings_initializer__: [Initializer](../initializers.md) for the `embeddings` matrix.
- __embeddings_regularizer__: [Regularizer](../regularizers.md) applied to the `embeddings` matrix.
- __embeddings_constraint__: [Constraint](../constraints.md) applied to the `embeddings` matrix.
- __mask_zero__: Boolean. Whether the input value 0 is a special "padding" value that should be masked out. This is useful with [recurrent layers](recurrent.md) that may take variable-length input. If this is `True`, all subsequent layers in the model need to support masking, otherwise an exception will be raised. When `mask_zero` is `True`, index 0 cannot be used in the vocabulary (`input_dim` should equal vocabulary size + 1).
- __input_length__: Length of input sequences, when it is constant. This argument is required if you are going to connect `Flatten` and then `Dense` layers after this one (without it, the shape of the dense outputs cannot be computed).
__Input shape__
2D tensor with shape `(batch_size, sequence_length)`.
__Output shape__
3D tensor with shape `(batch_size, sequence_length, output_dim)`.
__References__
- [A Theoretically Grounded Application of Dropout in Recurrent Neural Networks](http://arxiv.org/abs/1512.05287)
| keras-docs-ja/sources/layers/embeddings.md/0 | {
"file_path": "keras-docs-ja/sources/layers/embeddings.md",
"repo_id": "keras-docs-ja",
"token_count": 1185
} | 78 |
## TimeseriesGenerator
```python
keras.preprocessing.sequence.TimeseriesGenerator(data, targets, length, sampling_rate=1, stride=1, start_index=0, end_index=None, shuffle=False, reverse=False, batch_size=128)
```
ๆ็ณปๅใใผใฟใฎใใผใฟใฎใใใใ็ๆใใใใใฎใฆใผใใฃใชใใฃใฏใฉใน๏ผ
ใใฎใฏใฉในใฏ่จ็ทดใ่ฉไพกใฎใใใฎใใใใ็ๆใใใใใซ๏ผในใใฉใคใใๅฑฅๆญดใฎ้ทใ็ญใฎใใใชๆ็ณปๅใใผใฟใจใจใใซ๏ผ็ญ้้ใซ้ใใใใใใผใฟ็นใฎใทใผใฑใณในใๅใ่พผใฟใพใ๏ผ
__ๅผๆฐ__
- __data__: ้ฃ็ถ็ใชใใผใฟ็น๏ผใฟใคใ ในใใใ๏ผใๅซใใ ใชในใใNumpy้
ๅใฎใใใชใคใณใใใฏในๅฏ่ฝใชใธใงใใฌใผใฟ๏ผใใฎใใผใฟใฏ2ๆฌกๅ
ใงใใๅฟ
่ฆใใใ๏ผ่ปธ0ใฏๆ้ใฎๆฌกๅ
ใงใใไบใๆๅพ
ใใใพใ๏ผ
- __targets__: ใใผใฟใฎไธญใงใฟใคใ ในใใใใซๅฏพๅฟใใใฟใผใฒใใ๏ผใใผใฟใจๅใ้ทใใงใใๅฟ
่ฆใใใใพใ๏ผ
- __length__: ๏ผใฟใคใ ในใใใๆฐใซใใใฆ๏ผๅบๅใทใผใฑใณในใฎ้ทใ๏ผ
- __sampling_rate__: ใทใผใฑใณในๅ
ใง้ฃ็ถใใ็ฌ็ซใฎๆ้๏ผใฌใผใ`r`ใซใใฃใฆๆฑบใพใใฟใคใ ในใใใ`data[i]`, `data[i-r]`, ... `data[i - length]`ใฏใตใณใใซใฎใทใผใฑใณใน็ๆใซไฝฟใใใพใ๏ผ
- __stride__: ้ฃ็ถใใๅบๅใทใผใฑใณในใฎ็ฏๅฒ๏ผ้ฃ็ถใใๅบๅใตใณใใซใฏในใใฉใคใ`s`ใฎๅคใซใใฃใฆๆฑบใพใ`data[i]`, `data[i+s]`, `data[i+2*s]`ใชใฉใใๅบๆฅใฆใใพใ๏ผ
- __start_index__, __end_index__: `start_index`ใใๅใพใใฏ`end_index`ใใๅพใฎใใผใฟ็นใฏๅบๅใทใผใฑใณในใงใฏไฝฟใใใพใใ๏ผใใใฏใในใใๆค่จผใฎใใใซใใผใฟใฎไธ้จใไบ็ดใใใฎใซไพฟๅฉใงใ๏ผ
- __shuffle__: ๅบๅใตใณใใซใใทใฃใใใซใใใ๏ผๆ็ณปๅ้ ใซใใใ
- __reverse__: ็็ๅค๏ผ`true`ใชใๅๅบๅใตใณใใซใซใใใใฟใคใ ในใใใใ้้ ใซใชใใพใ๏ผ
- __batch_size__: ๅใใใใซใใใๆ็ณปๅใตใณใใซๆฐ๏ผใใใใๆๅพใฎ1ใคใ้คใใพใ๏ผ๏ผ
__ๆปใๅค__
[Sequence](/utils/#sequence)ใคใณในใฟใณใน๏ผ
__ไพ__
```python
from keras.preprocessing.sequence import TimeseriesGenerator
import numpy as np
data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])
data_gen = TimeseriesGenerator(data, targets,
length=10, sampling_rate=2,
batch_size=2)
assert len(data_gen) == 20
batch_0 = data_gen[0]
x, y = batch_0
assert np.array_equal(x,
np.array([[[0], [2], [4], [6], [8]],
[[1], [3], [5], [7], [9]]]))
assert np.array_equal(y,
np.array([[10], [11]]))
```
---
## pad_sequences
```python
pad_sequences(sequences, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.0)
```
ใทใผใฑใณในใๅใ้ทใใซใชใใใใซ่ฉฐใใพใ๏ผ
`num_samples` ใทใผใฑใณในใใๆงๆใใใใชในใ๏ผในใซใฉใฎใชในใ๏ผใshapeใ`(num_samples, num_timesteps)`ใฎ2ๆฌกๅ
ใฎNumpy ้
ๅใซๅคๆใใพใ๏ผ`num_timesteps`ใฏ`maxlen`ๅผๆฐใไธใใใใใฐ`maxlen`ใซ๏ผไธใใใใชใใใฐๆๅคงใฎใทใผใฑใณใน้ทใซใชใใพใ๏ผ
`num_timesteps`ใใ็ญใใทใผใฑใณในใฏ๏ผ`value`ใงใใใฃใณใฐใใใพใ๏ผ
`num_timesteps`ใใ้ทใใทใผใฑใณในใฏ๏ผๆๅฎใใใ้ทใใซๅใ่ฉฐใใใใพใ๏ผ
ใใใฃใณใฐใจๅใ่ฉฐใใฎไฝ็ฝฎใฏใใใใ`padding`ใจ`truncating`ใซใใฃใฆๆฑบใใใใพใ๏ผ
pre-paddingใใใใฉใซใใงใ๏ผ
__ๅผๆฐ__
- __sequences__: ใชในใใฎใชในใ๏ผๅ่ฆ็ด ใฏใใใใใทใผใฑใณในใงใ๏ผ
- __maxlen__: ๆดๆฐ๏ผใทใผใฑใณในใฎๆๅคง้ท๏ผ
- __dtype__: ๅบๅใทใผใฑใณในใฎๅ๏ผ
- __padding__: ๆๅญๅ๏ผ'pre'ใพใใฏ'post'๏ผๅใทใผใฑใณในใฎๅๅพใฉใกใใๅใใใ๏ผ
- __truncating__: ๆๅญๅ๏ผ'pre'ใพใใฏ'post'๏ผ`maxlen`ใใ้ทใใทใผใฑใณในใฎๅๅพใฉใกใใๅใ่ฉฐใใใ๏ผ
- __value__: ๆตฎๅๅฐๆฐ็นๆฐ๏ผใใใฃใณใฐใใๅค๏ผ
__ๆปใๅค__
- __x__: shapeใ`(len(sequences), maxlen)`ใฎNumpy้
ๅ๏ผ
__Raises__
- __ValueError__: `truncating`ใ`padding`ใ็กๅนใชๅคใฎๅ ดๅ๏ผใพใใฏ`sequences`ใฎใจใณใใชใ็กๅนใชshapeใฎๅ ดๅ๏ผ
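A minimal usage sketch (input values chosen purely for illustration):

```python
from keras.preprocessing.sequence import pad_sequences

sequences = [[1, 2, 3], [4, 5], [6]]

# Default behaviour: pad at the front ('pre') with 0 up to the longest sequence.
print(pad_sequences(sequences))
# [[1 2 3]
#  [0 4 5]
#  [0 0 6]]

# Pad/truncate to a fixed length of 2, padding and truncating at the end.
print(pad_sequences(sequences, maxlen=2, padding='post', truncating='post'))
# [[1 2]
#  [4 5]
#  [6 0]]
```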
---
## skipgrams
```python
skipgrams(sequence, vocabulary_size, window_size=4, negative_samples=1.0, shuffle=True, categorical=False, sampling_table=None, seed=None)
```
skipgramใฎๅ่ชใใขใ็ๆใใพใ๏ผ
ใใฎ้ขๆฐใฏๅ่ชใคใณใใใฏในใฎใทใผใฑใณใน๏ผๆดๆฐใฎใชในใ๏ผใไปฅไธใฎๅฝขๅผใฎๅ่ชใฎใฟใใซใซๅคๆใใพใ:
- ๏ผๅ่ช, ๅใๆ่ใงๅบ็พใใๅ่ช๏ผ, 1ใฎใฉใใซ ๏ผๆญฃไพ๏ผ๏ผ
- ๏ผๅ่ช, ่ชๅฝไธญใฎใฉใณใใ ใชๅ่ช๏ผ, 0ใฎใฉใใซ ๏ผ่ฒ ไพ๏ผ๏ผ
Skipgramใฎ่ฉณ็ดฐใฏMikolovใใฎ่ซๆใๅ็
งใใฆใใ ใใ: [Efficient Estimation of Word Representations in Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf)
__ๅผๆฐ__
- __sequence__: ๅ่ชใฎใทใผใฑใณใน๏ผๆ๏ผใง๏ผๅ่ชใคใณใใใฏใน๏ผๆดๆฐ๏ผใฎใชในใใจใใฆใจใณใณใผใใใใใใฎ๏ผ`sampling_table`ใไฝฟใๅ ดๅ๏ผๅ่ชใคใณใใใฏในใฏๅ็
งใใใใผใฟใปใใใฎไธญใงๅ่ชใฎใฉใณใฏใซใใฃใใฉใณใฏใงใใไบใๆๅพ
ใใใพใ๏ผไพใใฐ10ใฏ10็ช็ฎใซ็ๅใใใใผใฏใณใซใจใณใณใผใใใใพใ๏ผ๏ผใคใณใใใฏใน0ใฏ็กๆๅณใช่ชใๆๅพ
ใใ๏ผในใญใใใใใพใ๏ผ
- __vocabulary_size__: ๆดๆฐ๏ผๅฏ่ฝใชๅ่ชใคใณใใใฏในใฎๆๅคงๅค+1๏ผ
- __window_size__: ๆดๆฐ๏ผใตใณใใชใณใฐใใใฆใฃใณใใฆใฎใตใคใบ๏ผๆ่ก็ใซใฏๅๅใฎใฆใฃใณใใฆ๏ผ๏ผๅ่ช`w_i`ใฎใฆใฃใณใใฆใฏ`[i - window_size, i + window_size+1]`ใซใชใใพใ๏ผ
- __negative_samples__: 0ไปฅไธใฎๆตฎๅๅฐๆฐ็นๆฐ๏ผ0ใฏใใฌใใฃใใตใณใใซๆฐใ0ใซใชใใพใ๏ผ1ใฏใใฌใใฃใใตใณใใซๆฐใใใธใใฃใใตใณใใซใจๅใๆฐใซใชใใพใ๏ผ
- __shuffle__: ๅ่ชใฎ็ตใๅคใใๅใซใทใฃใใใซใใใใฉใใ๏ผ
- __categorical__: ็็ๅค๏ผFalseใชใใฉใใซใฏๆดๆฐ๏ผไพใใฐ`[0, 1, 1 .. ]`๏ผใซใชใ๏ผ`True`ใชใใซใใดใชใซใซ๏ผไพใใฐ`[[1,0],[0,1],[0,1] .. ]`ใซใชใใพใ๏ผ
- __sampling_table__: ใตใคใบใ`vocabulary_size`ใฎ1ๆฌกๅ
้
ๅ๏ผใจใณใใชiใฏใคใณใใใฏในiใๆใคๅ่ช๏ผใใผใฟใปใใไธญใงi็ช็ฎใซ้ ปๅบใใๅ่ชใๆณๅฎใใพใ๏ผใฎใตใณใใชใณใฐ็ขบ็ใงใ๏ผ
- __seed__: ใฉใณใใ ใทใผใ๏ผ
__ๆปใๅค__
couples, labels: `couples`ใฏๆดๆฐใฎใใข๏ผ`labels`ใฏ0ใ1ใฎใใใใใงใ๏ผ
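A small sketch of generating skip-gram pairs from one encoded sentence (indices and sizes are illustrative; the exact output varies because negative examples are drawn at random). For realistic vocabulary sizes you would typically also pass a `sampling_table` built with `make_sampling_table`, documented below:

```python
from keras.preprocessing.sequence import skipgrams

vocabulary_size = 6
sentence = [1, 2, 3, 4, 5]      # word indices; index 0 is reserved and skipped

couples, labels = skipgrams(sentence, vocabulary_size,
                            window_size=2, negative_samples=1.0,
                            shuffle=False, seed=42)

for (target, context), label in zip(couples, labels):
    print(target, context, label)   # label 1: true context pair, label 0: random negative pair
```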
__ๆณจๆ__
ๆ
ฃไพใซใใ๏ผ่ชๅฝใฎไธญใงใคใณใใใฏในใ0ใฎใใฎใฏๅ่ชใงใฏใชใ๏ผในใญใใใใใพใ๏ผ
---
## make_sampling_table
```python
make_sampling_table(size, sampling_factor=1e-05)
```
ใฏใผใใฉใณใฏใใผในใฎ็ขบ็็ใชใตใณใใชใณใฐใใผใใซใ็ๆใใพใ๏ผ
`skipgrams`ใฎ`sampling_table`ๅผๆฐใ็ๆใใใใใซๅฉ็จใใพใ๏ผ`sampling_table[i]`ใฏใใผใฟใปใใไธญใงi็ช็ฎใซ้ ปๅบใใๅ่ชใใตใณใใชใณใฐใใ็ขบ็ใงใ๏ผใใฉใณในใไฟใคใใใซ๏ผ้ ปๅบ่ชใฏใใใใไฝใ้ ปๅบฆใงใตใณใใชใณใฐใใใพใ๏ผ๏ผ
ใตใณใใชใณใฐใฎ็ขบ็ใฏword2vecใงไฝฟใใใใตใณใใชใณใฐๅๅธใซๅพใฃใฆ็ๆใใใพใ๏ผ
`p(word) = min(1, sqrt(word_frequency / sampling_factor) / (word_frequency / sampling_factor))`
้ ปๅบฆ๏ผ้ ไฝ๏ผใฎๆฐๅค่ฟไผผใๅพใ๏ผs=1ใฎ๏ผใธใใใฎๆณๅใซๅ่ชใฎ้ ปๅบฆใๅพใฃใฆใใใจไปฎๅฎใใฆใใพใ๏ผ
`frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`
ใฎ`gamma`ใฏใชใคใฉใผใปใในใฑใญใผใๅฎๆฐใงใ๏ผ
__ๅผๆฐ__
- __size__: ๆดๆฐ๏ผใตใณใใชใณใฐๅฏ่ฝใช่ชๅฝๆฐ๏ผ
- __sampling_factor__: word2vecใฎๅผใซใใใใตใณใใชใณใฐๅ ๅญ๏ผ
__ๆปใๅค__
้ทใใ`size`ใฎ1ๆฌกๅ
ใฎNumpy้
ๅใง๏ผi็ช็ฎใฎ่ฆ็ด ใฏใฉใณใฏiใฎใฏใผใใใตใณใใชใณใฐใใใ็ขบ็ใงใ๏ผ
| keras-docs-ja/sources/preprocessing/sequence.md/0 | {
"file_path": "keras-docs-ja/sources/preprocessing/sequence.md",
"repo_id": "keras-docs-ja",
"token_count": 3875
} | 79 |
# About Keras layers
All Keras layers have the following methods in common:
- `layer.get_weights()`: returns the weights of the layer as a list of NumPy arrays.
- `layer.set_weights(weights)`: sets the weights of the layer from a list of NumPy arrays (which must have the same shapes as the output of `get_weights`).
- `layer.get_config()`: returns a dictionary containing the configuration of the layer. Any layer can be reinstantiated from its config as follows:
```python
layer = Dense(32)
config = layer.get_config()
reconstructed_layer = Dense.from_config(config)
```
or:
```python
from keras import layers
config = layer.get_config()
layer = layers.deserialize({'class_name': layer.__class__.__name__,
                            'config': config})
```
If a layer has a single node (i.e. if it is not a shared layer), you can get its input tensor, output tensor, input shape and output shape via:
- `layer.input`
- `layer.output`
- `layer.input_shape`
- `layer.output_shape`
If the layer has multiple nodes (see [the concept of layer node](/getting-started/functional-api-guide/#the-concept-of-layer-node)), you can use the following methods instead:
- `layer.get_input_at(node_index)`
- `layer.get_output_at(node_index)`
- `layer.get_input_shape_at(node_index)`
- `layer.get_output_shape_at(node_index)`
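A brief sketch of a layer with two nodes, created by reusing (sharing) one layer on two inputs; shapes here are illustrative:

```python
from keras.layers import Input, LSTM

shared_lstm = LSTM(32)

a = Input(shape=(10, 64))
b = Input(shape=(20, 64))
encoded_a = shared_lstm(a)   # creates node 0 of shared_lstm
encoded_b = shared_lstm(b)   # creates node 1 of shared_lstm

print(shared_lstm.get_input_shape_at(0))   # (None, 10, 64)
print(shared_lstm.get_input_shape_at(1))   # (None, 20, 64)
# shared_lstm.input_shape would raise here, because the layer has
# more than one node with different input shapes.
```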
| keras-docs-ko/sources/layers/about-keras-layers.md/0 | {
"file_path": "keras-docs-ko/sources/layers/about-keras-layers.md",
"repo_id": "keras-docs-ko",
"token_count": 1050
} | 80 |
# Model ํด๋์ค API
ํจ์ํ<sub>Functional</sub> API๋ฅผ ์ฌ์ฉํ ๊ฒฝ์ฐ ๋ค์๊ณผ ๊ฐ์ ๋ฐฉ์์ผ๋ก ์
์ถ๋ ฅ ํ
์๋ฅผ ์ง์ ํ์ฌ `Model` ์ธ์คํด์ค๋ฅผ ๋ง๋ค ์ ์์ต๋๋ค.
```python
from keras.models import Model
from keras.layers import Input, Dense
a = Input(shape=(32,))
b = Dense(32)(a)
model = Model(inputs=a, outputs=b)
```
`a`๋ก๋ถํฐ `b`๋ฅผ ์ฐ์ถํ๋๋ฐ ํ์ํ ๋ชจ๋ ์ธต์ ์ด ๋ชจ๋ธ์ ๋ด์ ์ ์์ต๋๋ค. ๋ํ ๋ค์ค ์
๋ ฅ ํน์ ๋ค์ค ์ถ๋ ฅ ๋ชจ๋ธ์ ๊ฒฝ์ฐ ์
์ถ๋ ฅ ๋์์ ๋ฆฌ์คํธ๋ก ๋ฌถ์ด์ ์ง์ ํ ์ ์์ต๋๋ค.
```python
model = Model(inputs=[a1, a2], outputs=[b1, b2, b3])
```
๋ณด๋ค ์์ธํ `Model` ํ์ฉ๋ฒ์ [์ผ๋ผ์ค ํจ์ํ API ์์ํ๊ธฐ](/getting-started/functional-api-guide)๋ฅผ ์ฐธ์กฐํ์ญ์์ค.
## Model ๋ฉ์๋
### compile
```python
compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None)
```
ํ์ต์ํฌ ๋ชจ๋ธ์ ๊ตฌ์ฑํฉ๋๋ค.
__์ธ์__
- __optimizer__: ํ์ต์ ์ฌ์ฉํ ์ต์ ํ ํจ์<sub>optimizer</sub>๋ฅผ ์ง์ ํฉ๋๋ค. ์ผ๋ผ์ค๊ฐ ์ ๊ณตํ๋ ์ต์ ํ ํจ์์ ์ด๋ฆ ๋ฌธ์์ด<sub>string</sub> ๋๋ ๊ฐ๋ณ ์ต์ ํ ํจ์์ ์ธ์คํด์ค๋ฅผ ์
๋ ฅํฉ๋๋ค. ์์ธํ ์ฌํญ์ [์ต์ ํ ํจ์](/optimizers)๋ฅผ ์ฐธ๊ณ ํ์ญ์์ค.
- __loss__: ํ์ต์ ์ฌ์ฉํ ์์ค ํจ์<sub>loss function</sub>๋ฅผ ์ง์ ํฉ๋๋ค. ์ผ๋ผ์ค๊ฐ ์ ๊ณตํ๋ ์์ค ํจ์์ ์ด๋ฆ ๋ฌธ์์ด ๋๋ ๊ฐ๋ณ ์์ค ํจ์์ ์ธ์คํด์ค๋ฅผ ์
๋ ฅํฉ๋๋ค. ์์ธํ ์ฌํญ์ [์์ค ํจ์](/losses)์ ์ฐธ๊ณ ํ์ญ์์ค. ๋ชจ๋ธ์ด ๋ค์ค์ ๊ฒฐ๊ณผ<sub>output</sub>๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ, ์์ค ํจ์๋ฅผ ๋ฆฌ์คํธ ๋๋ ๋์
๋๋ฆฌ ํํ๋ก ์
๋ ฅํ์ฌ ๊ฒฐ๊ณผ์ ์ข
๋ฅ๋ณ๋ก ์๋ก ๋ค๋ฅธ ์์ค ํจ์๋ฅผ ์ ์ฉํ ์ ์์ต๋๋ค. ์ด ๊ฒฝ์ฐ ๋ชจ๋ธ์ด ์ต์ํํ ์์ค๊ฐ์ ๋ชจ๋ ์์ค ํจ์๋ณ ๊ฒฐ๊ณผ๊ฐ์ ํฉ์ด ๋ฉ๋๋ค.
- __metrics__: ํ์ต ๋ฐ ํ๊ฐ ๊ณผ์ ์์ ์ฌ์ฉํ ํ๊ฐ ์งํ<sub>metric</sub>์ ๋ฆฌ์คํธ์
๋๋ค. ๊ฐ์ฅ ๋น๋ฒํ ์ฌ์ฉํ๋ ์งํ๋ ์ ํ๋`metrics=['accuracy']`์
๋๋ค. ๋ชจ๋ธ์ด ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ, ๊ฐ ๊ฒฐ๊ณผ์ ์๋ก ๋ค๋ฅธ ์งํ๋ฅผ ํน์ ํ๋ ค๋ฉด `metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`์ ๊ฐ์ ํ์์ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํฉ๋๋ค. ๋ํ ์ถ๋ ฅ ๊ฒฐ๊ณผ์ ๊ฐ์์ ๊ฐ์ ๊ธธ์ด์ ๋ฆฌ์คํธ๋ฅผ ์ ๋ฌํ๋ ํ์์ผ๋ก๋ ์ง์ ํ ์ ์์ต๋๋ค(์: `metrics=[['accuracy'], ['accuracy', 'mse']]` ํน์ `metrics=['accuracy', ['accuracy', 'mse']]`).
- __loss_weights__: ๋ชจ๋ธ์ด ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ ๊ฐ๊ฐ์ ์์ค๊ฐ์ด ์ ์ฒด ์์ค๊ฐ์ ๋ฏธ์น๋ ์ํฅ์ ์กฐ์ ํ๊ธฐ ์ํ ๊ณ์๋ฅผ ์ค์ ํฉ๋๋ค. ๋ชจ๋ธ์ด ์ต์ํํ ์์ค๊ฐ์ ๊ฐ๊ฐ `loss_weights` ๋งํผ์ ๊ฐ์ค์น๊ฐ ๊ณฑํด์ง ๊ฐ๋ณ ์์ค๊ฐ์ ํฉ์
๋๋ค. `float`ํ์์ ์ค์นผ๋ผ ๊ฐ์ผ๋ก ์ด๋ฃจ์ด์ง ๋ฆฌ์คํธ ๋๋ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅ๋ฐ์ต๋๋ค. ๋ฆฌ์คํธ์ผ ๊ฒฝ์ฐ ๋ชจ๋ธ์ ๊ฒฐ๊ณผ์ ์์์ ๋ง๊ฒ 1:1๋ก ๋์ด๋์ด์ผ ํ๋ฉฐ, ๋์
๋๋ฆฌ์ ๊ฒฝ์ฐ ๊ฒฐ๊ณผ๊ฐ์ ๋ฌธ์์ด ์ด๋ฆ์ `key`๋ก, ์ ์ฉํ ๊ฐ์ค์น์ ์ค์นผ๋ผ ๊ฐ์ `value`๋ก ์ง์ ํด์ผ ํฉ๋๋ค.
- __sample_weight_mode__: `fit` ๋ฉ์๋์ `sample_weight`์ธ์๋ ๋ชจ๋ธ์ ํ์ต๊ณผ์ ์์ ์์ค์ ๊ณ์ฐํ ๋ ์
๋ ฅ๋ ํ๋ จ<sub>train</sub> ์ธํธ์ ํ๋ณธ<sub>sample</sub>๋ค ๊ฐ๊ฐ์ ๋ณ๋์ ๊ฐ์ค์น๋ฅผ ๋ถ์ฌํ๋ ์ญํ ์ ํฉ๋๋ค. ์ด๋ ๋ถ์ฌํ ๊ฐ์ค์น๊ฐ 2D(์๊ณ์ด) ํํ๋ผ๋ฉด `compile`๋ฉ์๋์ `sample_weight_mode`๋ฅผ `"temporal"`๋ก ์ค์ ํด์ผ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ 1D ํํ์ธ `None`์
๋๋ค. ๋ชจ๋ธ์ด ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ ๋์
๋๋ฆฌ ํน์ ๋ฆฌ์คํธ์ ํํ๋ก ์
๋ ฅํ์ฌ ๊ฐ ๊ฒฐ๊ณผ๋ง๋ค ์๋ก ๋ค๋ฅธ `sample_weight_mode`๋ฅผ ์ ์ฉํ ์ ์์ต๋๋ค.
- __weighted_metrics__: ํ์ต ๋ฐ ์ํ<sub>test</sub> ๊ณผ์ ์์ `sample_weight`๋ `class_weight`๋ฅผ ์ ์ฉํ ํ๊ฐ ์งํ๋ค์ ๋ฆฌ์คํธ ํ์์ผ๋ก ์ง์ ํฉ๋๋ค. `sample_weight`์ `class_weight`์ ๋ํด์๋ `fit`๋ฉ์๋์ ์ธ์ ํญ๋ชฉ์ ์ฐธ๊ณ ํ์ญ์์ค.
- __target_tensors__: ๊ธฐ๋ณธ์ ์ผ๋ก ์ผ๋ผ์ค๋ ํ์ต ๊ณผ์ ์์ ์ธ๋ถ๋ก๋ถํฐ ๋ชฉํ๊ฐ<sub>target</sub>์ ์
๋ ฅ๋ฐ์ ์ ์ฅํ ๊ณต๊ฐ์ธ ํ๋ ์ด์คํ๋<sub>placeholder</sub>๋ฅผ ๋ฏธ๋ฆฌ ์์ฑํฉ๋๋ค. ๋ง์ฝ ํ๋ ์ด์คํ๋๋ฅผ ์ฌ์ฉํ๋ ๋์ ํน์ ํ ํ
์๋ฅผ ๋ชฉํ๊ฐ์ผ๋ก ์ฌ์ฉํ๊ณ ์ ํ๋ค๋ฉด `target_tensors`์ธ์๋ฅผ ํตํด ์ง์ ์ง์ ํ๋ฉด ๋ฉ๋๋ค. ์ด๋ ๊ฒ ํ๋ฉด ํ์ต๊ณผ์ ์์ ์ธ๋ถ๋ก๋ถํฐ NumPy ๋ฐ์ดํฐ๋ฅผ ๋ชฉํ๊ฐ์ผ๋ก ์
๋ ฅ๋ฐ์ง ์๊ฒ ๋ฉ๋๋ค. ๋จ์ผ ๊ฒฐ๊ณผ ๋ชจ๋ธ์ผ ๊ฒฝ์ฐ ํ๋์ ํ
์๋ฅผ, ๋ค์ค ๋ชจ๋ธ์ ๊ฒฝ์ฐ๋ ํ
์์ ๋ฆฌ์คํธ ๋๋ ๊ฒฐ๊ณผ๊ฐ์ ๋ฌธ์์ด ์ด๋ฆ์ `key`๋ก, ํ
์๋ฅผ `value`๋ก ์ง์ ํ ๋์
๋๋ฆฌ๋ฅผ `target_tensors`๋ก ์
๋ ฅ๋ฐ์ต๋๋ค.
- __**kwargs__: Theano/CNTK ๋ฐฑ์๋๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ด ์ธ์๋ `K.function`์ ์ ๋ฌ๋ฉ๋๋ค. TensorFlow ๋ฐฑ์๋๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ, ์ธ์๊ฐ `tf.Session.run`์ ์ ๋ฌ๋ฉ๋๋ค.
__์ค๋ฅ__
- __ValueError__: `optimizer`, `loss`, `metrics` ํน์ `sample_weight_mode`์ ์ธ์๊ฐ ์๋ชป๋ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
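A minimal sketch of compiling a two-output functional model with per-output losses, loss weights, and metrics (layer names and values here are illustrative, not taken from the text above):

```python
from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(32,))
hidden = Dense(64, activation='relu')(inputs)
out_a = Dense(1, name='out_a')(hidden)                          # regression head
out_b = Dense(10, activation='softmax', name='out_b')(hidden)   # classification head
model = Model(inputs=inputs, outputs=[out_a, out_b])

model.compile(optimizer='rmsprop',
              loss={'out_a': 'mse', 'out_b': 'categorical_crossentropy'},
              loss_weights={'out_a': 1.0, 'out_b': 0.5},  # total loss = 1.0*mse + 0.5*crossentropy
              metrics={'out_b': ['accuracy']})
```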
----
### fit
```python
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)
```
์ ์ฒด ๋ฐ์ดํฐ์
์ ์ ํด์ง ํ์๋งํผ ๋ฐ๋ณตํ์ฌ ๋ชจ๋ธ์ ํ์ต์ํต๋๋ค.
__์ธ์__
- __x__: ์
๋ ฅ ๋ฐ์ดํฐ๋ก ๋ค์๊ณผ ๊ฐ์ ์ข
๋ฅ๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
- NumPy ๋ฐฐ์ด<sub>array</sub> ๋๋ ๋ฐฐ์ด๊ณผ ๊ฐ์ ํ์์ ๋ฐ์ดํฐ. ๋ค์ค ์
๋ ฅ์ ๊ฒฝ์ฐ ๋ฐฐ์ด์ ๋ฆฌ์คํธ.
- ๋ชจ๋ธ์ด ์ด๋ฆ์ด ์ง์ ๋ ์
๋ ฅ๊ฐ์ ๋ฐ๋ ๊ฒฝ์ฐ ์ด๋ฆ๊ณผ ๋ฐฐ์ด/ํ
์๊ฐ `key`์ `value`๋ก ์ฐ๊ฒฐ๋ ๋์
๋๋ฆฌ.
- ํ์ด์ฌ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค๋ก `(inputs, targets)` ํน์ `(inputs, targets, sample weights)`๋ฅผ ๋ฐํํ๋ ๊ฒ.
- ํ๋ ์์ํฌ(์: TensorFlow)๋ฅผ ํตํด ์ด๋ฏธ ์ ์๋ ํ
์๋ฅผ ์
๋ ฅ๋ฐ๋ ๊ฒฝ์ฐ `None`(๊ธฐ๋ณธ๊ฐ).
- __y__: ๋ชฉํ ๋ฐ์ดํฐ๋ก ๋ค์๊ณผ ๊ฐ์ ์ข
๋ฅ๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
- NumPy ๋ฐฐ์ด ๋๋ ๋ฐฐ์ด๊ณผ ๊ฐ์ ํ์์ ๋ฐ์ดํฐ. ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ ๋ฐฐ์ด์ ๋ฆฌ์คํธ.
- ํ๋ ์์ํฌ(์: TensorFlow)๋ฅผ ํตํด ์ด๋ฏธ ์ ์๋ ํ
์๋ฅผ ์
๋ ฅ๋ฐ๋ ๊ฒฝ์ฐ `None`(๊ธฐ๋ณธ๊ฐ).
- ๊ฒฐ๊ณผ๊ฐ์ ์ด๋ฆ์ด ์ง์ ๋์ด ์๋ ๊ฒฝ์ฐ ์ด๋ฆ๊ณผ ๋ฐฐ์ด/ํ
์๊ฐ `key`์ `value`๋ก ์ฐ๊ฒฐ๋ ๋์
๋๋ฆฌ.
- ๋ง์ฝ `x`์์ ํ์ด์ฌ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค๋ฅผ ์ฌ์ฉํ ๊ฒฝ์ฐ ๋ชฉํ๊ฐ์ด `x`์ ํจ๊ป ์
๋ ฅ๋๋ฏ๋ก ๋ณ๋์ `y`์
๋ ฅ์ ๋ถํ์ํฉ๋๋ค.
- __batch_size__: `int` ํน์ `None`. ์์ค๋ก๋ถํฐ ๊ทธ๋๋์ธํธ๋ฅผ ๊ตฌํ๊ณ ๊ฐ์ค์น๋ฅผ ์
๋ฐ์ดํธํ๋ ๊ณผ์ ํ ๋ฒ์ ์ฌ์ฉํ ํ๋ณธ์ ๊ฐ์์
๋๋ค. ๋ฐ๋ก ์ ํ์ง ์๋ ๊ฒฝ์ฐ `batch_size`๋ ๊ธฐ๋ณธ๊ฐ์ธ 32๊ฐ ๋ฉ๋๋ค. ๋ณ๋์ ์ฌ๋ณผ๋ฆญ ํ
์๋ ์ ๋ค๋ ์ดํฐ, ํน์ `Sequence` ์ธ์คํด์ค๋ก ๋ฐ์ดํฐ๋ฅผ ๋ฐ๋ ๊ฒฝ์ฐ ์ธ์คํด์ค๊ฐ ์๋์ผ๋ก ๋ฐฐ์น๋ฅผ ์์ฑํ๊ธฐ ๋๋ฌธ์ ๋ณ๋์ `batch_size`๋ฅผ ์ง์ ํ์ง ์์ต๋๋ค.
- __epochs__: `int`. ๋ชจ๋ธ์ ๋ฐ์ดํฐ ์ธํธ๋ฅผ ํ์ต์ํฌ ํ์์
๋๋ค. ํ ๋ฒ์ ์ํญ์ ํ๋ จ ๋ฐ์ดํฐ๋ก ์ฃผ์ด์ง ๋ชจ๋ `x`์ `y`๋ฅผ ๊ฐ 1ํ์ฉ ํ์ต์ํค๋ ๊ฒ์ ๋ปํฉ๋๋ค. ํ์์ ์ธ๋ฑ์ค๊ฐ `epochs`๋ก ์ฃผ์ด์ง ๊ฐ์ ๋๋ฌํ ๋๊น์ง ํ์ต์ด ๋ฐ๋ณต๋๋๋ก ๋์ด์๊ธฐ ๋๋ฌธ์ ๋ง์ฝ ์์ ํ์ฐจ๋ฅผ ์ง์ ํ๋ `initial_epoch` ์ธ์์ ๊ฐ์ด ์ฐ์ด๋ ๊ฒฝ์ฐ `epochs`๋ ์ ์ฒด ํ์๊ฐ ์๋ "๋ง์ง๋ง ํ์ฐจ"์ ์๋ฒ์ ๋ปํ๊ฒ ๋ฉ๋๋ค.
- __verbose__: `int`. `0`, `1`, ํน์ `2`. ํ์ต ์ค ์งํ ์ ๋ณด์ ํ๋ฉด ์ถ๋ ฅ ์ฌ๋ถ๋ฅผ ์ค์ ํ๋ ์ธ์์
๋๋ค. `0`์ ํ์ ์์, `1`์ ์งํ ํ์์ค<sub>progress bar</sub> ์ถ๋ ฅ, `2`๋ ์ํญ๋น ํ ์ค์ฉ ์ถ๋ ฅ์ ๋ปํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์
๋๋ค.
- __callbacks__: `keras.callbacks.Callback` ์ธ์คํด์ค์ ๋ฆฌ์คํธ. ํ์ต๊ณผ ๊ฒ์ฆ ๊ณผ์ ์์ ์ ์ฉํ ์ฝ๋ฐฑ์ ๋ฆฌ์คํธ์
๋๋ค. ์์ธํ ์ฌํญ์ [์ฝ๋ฐฑ](/callbacks)์ ์ฐธ์กฐํ์ญ์์ค.
- __validation_split__: 0๊ณผ 1์ฌ์ด์ `float`. ์
๋ ฅํ `x`์ `y` ํ๋ จ ๋ฐ์ดํฐ์ ๋ง์ง๋ง๋ถํฐ ์ง์ ๋ ๋น์จ๋งํผ์ ํ๋ณธ์ ๋ถ๋ฆฌํ์ฌ ๊ฒ์ฆ<sub>validation</sub> ๋ฐ์ดํฐ๋ฅผ ๋ง๋ญ๋๋ค. ์ด ๊ณผ์ ์ ๋ฐ์ดํฐ๋ฅผ ๋ค์๊ธฐ ์ ์ ์คํ๋ฉ๋๋ค. ๊ฒ์ฆ ๋ฐ์ดํฐ๋ ํ์ต์ ์ฌ์ฉ๋์ง ์์ผ๋ฉฐ ๊ฐ ์ํญ์ด ๋๋ ๋๋ง๋ค ๊ฒ์ฆ ์์ค๊ณผ ํ๊ฐ ์งํ๋ฅผ ๊ตฌํ๋๋ฐ ์ฌ์ฉ๋ฉ๋๋ค. `x`๊ฐ ์ ๋๋ ์ดํฐ ํน์ `Sequence`์ธ ๊ฒฝ์ฐ์๋ ์ง์๋์ง ์์ต๋๋ค.
- __validation_data__: ๋งค ์ํญ์ด ๋๋ ๋๋ง๋ค ์์ค ๋ฐ ํ๊ฐ์งํ๋ฅผ ์ธก์ ํ ๊ฒ์ฆ ๋ฐ์ดํฐ๋ฅผ ์ง์ ํฉ๋๋ค. ๊ฒ์ฆ ๋ฐ์ดํฐ๋ ์ค์ง ์ธก์ ์๋ง ํ์ฉ๋๋ฉฐ ํ์ต์๋ ์ฌ์ฉ๋์ง ์์ต๋๋ค. `validation_split`์ธ์์ ๊ฐ์ด ์ง์ ๋ ๊ฒฝ์ฐ `validation_split`์ธ์๋ฅผ ๋ฌด์ํ๊ณ ์ ์ฉ๋ฉ๋๋ค. `validation_data`๋ก๋ ๋ค์๊ณผ ๊ฐ์ ์ข
๋ฅ๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
- NumPy ๋ฐฐ์ด ๋๋ ํ
์๋ก ์ด๋ฃจ์ด์ง `(x_val, y_val)` ํํ.
- NumPy ๋ฐฐ์ด๋ก ์ด๋ฃจ์ด์ง `(x_val, y_val, val_sample_weights)` ํํ.
- ์์ ๊ฐ์ด ํํ์ ์
๋ ฅํ๋ ๊ฒฝ์ฐ์๋ `batch_size`๋ ๊ฐ์ด ๋ช
์ํด์ผ ํฉ๋๋ค.
- ๋ฐ์ดํฐ ์ธํธ ๋๋ ๋ฐ์ดํฐ ์ธํธ์ ์ดํฐ๋ ์ดํฐ. ์ด ๊ฒฝ์ฐ `validation_steps`๋ฅผ ๊ฐ์ด ๋ช
์ํด์ผ ํฉ๋๋ค.
- __shuffle__: `bool` ๋๋ ๋ฌธ์์ด `'batch'`. ๋ถ๋ฆฌ์ธ ์
๋ ฅ์ ๊ฒฝ์ฐ ๊ฐ ์ํญ์ ์์ํ๊ธฐ ์ ์ ํ๋ จ ๋ฐ์ดํฐ๋ฅผ ๋ค์์์ง๋ฅผ ๊ฒฐ์ ํฉ๋๋ค. `'batch'`์ ๊ฒฝ์ฐ HDF5 ๋ฐ์ดํฐ์ ์ ์ฝ์ ํด๊ฒฐํ๊ธฐ ์ํ ์ค์ ์ผ๋ก ๊ฐ ๋ฐฐ์น ํฌ๊ธฐ ์์์ ๋ฐ์ดํฐ๋ฅผ ๋ค์์ต๋๋ค. `steps_per_epoch`๊ฐ์ด `None`์ด ์๋ ๊ฒฝ์ฐ `shuffle`์ธ์๋ ๋ฌดํจํ๋ฉ๋๋ค.
- __class_weight__: ํ์ํ ๊ฒฝ์ฐ์๋ง ์ฌ์ฉํ๋ ์ธ์๋ก, ๊ฐ ํด๋์ค ์ธ๋ฑ์ค๋ฅผ `key`๋ก, ๊ฐ์ค์น๋ฅผ `value`๋ก ๊ฐ๋ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํฉ๋๋ค. ์ธ๋ฑ์ค๋ ์ ์, ๊ฐ์ค์น๋ ๋ถ๋์์์ ๊ฐ์ ๊ฐ์ต๋๋ค. ์ฃผ๋ก ํ๋ จ ๋ฐ์ดํฐ์ ํด๋์ค ๋ถํฌ๊ฐ ๋ถ๊ท ํํ ๋ ์ด๋ฅผ ์ํํ๊ธฐ ์ํด ์ฌ์ฉํ๋ฉฐ, ์์ค ๊ณ์ฐ ๊ณผ์ ์์ ํ๋ณธ ์๊ฐ ๋ ์ ์ ํด๋์ค์ ์ํฅ์ ๊ฐ์ค์น๋ฅผ ํตํด ๋์ด์ฌ๋ฆฝ๋๋ค. ํ์ต ๊ณผ์ ์์๋ง ์ฌ์ฉ๋ฉ๋๋ค.
- __sample_weight__: ํน์ ํ ํ๋ จ ํ๋ณธ์ด ์์ค ํจ์์์ ๋ ํฐ ์ํฅ์ ์ฃผ๋๋ก ํ๊ณ ์ ํ ๊ฒฝ์ฐ์ ์ฌ์ฉํ๋ ์ธ์์
๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ํ๋ณธ๊ณผ ๋์ผํ ๊ธธ์ด์ 1D NumPy ๋ฐฐ์ด์, ์๊ณ์ด ๋ฐ์ดํฐ์ ๊ฒฝ์ฐ `(samples, sequence_length)`๋ก ์ด๋ฃจ์ด์ง 2D ๋ฐฐ์ด์ ์
๋ ฅํ์ฌ ์์ค ๊ฐ์ค์น์ ํ๋ณธ์ด 1:1๋ก ์ง์ง์ด์ง๊ฒ๋ ํฉ๋๋ค. 2D ๋ฐฐ์ด์ ์
๋ ฅํ๋ ๊ฒฝ์ฐ ๋ฐ๋์ `compile()` ๋จ๊ณ์์ `sample_weight_mode="temporal"`๋ก ์ง์ ํด์ผ ํฉ๋๋ค. `x`๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `Sequence`์ธ์คํด์ค์ธ ๊ฒฝ์ฐ๋ ์์ค ๊ฐ์ค์น๋ฅผ `sample_weight`์ธ์ ๋์ `x`์ ์ธ ๋ฒ์งธ ๊ตฌ์ฑ์์๋ก ์
๋ ฅํด์ผ ์ ์ฉ๋ฉ๋๋ค.
- __initial_epoch__: `int`. ํน์ ํ ์ํญ์์ ํ์ต์ ์์ํ๋๋ก ์์ ํ์ฐจ๋ฅผ ์ง์ ํฉ๋๋ค. ์ด์ ์ ํ์ต์ ์ด์ด์ ํ ๋ ์ ์ฉํฉ๋๋ค.
- __steps_per_epoch__: `int` ํน์ `None`. 1ํ์ ์ํญ์ ์ด๋ฃจ๋ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ์ ์ํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `None`์
๋๋ค. ํ๋ ์์ํฌ์์ ์ด๋ฏธ ์ ์๋ ํ
์(์: TensorFlow ๋ฐ์ดํฐ ํ
์)๋ฅผ ํ๋ จ ๋ฐ์ดํฐ๋ก ์ฌ์ฉํ ๋ `None`์ ์ง์ ํ๋ฉด ์๋์ผ๋ก ๋ฐ์ดํฐ์ ํ๋ณธ ์๋ฅผ ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋์ด ์ฌ๋ฆผํ ๊ฐ์ ๊ฐ๊ฒ ๋๋ฉฐ, ๊ฐ์ ๊ตฌํ ์ ์๋ ๊ฒฝ์ฐ `1`์ด ๋ฉ๋๋ค.
- __validation_steps__: ๋ค์์ ๋ ๊ฐ์ง ๊ฒฝ์ฐ์ ํํด์ ์ ํจํ ์ธ์์
๋๋ค.
- `steps_per_epoch`๊ฐ์ด ํน์ ๋ ๊ฒฝ์ฐ, ๋งค ์ํญ์ ๊ฒ์ฆ์ ์ฌ์ฉ๋ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ํน์ ํฉ๋๋ค.
- `validation_data`๋ฅผ ์ฌ์ฉํ๋ฉฐ ์ด์ ์ ๋๋ ์ดํฐ ํ์์ ๊ฐ์ ์
๋ ฅํ ๊ฒฝ์ฐ, ๋งค ์ํญ์ ๊ฒ์ฆ์ ์ฌ์ฉํ๊ธฐ ์ํด ์ ๋๋ ์ดํฐ๋ก๋ถํฐ ์์ฑํ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ํน์ ํฉ๋๋ค.
- __validation_freq__: ๊ฒ์ฆ ๋ฐ์ดํฐ๊ฐ ์์ ๊ฒฝ์ฐ์ ํํด ์ ํจํ ์ธ์์
๋๋ค. ์ ์ ๋๋ ๋ฆฌ์คํธ/ํํ/์ธํธ ํ์
์ ์
๋ ฅ์ ๋ฐ์ต๋๋ค. ์ ์ ์
๋ ฅ์ ๊ฒฝ์ฐ ๋ช ํ์ ์ํญ๋ง๋ค 1ํ ๊ฒ์ฆํ ์ง๋ฅผ ์ ํฉ๋๋ค. ์์ปจ๋ `validation_freq=2`์ ๊ฒฝ์ฐ ๋งค 2ํ ์ํญ๋ง๋ค 1ํ ๊ฒ์ฆํฉ๋๋ค. ๋ง์ฝ ๋ฆฌ์คํธ๋ ํํ, ์ธํธ ํํ๋ก ์
๋ ฅ๋ ๊ฒฝ์ฐ ์
๋ ฅ๊ฐ ์์ ์ง์ ๋ ํ์ฐจ์ ํํด ๊ฒ์ฆ์ ์คํํฉ๋๋ค. ์๋ฅผ ๋ค์ด `validation_freq=[1, 2, 10]`์ ๊ฒฝ์ฐ 1, 2, 10๋ฒ์งธ ์ํญ์์ ๊ฒ์ฆํฉ๋๋ค.
- __max_queue_size__: `int`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. ์ ๋๋ ์ดํฐ ๋๊ธฐ์ด<sub>queue</sub>์ ์ต๋ ํฌ๊ธฐ๋ฅผ ์ง์ ํ๋ฉฐ, ๋ฏธ์ ์ธ ๊ฒฝ์ฐ ๊ธฐ๋ณธ๊ฐ `10`์ด ์ ์ฉ๋ฉ๋๋ค.
- __workers__: `int`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. ํ๋ก์ธ์ค ๊ธฐ๋ฐ์ผ๋ก ๋ค์ค ์ค๋ ๋ฉ์ ํ ๋ ์ ๋๋ ์ดํฐ ์๋์ ์ฌ์ฉํ ํ๋ก์ธ์ค์ ์ต๋ ๊ฐ์๋ฅผ ์ค์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์ด๋ฉฐ, `0`์ ์
๋ ฅํ ๊ฒฝ์ฐ ๋ฉ์ธ ์ค๋ ๋์์ ์ ๋๋ ์ดํฐ๋ฅผ ์๋์ํต๋๋ค.
- __use_multiprocessing__: `bool`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. `True`์ธ ๊ฒฝ์ฐ ํ๋ก์ธ์ค ๊ธฐ๋ฐ ๋ค์ค ์ค๋ ๋ฉ์ ์ฌ์ฉํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `False`์
๋๋ค. ์ด ์ค์ ์ ์ฌ์ฉํ ๊ฒฝ์ฐ ์ ๋๋ ์ดํฐ์์ ๊ฐ์ฒด ์ง๋ ฌํ<sub>pickle</sub>๊ฐ ๋ถ๊ฐ๋ฅํ ์ธ์๋ค์ ์ฌ์ฉํ์ง ์๋๋ก ํฉ๋๋ค (๋ฉํฐํ๋ก์ธ์ฑ ๊ณผ์ ์์ ์์ ํ๋ก์ธ์ค๋ก ์ ๋ฌ๋์ง ์๊ธฐ ๋๋ฌธ์
๋๋ค).
- __**kwargs__: ์ด์ ๋ฒ์ ๊ณผ์ ํธํ์ฑ์ ์ํด ์ฌ์ฉ๋ฉ๋๋ค.
__๋ฐํ๊ฐ__
`History` ๊ฐ์ฒด๋ฅผ ๋ฐํํฉ๋๋ค. `History.history` ์์ฑ<sub>attribute</sub>์ ๊ฐ ์ํญ๋ง๋ค ๊ณ์ฐ๋ ํ์ต ์์ค ๋ฐ ํ๊ฐ ์งํ๊ฐ ์์๋๋ก ๊ธฐ๋ก๋ ๊ฐ์
๋๋ค. ๊ฒ์ฆ ๋ฐ์ดํฐ๋ฅผ ์ ์ฉํ ๊ฒฝ์ฐ ํด๋น ์์ค ๋ฐ ์งํ๋ ํจ๊ป ๊ธฐ๋ก๋ฉ๋๋ค.
__์ค๋ฅ__
- __RuntimeError__: ๋ชจ๋ธ์ด ์ปดํ์ผ๋์ง ์์ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
- __ValueError__: ๋ชจ๋ธ์ ์ ์๋ ์
๋ ฅ๊ณผ ์ค์ ์
๋ ฅ์ด ์ผ์นํ์ง ์์ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
----
### evaluate
```python
evaluate(x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)
```
์ํ ๋ชจ๋์์ ๋ชจ๋ธ์ ์์ค ๋ฐ ํ๊ฐ ์งํ ๊ฐ์ ๊ตฌํฉ๋๋ค. ๊ณ์ฐ์ ๋ฐฐ์น ๋จ์๋ก ์คํ๋ฉ๋๋ค.
__์ธ์__
- __x__: ์
๋ ฅ ๋ฐ์ดํฐ๋ก ๋ค์๊ณผ ๊ฐ์ ์ข
๋ฅ๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
- NumPy ๋ฐฐ์ด ๋๋ ๋ฐฐ์ด๊ณผ ๊ฐ์ ํ์์ ๋ฐ์ดํฐ. ๋ค์ค ์
๋ ฅ์ ๊ฒฝ์ฐ ๋ฐฐ์ด์ ๋ฆฌ์คํธ.
- ๋ชจ๋ธ์ด ์ด๋ฆ์ด ์ง์ ๋ ์
๋ ฅ๊ฐ์ ๋ฐ๋ ๊ฒฝ์ฐ ์ด๋ฆ๊ณผ ๋ฐฐ์ด/ํ
์๊ฐ `key`์ `value`๋ก ์ฐ๊ฒฐ๋ ๋์
๋๋ฆฌ.
- ํ์ด์ฌ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค๋ก `(inputs, targets)` ํน์ `(inputs, targets, sample weights)`๋ฅผ ๋ฐํํ๋ ๊ฒ.
- ํ๋ ์์ํฌ(์: TensorFlow)๋ฅผ ํตํด ์ด๋ฏธ ์ ์๋ ํ
์๋ฅผ ์
๋ ฅ๋ฐ๋ ๊ฒฝ์ฐ `None`(๊ธฐ๋ณธ๊ฐ).
- __y__: ๋ชฉํ ๋ฐ์ดํฐ๋ก ๋ค์๊ณผ ๊ฐ์ ์ข
๋ฅ๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
- NumPy ๋ฐฐ์ด ๋๋ ๋ฐฐ์ด๊ณผ ๊ฐ์ ํ์์ ๋ฐ์ดํฐ. ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ ๋ฐฐ์ด์ ๋ฆฌ์คํธ.
- ํ๋ ์์ํฌ(์: TensorFlow)๋ฅผ ํตํด ์ด๋ฏธ ์ ์๋ ํ
์๋ฅผ ์
๋ ฅ๋ฐ๋ ๊ฒฝ์ฐ `None`(๊ธฐ๋ณธ๊ฐ).
- ๊ฒฐ๊ณผ๊ฐ์ ์ด๋ฆ์ด ์ง์ ๋์ด ์๋ ๊ฒฝ์ฐ ์ด๋ฆ๊ณผ ๋ฐฐ์ด/ํ
์๊ฐ `key`์ `value`๋ก ์ฐ๊ฒฐ๋ ๋์
๋๋ฆฌ.
- ๋ง์ฝ `x`์์ ํ์ด์ฌ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค๋ฅผ ์ฌ์ฉํ ๊ฒฝ์ฐ ๋ชฉํ๊ฐ์ด `x`์ ํจ๊ป ์
๋ ฅ๋๋ฏ๋ก ๋ณ๋์ `y`์
๋ ฅ์ ๋ถํ์ํฉ๋๋ค.
- __batch_size__: `int` ํน์ `None`. ํ ๋ฒ์ ํ๊ฐ๋ ํ๋ณธ์ ๊ฐ์์
๋๋ค. ๋ฐ๋ก ์ ํ์ง ์๋ ๊ฒฝ์ฐ `batch_size`๋ ๊ธฐ๋ณธ๊ฐ์ธ 32๊ฐ ๋ฉ๋๋ค. ๋ณ๋์ ์ฌ๋ณผ๋ฆญ ํ
์๋ ์ ๋ค๋ ์ดํฐ, ํน์ `Sequence` ์ธ์คํด์ค๋ก ๋ฐ์ดํฐ๋ฅผ ๋ฐ๋ ๊ฒฝ์ฐ ์ธ์คํด์ค๊ฐ ์๋์ผ๋ก ๋ฐฐ์น๋ฅผ ์์ฑํ๊ธฐ ๋๋ฌธ์ ๋ณ๋์ `batch_size`๋ฅผ ์ง์ ํ์ง ์์ต๋๋ค.
- __verbose__: `0` ๋๋ `1`. ์งํ ์ ๋ณด์ ํ๋ฉด ์ถ๋ ฅ ์ฌ๋ถ๋ฅผ ์ค์ ํ๋ ์ธ์์
๋๋ค. `0`์ ํ์ ์์, `1`์ ์งํ ํ์์ค ์ถ๋ ฅ์ ๋ปํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์
๋๋ค.
- __sample_weight__: ํน์ ํ ์ํ ํ๋ณธ์ด ์์ค ํจ์์์ ๋ ํฐ ์ํฅ์ ์ฃผ๋๋ก ํ๊ณ ์ ํ ๊ฒฝ์ฐ์ ์ฌ์ฉํ๋ ์ธ์์
๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ํ๋ณธ๊ณผ ๋์ผํ ๊ธธ์ด์ 1D NumPy ๋ฐฐ์ด์, ์๊ณ์ด ๋ฐ์ดํฐ์ ๊ฒฝ์ฐ `(samples, sequence_length)`๋ก ์ด๋ฃจ์ด์ง 2D ๋ฐฐ์ด์ ์
๋ ฅํ์ฌ ์์ค ๊ฐ์ค์น์ ํ๋ณธ์ด 1:1๋ก ์ง์ง์ด์ง๊ฒ๋ ํฉ๋๋ค. 2D ๋ฐฐ์ด์ ์
๋ ฅํ๋ ๊ฒฝ์ฐ ๋ฐ๋์ `compile()` ๋จ๊ณ์์ `sample_weight_mode="temporal"`๋ก ์ง์ ํด์ผ ํฉ๋๋ค.
- __steps__: `int` ํน์ `None`. ํ๊ฐ๋ฅผ ์๋ฃํ๊ธฐ๊น์ง์ ๋จ๊ณ(๋ฐฐ์น) ๊ฐ์๋ฅผ ์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `None`์ผ๋ก, ์ด ๊ฒฝ์ฐ ๊ณ ๋ ค๋์ง ์์ต๋๋ค.
- __callbacks__: `keras.callbacks.Callback` ์ธ์คํด์ค์ ๋ฆฌ์คํธ. ํ๊ฐ ๊ณผ์ ์์ ์ ์ฉํ ์ฝ๋ฐฑ์ ๋ฆฌ์คํธ์
๋๋ค. [์ฝ๋ฐฑ](/callbacks)์ ์ฐธ์กฐํ์ญ์์ค.
- __max_queue_size__: `int`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. ์ ๋๋ ์ดํฐ ๋๊ธฐ์ด์ ์ต๋ ํฌ๊ธฐ๋ฅผ ์ง์ ํ๋ฉฐ, ๋ฏธ์ ์ธ ๊ฒฝ์ฐ ๊ธฐ๋ณธ๊ฐ `10`์ด ์ ์ฉ๋ฉ๋๋ค.
- __workers__: `int`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. ํ๋ก์ธ์ค ๊ธฐ๋ฐ์ผ๋ก ๋ค์ค ์ค๋ ๋ฉ์ ํ ๋ ์ ๋๋ ์ดํฐ ์๋์ ์ฌ์ฉํ ํ๋ก์ธ์ค์ ์ต๋ ๊ฐ์๋ฅผ ์ค์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์ด๋ฉฐ, `0`์ ์
๋ ฅํ ๊ฒฝ์ฐ ๋ฉ์ธ ์ค๋ ๋์์ ์ ๋๋ ์ดํฐ๋ฅผ ์๋์ํต๋๋ค.
- __use_multiprocessing__: `bool`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. `True`์ธ ๊ฒฝ์ฐ ํ๋ก์ธ์ค ๊ธฐ๋ฐ ๋ค์ค ์ค๋ ๋ฉ์ ์ฌ์ฉํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `False`์
๋๋ค. ์ด ์ค์ ์ ์ฌ์ฉํ ๊ฒฝ์ฐ ์ ๋๋ ์ดํฐ์์ ๊ฐ์ฒด ์ง๋ ฌํ๊ฐ ๋ถ๊ฐ๋ฅํ ์ธ์๋ค์ ์ฌ์ฉํ์ง ์๋๋ก ํฉ๋๋ค (๋ฉํฐํ๋ก์ธ์ฑ ๊ณผ์ ์์ ์์ ํ๋ก์ธ์ค๋ก ์ ๋ฌ๋์ง ์๊ธฐ ๋๋ฌธ์
๋๋ค).
__๋ฐํ๊ฐ__
์ ์ฉํ ๋ชจ๋ธ์ด ๋จ์ผํ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ฉฐ ๋ณ๋์ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ์ง ์๋ ๊ฒฝ์ฐ ์ํ ์์ค์ ์ค์นผ๋ผ ๊ฐ์ ์์ฑํฉ๋๋ค. ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๋ชจ๋ธ์ด๊ฑฐ๋ ์ฌ๋ฌ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ค์นผ๋ผ ๊ฐ์ ๋ฆฌ์คํธ๋ฅผ ์์ฑํฉ๋๋ค. `model.metrics_names` ์์ฑ์ ๊ฐ ์ค์นผ๋ผ ๊ฒฐ๊ณผ๊ฐ์ ํ ๋น๋ ์ด๋ฆ์ ๋ณด์ฌ์ค๋๋ค.
__์ค๋ฅ__
- __ValueError__: ์๋ชป๋ ์ธ์ ์ ๋ฌ์์ ๋ฐ์ํฉ๋๋ค.
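A minimal evaluation sketch (model, data, and shapes are illustrative); `model.metrics_names` maps each returned scalar to its name:

```python
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(64, activation='relu', input_shape=(32,)),
                    Dense(10, activation='softmax')])
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

x_test = np.random.random((200, 32))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(200,)), 10)

results = model.evaluate(x_test, y_test, batch_size=32, verbose=0)
print(model.metrics_names)   # e.g. ['loss', 'acc']
print(results)               # [test loss, test accuracy]
```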
----
### predict
```python
predict(x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)
```
๋ชจ๋ธ์ ํ๋ณธ์ ์
๋ ฅํ์ฌ ์์ธก๊ฐ์ ์์ฑํฉ๋๋ค. ๊ณ์ฐ์ ๋ฐฐ์น ๋จ์๋ก ์คํ๋ฉ๋๋ค.
__์ธ์__
- __x__: ์
๋ ฅ ๋ฐ์ดํฐ๋ก ๋ค์๊ณผ ๊ฐ์ ์ข
๋ฅ๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
- NumPy ๋ฐฐ์ด ๋๋ ๋ฐฐ์ด๊ณผ ๊ฐ์ ํ์์ ๋ฐ์ดํฐ. ๋ค์ค ์
๋ ฅ์ ๊ฒฝ์ฐ ๋ฐฐ์ด์ ๋ฆฌ์คํธ.
- ๋ชจ๋ธ์ด ์ด๋ฆ์ด ์ง์ ๋ ์
๋ ฅ๊ฐ์ ๋ฐ๋ ๊ฒฝ์ฐ ์ด๋ฆ๊ณผ ๋ฐฐ์ด/ํ
์๊ฐ `key`์ `value`๋ก ์ฐ๊ฒฐ๋ ๋์
๋๋ฆฌ.
- ํ์ด์ฌ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค๋ก `(inputs, targets)` ํน์ `(inputs, targets, sample weights)`๋ฅผ ๋ฐํํ๋ ๊ฒ.
- ํ๋ ์์ํฌ(์: TensorFlow)๋ฅผ ํตํด ์ด๋ฏธ ์ ์๋ ํ
์๋ฅผ ์
๋ ฅ๋ฐ๋ ๊ฒฝ์ฐ `None`(๊ธฐ๋ณธ๊ฐ).
- __batch_size__: `int` ํน์ `None`. ํ ๋ฒ์ ์์ธก๋ ํ๋ณธ์ ๊ฐ์์
๋๋ค. ๋ฐ๋ก ์ ํ์ง ์๋ ๊ฒฝ์ฐ `batch_size`๋ ๊ธฐ๋ณธ๊ฐ์ธ 32๊ฐ ๋ฉ๋๋ค. ๋ณ๋์ ์ฌ๋ณผ๋ฆญ ํ
์๋ ์ ๋ค๋ ์ดํฐ, ํน์ `Sequence` ์ธ์คํด์ค๋ก ๋ฐ์ดํฐ๋ฅผ ๋ฐ๋ ๊ฒฝ์ฐ ์ธ์คํด์ค๊ฐ ์๋์ผ๋ก ๋ฐฐ์น๋ฅผ ์์ฑํ๊ธฐ ๋๋ฌธ์ ๋ณ๋์ `batch_size`๋ฅผ ์ง์ ํ์ง ์์ต๋๋ค.
- __verbose__: `0` ๋๋ `1`. ์งํ ์ ๋ณด์ ํ๋ฉด ์ถ๋ ฅ ์ฌ๋ถ๋ฅผ ์ค์ ํ๋ ์ธ์์
๋๋ค. `0`์ ํ์ ์์, `1`์ ์งํ ํ์์ค ์ถ๋ ฅ์ ๋ปํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์
๋๋ค.
- __steps__: `int` ํน์ `None`. ์์ธก์ ์๋ฃํ๊ธฐ๊น์ง์ ๋จ๊ณ(๋ฐฐ์น) ๊ฐ์๋ฅผ ์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `None`์ผ๋ก, ์ด ๊ฒฝ์ฐ ๊ณ ๋ ค๋์ง ์์ต๋๋ค.
- __callbacks__: `keras.callbacks.Callback` ์ธ์คํด์ค์ ๋ฆฌ์คํธ. ์์ธก ๊ณผ์ ์์ ์ ์ฉํ ์ฝ๋ฐฑ์ ๋ฆฌ์คํธ์
๋๋ค. [์ฝ๋ฐฑ](/callbacks)์ ์ฐธ์กฐํ์ญ์์ค.
- __max_queue_size__: `int`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. ์ ๋๋ ์ดํฐ ๋๊ธฐ์ด์ ์ต๋ ํฌ๊ธฐ๋ฅผ ์ง์ ํ๋ฉฐ, ๋ฏธ์ ์ธ ๊ฒฝ์ฐ ๊ธฐ๋ณธ๊ฐ `10`์ด ์ ์ฉ๋ฉ๋๋ค.
- __workers__: `int`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. ํ๋ก์ธ์ค ๊ธฐ๋ฐ์ผ๋ก ๋ค์ค ์ค๋ ๋ฉ์ ํ ๋ ์ ๋๋ ์ดํฐ ์๋์ ์ฌ์ฉํ ํ๋ก์ธ์ค์ ์ต๋ ๊ฐ์๋ฅผ ์ค์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์ด๋ฉฐ, `0`์ ์
๋ ฅํ ๊ฒฝ์ฐ ๋ฉ์ธ ์ค๋ ๋์์ ์ ๋๋ ์ดํฐ๋ฅผ ์๋์ํต๋๋ค.
- __use_multiprocessing__: `bool`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. `True`์ธ ๊ฒฝ์ฐ ํ๋ก์ธ์ค ๊ธฐ๋ฐ ๋ค์ค ์ค๋ ๋ฉ์ ์ฌ์ฉํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `False`์
๋๋ค. ์ด ์ค์ ์ ์ฌ์ฉํ ๊ฒฝ์ฐ ์ ๋๋ ์ดํฐ์์ ๊ฐ์ฒด ์ง๋ ฌํ๊ฐ ๋ถ๊ฐ๋ฅํ ์ธ์๋ค์ ์ฌ์ฉํ์ง ์๋๋ก ํฉ๋๋ค (๋ฉํฐํ๋ก์ธ์ฑ ๊ณผ์ ์์ ์์ ํ๋ก์ธ์ค๋ก ์ ๋ฌ๋์ง ์๊ธฐ ๋๋ฌธ์
๋๋ค).
__๋ฐํ๊ฐ__
์์ธก๊ฐ์ NumPy ๋ฐฐ์ด.
__์ค๋ฅ__
- __ValueError__: ๋ชจ๋ธ์ ์ ์๋ ์
๋ ฅ๊ณผ ์ค์ ์
๋ ฅ์ด ์ผ์นํ์ง ์์ ๊ฒฝ์ฐ, ๋๋ ์ํ ์ ์ฅ ๋ชจ๋ธ<sub>stateful model</sub>์ ์ฌ์ฉํ์ฌ ์์ธกํ ๋ ์
๋ ฅํ ํ๋ณธ์ ๊ฐ์๊ฐ ๋ฐฐ์น ํฌ๊ธฐ์ ๋ฐฐ์๊ฐ ์๋ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
----
### train_on_batch
```python
train_on_batch(x, y, sample_weight=None, class_weight=None, reset_metrics=True)
```
ํ๋์ ๋ฐ์ดํฐ ๋ฐฐ์น์ ๋ํด์ ๊ทธ๋๋์ธํธ๋ฅผ ํ ๋ฒ ์ ์ฉํฉ๋๋ค.
__์ธ์__
- __x__: ๊ธฐ๋ณธ์ ์ผ๋ก ํ๋ จ ๋ฐ์ดํฐ์ NumPy ๋ฐฐ์ด์, ๋ชจ๋ธ์ด ๋ค์ค ์
๋ ฅ์ ๋ฐ๋ ๊ฒฝ์ฐ NumPy ๋ฐฐ์ด์ ๋ฆฌ์คํธ๋ฅผ ์
๋ ฅํฉ๋๋ค. ๋ชจ๋ธ์ ๋ชจ๋ ์
๋ ฅ์ ์ด๋ฆ์ด ๋ฐฐ์ ๋ ๊ฒฝ์ฐ ์
๋ ฅ๊ฐ ์ด๋ฆ์ `key`๋ก, NumPy ๋ฐฐ์ด์ `value`๋ก ๋ฌถ์ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํ ์ ์์ต๋๋ค.
- __y__: ๊ธฐ๋ณธ์ ์ผ๋ก ๋ชฉํ ๋ฐ์ดํฐ์ NumPy ๋ฐฐ์ด์, ๋ชจ๋ธ์ด ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ NumPy ๋ฐฐ์ด์ ๋ฆฌ์คํธ๋ฅผ ์
๋ ฅํฉ๋๋ค. ๋ชจ๋ธ์ ๋ชจ๋ ๊ฒฐ๊ณผ์ ์ด๋ฆ์ด ๋ฐฐ์ ๋ ๊ฒฝ์ฐ ๊ฒฐ๊ณผ๊ฐ ์ด๋ฆ์ `key`๋ก, NumPy ๋ฐฐ์ด์ `value`๋ก ๋ฌถ์ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํ ์ ์์ต๋๋ค.
- __sample_weight__: ํน์ ํ ํ๋ จ ํ๋ณธ์ด ์์ค ํจ์์์ ๋ ํฐ ์ํฅ์ ์ฃผ๋๋ก ํ๊ณ ์ ํ ๊ฒฝ์ฐ์ ์ฌ์ฉํ๋ ์ธ์์
๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ํ๋ณธ๊ณผ ๋์ผํ ๊ธธ์ด์ 1D NumPy ๋ฐฐ์ด์, ์๊ณ์ด ๋ฐ์ดํฐ์ ๊ฒฝ์ฐ `(samples, sequence_length)`๋ก ์ด๋ฃจ์ด์ง 2D ๋ฐฐ์ด์ ์
๋ ฅํ์ฌ ์์ค ๊ฐ์ค์น์ ํ๋ณธ์ด 1:1๋ก ์ง์ง์ด์ง๊ฒ๋ ํฉ๋๋ค. 2D ๋ฐฐ์ด์ ์
๋ ฅํ๋ ๊ฒฝ์ฐ ๋ฐ๋์ `compile()` ๋จ๊ณ์์ `sample_weight_mode="temporal"`๋ก ์ง์ ํด์ผ ํฉ๋๋ค. `x`๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `Sequence`์ธ์คํด์ค์ธ ๊ฒฝ์ฐ๋ ์์ค ๊ฐ์ค์น๋ฅผ `sample_weight`์ธ์ ๋์ `x`์ ์ธ ๋ฒ์งธ ๊ตฌ์ฑ์์๋ก ์
๋ ฅํด์ผ ์ ์ฉ๋ฉ๋๋ค.
- __class_weight__: ํ์ํ ๊ฒฝ์ฐ์๋ง ์ฌ์ฉํ๋ ์ธ์๋ก, ๊ฐ ํด๋์ค ์ธ๋ฑ์ค๋ฅผ `key`๋ก, ๊ฐ์ค์น๋ฅผ `value`๋ก ๊ฐ๋ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํฉ๋๋ค. ์ธ๋ฑ์ค๋ ์ ์, ๊ฐ์ค์น๋ ๋ถ๋์์์ ๊ฐ์ ๊ฐ์ต๋๋ค. ์ฃผ๋ก ํ๋ จ ๋ฐ์ดํฐ์ ํด๋์ค ๋ถํฌ๊ฐ ๋ถ๊ท ํํ ๋ ์ด๋ฅผ ์ํํ๊ธฐ ์ํด ์ฌ์ฉํ๋ฉฐ, ์์ค ๊ณ์ฐ ๊ณผ์ ์์ ํ๋ณธ ์๊ฐ ๋ ์ ์ ํด๋์ค์ ์ํฅ์ ๊ฐ์ค์น๋ฅผ ํตํด ๋์ด์ฌ๋ฆฝ๋๋ค. ํ์ต ๊ณผ์ ์์๋ง ์ฌ์ฉ๋ฉ๋๋ค.
- __reset_metrics__: `True`์ธ ๊ฒฝ์ฐ ์ค์ง ํด๋น๋๋ ํ๋์ ๋ฐฐ์น๋ง์ ๊ณ ๋ คํ ํ๊ฐ ์งํ๊ฐ ์์ฑ๋ฉ๋๋ค. `False`์ธ ๊ฒฝ์ฐ ํ๊ฐ ์งํ๋ ์ดํ์ ์
๋ ฅ๋ ๋ค๋ฅธ ๋ฐฐ์น๋ค๊น์ง ๋์ ๋ฉ๋๋ค.
__๋ฐํ๊ฐ__
์ ์ฉํ ๋ชจ๋ธ์ด ๋จ์ผํ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ฉฐ ๋ณ๋์ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ์ง ์๋ ๊ฒฝ์ฐ ํ๋ จ ์์ค์ ์ค์นผ๋ผ ๊ฐ์ ์์ฑํฉ๋๋ค. ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๋ชจ๋ธ์ด๊ฑฐ๋ ์ฌ๋ฌ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ค์นผ๋ผ ๊ฐ์ ๋ฆฌ์คํธ๋ฅผ ์์ฑํฉ๋๋ค. `model.metrics_names` ์์ฑ์ ๊ฐ ์ค์นผ๋ผ ๊ฒฐ๊ณผ๊ฐ์ ํ ๋น๋ ์ด๋ฆ์ ๋ณด์ฌ์ค๋๋ค.
----
### test_on_batch
```python
test_on_batch(x, y, sample_weight=None, reset_metrics=True)
```
ํ๋์ ํ๋ณธ ๋ฐฐ์น์ ๋ํด์ ๋ชจ๋ธ์ ํ
์คํธํฉ๋๋ค.
__์ธ์__
- __x__: ๊ธฐ๋ณธ์ ์ผ๋ก ํ๋ จ ๋ฐ์ดํฐ์ NumPy ๋ฐฐ์ด์, ๋ชจ๋ธ์ด ๋ค์ค ์
๋ ฅ์ ๋ฐ๋ ๊ฒฝ์ฐ NumPy ๋ฐฐ์ด์ ๋ฆฌ์คํธ๋ฅผ ์
๋ ฅํฉ๋๋ค. ๋ชจ๋ธ์ ๋ชจ๋ ์
๋ ฅ์ ์ด๋ฆ์ด ๋ฐฐ์ ๋ ๊ฒฝ์ฐ ์
๋ ฅ๊ฐ ์ด๋ฆ์ `key`๋ก, NumPy ๋ฐฐ์ด์ `value`๋ก ๋ฌถ์ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํ ์ ์์ต๋๋ค.
- __y__: ๊ธฐ๋ณธ์ ์ผ๋ก ๋ชฉํ ๋ฐ์ดํฐ์ NumPy ๋ฐฐ์ด์, ๋ชจ๋ธ์ด ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๊ฒฝ์ฐ NumPy ๋ฐฐ์ด์ ๋ฆฌ์คํธ๋ฅผ ์
๋ ฅํฉ๋๋ค. ๋ชจ๋ธ์ ๋ชจ๋ ๊ฒฐ๊ณผ์ ์ด๋ฆ์ด ๋ฐฐ์ ๋ ๊ฒฝ์ฐ ๊ฒฐ๊ณผ๊ฐ ์ด๋ฆ์ `key`๋ก, NumPy ๋ฐฐ์ด์ `value`๋ก ๋ฌถ์ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํ ์ ์์ต๋๋ค.
- __sample_weight__: ํน์ ํ ํ๋ จ ํ๋ณธ์ด ์์ค ํจ์์์ ๋ ํฐ ์ํฅ์ ์ฃผ๋๋ก ํ๊ณ ์ ํ ๊ฒฝ์ฐ์ ์ฌ์ฉํ๋ ์ธ์์
๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ํ๋ณธ๊ณผ ๋์ผํ ๊ธธ์ด์ 1D NumPy ๋ฐฐ์ด์, ์๊ณ์ด ๋ฐ์ดํฐ์ ๊ฒฝ์ฐ `(samples, sequence_length)`๋ก ์ด๋ฃจ์ด์ง 2D ๋ฐฐ์ด์ ์
๋ ฅํ์ฌ ์์ค ๊ฐ์ค์น์ ํ๋ณธ์ด 1:1๋ก ์ง์ง์ด์ง๊ฒ๋ ํฉ๋๋ค. 2D ๋ฐฐ์ด์ ์
๋ ฅํ๋ ๊ฒฝ์ฐ ๋ฐ๋์ `compile()` ๋จ๊ณ์์ `sample_weight_mode="temporal"`๋ก ์ง์ ํด์ผ ํฉ๋๋ค. `x`๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `Sequence`์ธ์คํด์ค์ธ ๊ฒฝ์ฐ๋ ์์ค ๊ฐ์ค์น๋ฅผ `sample_weight`์ธ์ ๋์ `x`์ ์ธ ๋ฒ์งธ ๊ตฌ์ฑ์์๋ก ์
๋ ฅํด์ผ ์ ์ฉ๋ฉ๋๋ค.
- __reset_metrics__: `True`์ธ ๊ฒฝ์ฐ ์ค์ง ํด๋น๋๋ ํ๋์ ๋ฐฐ์น๋ง์ ๊ณ ๋ คํ ํ๊ฐ ์งํ๊ฐ ์์ฑ๋ฉ๋๋ค. `False`์ธ ๊ฒฝ์ฐ ํ๊ฐ ์งํ๋ ์ดํ์ ์
๋ ฅ๋ ๋ค๋ฅธ ๋ฐฐ์น๋ค๊น์ง ๋์ ๋ฉ๋๋ค.
__๋ฐํ๊ฐ__
์ ์ฉํ ๋ชจ๋ธ์ด ๋จ์ผํ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ฉฐ ๋ณ๋์ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ์ง ์๋ ๊ฒฝ์ฐ ์ํ ์์ค์ ์ค์นผ๋ผ ๊ฐ์ ์์ฑํฉ๋๋ค. ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๋ชจ๋ธ์ด๊ฑฐ๋ ์ฌ๋ฌ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ค์นผ๋ผ ๊ฐ์ ๋ฆฌ์คํธ๋ฅผ ์์ฑํฉ๋๋ค. `model.metrics_names` ์์ฑ์ ๊ฐ ์ค์นผ๋ผ ๊ฒฐ๊ณผ๊ฐ์ ํ ๋น๋ ์ด๋ฆ์ ๋ณด์ฌ์ค๋๋ค.
----
### predict_on_batch
```python
predict_on_batch(x)
```
ํ๋์ ํ๋ณธ ๋ฐฐ์น์ ๋ํ ์์ธก๊ฐ์ ์์ฑํฉ๋๋ค.
__์ธ์__
- __x__: NumPy ๋ฐฐ์ด๋ก ์ด๋ฃจ์ด์ง ์
๋ ฅ ํ๋ณธ.
__๋ฐํ๊ฐ__
์์ธก๊ฐ์ NumPy ๋ฐฐ์ด.
----
### fit_generator
```python
fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)
```
ํ์ด์ฌ ์ ๋๋ ์ดํฐ ๋๋ `Sequence` ์ธ์คํด์ค์์ ์์ฑ๋ ๋ฐ์ดํฐ์ ๋ฐฐ์น๋ณ๋ก ๋ชจ๋ธ์ ํ์ต์ํต๋๋ค.
์ฐ์ฐ ํจ์จ์ ๋์ด๊ธฐ ์ํด ์ ๋๋ ์ดํฐ๋ ๋ชจ๋ธ๊ณผ ๋ณ๋ ฌ๋ก ์๋ํฉ๋๋ค. ์๋ฅผ ๋ค์ด GPU๋ก ๋ชจ๋ธ์ ํ์ต์ํค๋ ๋์์ CPU๋ก ์ค์๊ฐ ์ด๋ฏธ์ง ๋ฐ์ดํฐ ์ฆ๊ฐ<sub>data augmentation</sub>์ ์ฒ๋ฆฌํ ์ ์์ต๋๋ค. ๋ํ `keras.utils.Sequence`๋ฅผ ์ฌ์ฉํ ๊ฒฝ์ฐ ํ๋ณธ์ ์์๊ฐ ์ ์ง๋จ์ ๋ฌผ๋ก , `use_multiprocessing=True`์ธ ๊ฒฝ์ฐ์๋ ๋งค ์ํญ๋น ์
๋ ฅ์ด ํ ๋ฒ๋ง ๊ณ์ฐ๋๋๋ก ๋ณด์ฅ๋ฉ๋๋ค.
__์ธ์__
- __generator__: ํ์ด์ฌ ์ ๋๋ ์ดํฐ ํน์ `Sequence`(`keras.utils.Sequence`)๊ฐ์ฒด<sub>object</sub>์
๋๋ค. `Sequence` ๊ฐ์ฒด๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ๋ฉํฐํ๋ก์ธ์ฑ ์ ์ฉ์ ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ค๋ณต ์ฌ์ฉ๋๋ ๋ฌธ์ ๋ฅผ ํผํ ์ ์์ต๋๋ค. `generator`๋ ๋ฐ๋์ ๋ค์ ์ค ํ๋์ ๊ฐ์ ์์ฑํด์ผ ํฉ๋๋ค.
- `(inputs, targets)` ํํ
- `(inputs, targets, sample_weights)` ํํ.
๋งค๋ฒ ์ ๋๋ ์ดํฐ๊ฐ ์์ฑํ๋ ํํ์ ๊ฐ๋ค์ ๊ฐ๊ฐ ํ๋์ ๋ฐฐ์น๊ฐ ๋ฉ๋๋ค. ๋ฐ๋ผ์ ํํ์ ํฌํจ๋ ๋ฐฐ์ด๋ค์ ๋ชจ๋ ๊ฐ์ ๊ธธ์ด๋ฅผ ๊ฐ์ ธ์ผ ํฉ๋๋ค. ๋ฌผ๋ก ๋ฐฐ์น๊ฐ ๋ค๋ฅธ ๊ฒฝ์ฐ ๊ธธ์ด๋ ๋ฌ๋ผ์ง ์ ์์ต๋๋ค. ์์ปจ๋, ์ ์ฒด ํ๋ณธ ์๊ฐ ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋ฑ ๋๋์ด์ง์ง ์๋ ๊ฒฝ์ฐ ๋ง์ง๋ง ๋ฐฐ์น์ ๊ธธ์ด๋ ๋ค๋ฅธ ๋ฐฐ์น๋ค๋ณด๋ค ์งง์ ๊ฒ๊ณผ ๊ฐ์ต๋๋ค. ์ ๋๋ ์ดํฐ๋ ์ฃผ์ด์ง ๋ฐ์ดํฐ๋ฅผ ๋ฌดํ์ ๋ฐ๋ณต ์ถ์ถํ๋ ๊ฒ์ด์ด์ผ ํ๋ฉฐ, ๋ชจ๋ธ์ด `steps_per_epoch`๋ก ์ง์ ๋ ๊ฐ์์ ๋ฐฐ์น๋ฅผ ์ฒ๋ฆฌํ ๋ 1ํ์ ์ํญ์ด ๋๋๋ ๊ฒ์ผ๋ก ๊ฐ์ฃผ๋ฉ๋๋ค.
- __steps_per_epoch__: `int`. 1ํ ์ํญ์ ๋ง์น๊ณ ๋ค์ ์ํญ์ ์์ํ ๋๊น์ง `generator`๋ก๋ถํฐ ์์ฑํ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ์ง์ ํฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก `ceil(num_samples / batch_size)`, ์ฆ ์ ์ฒด ํ๋ณธ์ ์๋ฅผ ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋์ด ์ฌ๋ฆผํ ๊ฐ๊ณผ ๊ฐ์์ผ ํฉ๋๋ค. `Sequence` ์ธ์คํด์ค๋ฅผ ์
๋ ฅ์ผ๋ก ๋ฐ๋ ๊ฒฝ์ฐ์ ํํด `steps_per_epoch`๋ฅผ ์ง์ ํ์ง ์์ผ๋ฉด ์๋์ผ๋ก `len(generator)`๊ฐ์ ์ทจํฉ๋๋ค.
- __epochs__: `int`. ๋ชจ๋ธ์ ๋ฐ์ดํฐ ์ธํธ๋ฅผ ํ์ต์ํฌ ํ์์
๋๋ค. ํ ๋ฒ์ ์ํญ์ `steps_per_epoch`๋ก ์ ์๋ ๋ฐ์ดํฐ ์ ์ฒด๋ฅผ 1ํ ํ์ต์ํค๋ ๊ฒ์ ๋ปํฉ๋๋ค. ํ์์ ์ธ๋ฑ์ค๊ฐ `epochs`๋ก ์ฃผ์ด์ง ๊ฐ์ ๋๋ฌํ ๋๊น์ง ํ์ต์ด ๋ฐ๋ณต๋๋๋ก ๋์ด์๊ธฐ ๋๋ฌธ์ ๋ง์ฝ ์์ ํ์ฐจ๋ฅผ ์ง์ ํ๋ `initial_epoch` ์ธ์์ ๊ฐ์ด ์ฐ์ด๋ ๊ฒฝ์ฐ `epochs`๋ ์ ์ฒด ํ์๊ฐ ์๋ "๋ง์ง๋ง ํ์ฐจ"์ ์๋ฒ์ ๋ปํ๊ฒ ๋ฉ๋๋ค.
- __verbose__: `int`. `0`, `1`, ํน์ `2`. ํ์ต ์ค ์งํ ์ ๋ณด์ ํ๋ฉด ์ถ๋ ฅ ์ฌ๋ถ๋ฅผ ์ค์ ํ๋ ์ธ์์
๋๋ค. `0`์ ํ์ ์์, `1`์ ์งํ ํ์์ค ์ถ๋ ฅ, `2`๋ ์ํญ๋น ํ ์ค์ฉ ์ถ๋ ฅ์ ๋ปํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์
๋๋ค.
- __callbacks__: `keras.callbacks.Callback` ์ธ์คํด์ค์ ๋ฆฌ์คํธ. ํ์ต ๊ณผ์ ์์ ์ ์ฉํ ์ฝ๋ฐฑ์ ๋ฆฌ์คํธ์
๋๋ค. ์์ธํ ์ฌํญ์ [์ฝ๋ฐฑ](/callbacks)์ ์ฐธ์กฐํ์ญ์์ค.
- __validation_data__: ๋ค์ ์ค ํ๋์ ํํ๋ฅผ ์ทจํฉ๋๋ค.
- ์ ๋๋ ์ดํฐ ๋๋ ๊ฒ์ฆ ๋ฐ์ดํฐ์ฉ `Sequence` ๊ฐ์ฒด.
- `(x_val, y_val)` ํํ.
- `(x_val, y_val, val_sample_weights)` ํํ.
๋งคํ์ ํ์ต ์ํญ์ด ๋๋ ๋๋ง๋ค ์
๋ ฅ๋ ๋ฐ์ดํฐ๋ฅผ ์ฌ์ฉํ์ฌ ๊ฒ์ฆ ์์ค ๋ฐ ํ๊ฐ ์งํ๋ฅผ ๊ตฌํฉ๋๋ค. ํ์ต์๋ ์ฌ์ฉ๋์ง ์์ต๋๋ค.
- __validation_steps__: `validation_data`๊ฐ ์ ๋๋ ์ดํฐ ํ์์ ๊ฐ์ ์
๋ ฅํ ๊ฒฝ์ฐ์๋ง ์ ํจํ ์ธ์์
๋๋ค. ๋งค ์ํญ์ ๊ฒ์ฆ์ ์ฌ์ฉํ๊ธฐ ์ํด ์ ๋๋ ์ดํฐ๋ก๋ถํฐ ์์ฑํ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ํน์ ํฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ๊ฒ์ฆ ์ธํธ์ ํ๋ณธ ๊ฐ์๋ฅผ ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋์ด ์ฌ๋ฆผํ ๊ฐ๊ณผ ๊ฐ์์ผ ํฉ๋๋ค. `Sequence` ์ธ์คํด์ค๋ฅผ ์
๋ ฅ์ผ๋ก ๋ฐ๋ ๊ฒฝ์ฐ์ ํํด `steps_per_epoch`๋ฅผ ์ง์ ํ์ง ์์ผ๋ฉด ์๋์ผ๋ก `len(validation_data)`๊ฐ์ ์ทจํฉ๋๋ค.
- __validation_freq__: ๊ฒ์ฆ ๋ฐ์ดํฐ๊ฐ ์์ ๊ฒฝ์ฐ์ ํํด ์ ํจํ ์ธ์์
๋๋ค. ์ ์ ๋๋ ๋ฆฌ์คํธ/ํํ/์ธํธ ํ์
์ ์
๋ ฅ์ ๋ฐ์ต๋๋ค. ์ ์ ์
๋ ฅ์ ๊ฒฝ์ฐ ๋ช ํ์ ์ํญ๋ง๋ค 1ํ ๊ฒ์ฆํ ์ง๋ฅผ ์ ํฉ๋๋ค. ์์ปจ๋ `validation_freq=2`์ ๊ฒฝ์ฐ ๋งค 2ํ ์ํญ๋ง๋ค 1ํ ๊ฒ์ฆํฉ๋๋ค. ๋ง์ฝ ๋ฆฌ์คํธ๋ ํํ, ์ธํธ ํํ๋ก ์
๋ ฅ๋ ๊ฒฝ์ฐ ์
๋ ฅ๊ฐ ์์ ์ง์ ๋ ํ์ฐจ์ ํํด ๊ฒ์ฆ์ ์คํํฉ๋๋ค. ์๋ฅผ ๋ค์ด `validation_freq=[1, 2, 10]`์ ๊ฒฝ์ฐ 1, 2, 10๋ฒ์งธ ์ํญ์์ ๊ฒ์ฆํฉ๋๋ค.
- __class_weight__: ํ์ํ ๊ฒฝ์ฐ์๋ง ์ฌ์ฉํ๋ ์ธ์๋ก, ๊ฐ ํด๋์ค ์ธ๋ฑ์ค๋ฅผ `key`๋ก, ๊ฐ์ค์น๋ฅผ `value`๋ก ๊ฐ๋ ๋์
๋๋ฆฌ๋ฅผ ์
๋ ฅํฉ๋๋ค. ์ธ๋ฑ์ค๋ ์ ์, ๊ฐ์ค์น๋ ๋ถ๋์์์ ๊ฐ์ ๊ฐ์ต๋๋ค. ์ฃผ๋ก ํ๋ จ ๋ฐ์ดํฐ์ ํด๋์ค ๋ถํฌ๊ฐ ๋ถ๊ท ํํ ๋ ์ด๋ฅผ ์ํํ๊ธฐ ์ํด ์ฌ์ฉํ๋ฉฐ, ์์ค ๊ณ์ฐ ๊ณผ์ ์์ ํ๋ณธ ์๊ฐ ๋ ์ ์ ํด๋์ค์ ์ํฅ์ ๊ฐ์ค์น๋ฅผ ํตํด ๋์ด์ฌ๋ฆฝ๋๋ค. ํ์ต ๊ณผ์ ์์๋ง ์ฌ์ฉ๋ฉ๋๋ค.
- __max_queue_size__: `int`. ์ ๋๋ ์ดํฐ ๋๊ธฐ์ด์ ์ต๋ ํฌ๊ธฐ๋ฅผ ์ง์ ํ๋ฉฐ, ๋ฏธ์ ์ธ ๊ฒฝ์ฐ ๊ธฐ๋ณธ๊ฐ `10`์ด ์ ์ฉ๋ฉ๋๋ค.
- __workers__: `int`. ํ๋ก์ธ์ค ๊ธฐ๋ฐ์ผ๋ก ๋ค์ค ์ค๋ ๋ฉ์ ํ ๋ ์ ๋๋ ์ดํฐ ์๋์ ์ฌ์ฉํ ํ๋ก์ธ์ค์ ์ต๋ ๊ฐ์๋ฅผ ์ค์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์ด๋ฉฐ, `0`์ ์
๋ ฅํ ๊ฒฝ์ฐ ๋ฉ์ธ ์ค๋ ๋์์ ์ ๋๋ ์ดํฐ๋ฅผ ์๋์ํต๋๋ค.
- __use_multiprocessing__: `bool`. `True`์ธ ๊ฒฝ์ฐ ํ๋ก์ธ์ค ๊ธฐ๋ฐ ๋ค์ค ์ค๋ ๋ฉ์ ์ฌ์ฉํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `False`์
๋๋ค. ์ด ์ค์ ์ ์ฌ์ฉํ ๊ฒฝ์ฐ ์ ๋๋ ์ดํฐ์์ ๊ฐ์ฒด ์ง๋ ฌํ๊ฐ ๋ถ๊ฐ๋ฅํ ์ธ์๋ค์ ์ฌ์ฉํ์ง ์๋๋ก ํฉ๋๋ค (๋ฉํฐํ๋ก์ธ์ฑ ๊ณผ์ ์์ ์์ ํ๋ก์ธ์ค๋ก ์ ๋ฌ๋์ง ์๊ธฐ ๋๋ฌธ์
๋๋ค).
- __shuffle__: `bool`. `Sequence`์ธ์คํด์ค ์
๋ ฅ์ ๋ฐ์ ๋์๋ง ์ฌ์ฉ๋ฉ๋๋ค. ๊ฐ ์ํญ์ ์์ํ๊ธฐ ์ ์ ํ๋ จ ๋ฐ์ดํฐ๋ฅผ ๋ค์์์ง๋ฅผ ๊ฒฐ์ ํฉ๋๋ค. `steps_per_epoch`๊ฐ์ด `None`์ด ์๋ ๊ฒฝ์ฐ `shuffle`์ธ์๋ ๋ฌดํจํ๋ฉ๋๋ค.
- __initial_epoch__: `int`. ํน์ ํ ์ํญ์์ ํ์ต์ ์์ํ๋๋ก ์์ ํ์ฐจ๋ฅผ ์ง์ ํฉ๋๋ค. ์ด์ ์ ํ์ต์ ์ด์ด์ ํ ๋ ์ ์ฉํฉ๋๋ค.
__๋ฐํ๊ฐ__
`History` ๊ฐ์ฒด๋ฅผ ๋ฐํํฉ๋๋ค. `History.history` ์์ฑ์ ๊ฐ ์ํญ๋ง๋ค ๊ณ์ฐ๋ ํ์ต ์์ค ๋ฐ ํ๊ฐ ์งํ๊ฐ ์์๋๋ก ๊ธฐ๋ก๋ ๊ฐ์
๋๋ค. ๊ฒ์ฆ ๋ฐ์ดํฐ๋ฅผ ์ ์ฉํ ๊ฒฝ์ฐ ํด๋น ์์ค ๋ฐ ์งํ๋ ํจ๊ป ๊ธฐ๋ก๋ฉ๋๋ค.
__์ค๋ฅ__
- __ValueError__: ์ ๋๋ ์ดํฐ๊ฐ ์ ํจํ์ง ์์ ํ์์ ๋ฐ์ดํฐ๋ฅผ ๋ง๋ค์ด ๋ด๋ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
__์์__
```python
def generate_arrays_from_file(path):
while True:
with open(path) as f:
for line in f:
# ํ์ผ์ ๊ฐ ๋ผ์ธ์ผ๋ก๋ถํฐ
# ์
๋ ฅ ๋ฐ์ดํฐ์ ๋ ์ด๋ธ์ NumPy ๋ฐฐ์ด์ ๋ง๋ญ๋๋ค
x1, x2, y = process_line(line)
yield ({'input_1': x1, 'input_2': x2}, {'output': y})
model.fit_generator(generate_arrays_from_file('/my_file.txt'),
steps_per_epoch=10000, epochs=10)
```
----
### evaluate_generator
```python
evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
```
์ ๋๋ ์ดํฐ์์ ์์ฑํ ๋ฐ์ดํฐ๋ฅผ ์ด์ฉํ์ฌ ๋ชจ๋ธ์ ํ๊ฐํฉ๋๋ค. ์ ๋๋ ์ดํฐ๋ `test_on_batch`๊ฐ ์๊ตฌํ๋ ๊ฒ๊ณผ ๋์ผํ ์ข
๋ฅ์ ๋ฐ์ดํฐ๋ฅผ ์์ฑํ๋ ๊ฒ์ด์ด์ผ ํฉ๋๋ค.
__์ธ์__
- __generator__:
`(inputs, targets)` ํํ ๋๋ `(inputs, targets, sample_weights)` ํํ์ ์์ฑํ๋ ํ์ด์ฌ ์ ๋๋ ์ดํฐ ํน์ `Sequence`(`keras.utils.Sequence`)์ธ์คํด์ค์
๋๋ค. `Sequence`์ธ์คํด์ค๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ๋ฉํฐํ๋ก์ธ์ฑ ์ ์ฉ์ ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ค๋ณต ์ฌ์ฉ๋๋ ๋ฌธ์ ๋ฅผ ํผํ ์ ์์ต๋๋ค.
- __steps__: ํ๊ฐ๋ฅผ ๋ง์น๊ธฐ ์ ๊น์ง `generator`๋ก๋ถํฐ ์์ฑํ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ์ง์ ํฉ๋๋ค. `Sequence` ์ธ์คํด์ค๋ฅผ ์
๋ ฅ์ผ๋ก ๋ฐ๋ ๊ฒฝ์ฐ์ ํํด `steps`๋ฅผ ์ง์ ํ์ง ์์ผ๋ฉด ์๋์ผ๋ก `len(generator)`๊ฐ์ ์ทจํฉ๋๋ค.
- __callbacks__: `keras.callbacks.Callback` ์ธ์คํด์ค์ ๋ฆฌ์คํธ. ํ๊ฐ ๊ณผ์ ์์ ์ ์ฉํ ์ฝ๋ฐฑ์ ๋ฆฌ์คํธ์
๋๋ค. [์ฝ๋ฐฑ](/callbacks)์ ์ฐธ์กฐํ์ญ์์ค.
- __max_queue_size__: `int`. ์ ๋๋ ์ดํฐ ๋๊ธฐ์ด์ ์ต๋ ํฌ๊ธฐ๋ฅผ ์ง์ ํ๋ฉฐ, ๋ฏธ์ ์ธ ๊ฒฝ์ฐ ๊ธฐ๋ณธ๊ฐ `10`์ด ์ ์ฉ๋ฉ๋๋ค.
- __workers__: `int`. ํ๋ก์ธ์ค ๊ธฐ๋ฐ์ผ๋ก ๋ค์ค ์ค๋ ๋ฉ์ ํ ๋ ์ ๋๋ ์ดํฐ ์๋์ ์ฌ์ฉํ ํ๋ก์ธ์ค์ ์ต๋ ๊ฐ์๋ฅผ ์ค์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์ด๋ฉฐ, `0`์ ์
๋ ฅํ ๊ฒฝ์ฐ ๋ฉ์ธ ์ค๋ ๋์์ ์ ๋๋ ์ดํฐ๋ฅผ ์๋์ํต๋๋ค.
- __use_multiprocessing__: `bool`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. `True`์ธ ๊ฒฝ์ฐ ํ๋ก์ธ์ค ๊ธฐ๋ฐ ๋ค์ค ์ค๋ ๋ฉ์ ์ฌ์ฉํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `False`์
๋๋ค. ์ด ์ค์ ์ ์ฌ์ฉํ ๊ฒฝ์ฐ ์ ๋๋ ์ดํฐ์์ ๊ฐ์ฒด ์ง๋ ฌํ๊ฐ ๋ถ๊ฐ๋ฅํ ์ธ์๋ค์ ์ฌ์ฉํ์ง ์๋๋ก ํฉ๋๋ค (๋ฉํฐํ๋ก์ธ์ฑ ๊ณผ์ ์์ ์์ ํ๋ก์ธ์ค๋ก ์ ๋ฌ๋์ง ์๊ธฐ ๋๋ฌธ์
๋๋ค).
- __verbose__: `0` ๋๋ `1`. ์งํ ์ ๋ณด์ ํ๋ฉด ์ถ๋ ฅ ์ฌ๋ถ๋ฅผ ์ค์ ํ๋ ์ธ์์
๋๋ค. `0`์ ํ์ ์์, `1`์ ์งํ ํ์์ค ์ถ๋ ฅ์ ๋ปํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์
๋๋ค.
__๋ฐํ๊ฐ__
์ ์ฉํ ๋ชจ๋ธ์ด ๋จ์ผํ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ฉฐ ๋ณ๋์ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ์ง ์๋ ๊ฒฝ์ฐ ์ํ ์์ค์ ์ค์นผ๋ผ ๊ฐ์ ์์ฑํฉ๋๋ค. ๋ค์ค์ ๊ฒฐ๊ณผ๋ฅผ ์ถ๋ ฅํ๋ ๋ชจ๋ธ์ด๊ฑฐ๋ ์ฌ๋ฌ ํ๊ฐ ์งํ๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ค์นผ๋ผ ๊ฐ์ ๋ฆฌ์คํธ๋ฅผ ์์ฑํฉ๋๋ค. `model.metrics_names` ์์ฑ์ ๊ฐ ์ค์นผ๋ผ ๊ฒฐ๊ณผ๊ฐ์ ํ ๋น๋ ์ด๋ฆ์ ๋ณด์ฌ์ค๋๋ค.
__์ค๋ฅ__
- __ValueError__: ์ ๋๋ ์ดํฐ๊ฐ ์ ํจํ์ง ์์ ํ์์ ๋ฐ์ดํฐ๋ฅผ ๋ง๋ค์ด ๋ด๋ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
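A hedged sketch of evaluating with a `keras.utils.Sequence` (model, data, and batch size are illustrative); because a `Sequence` is passed, `steps` can be omitted and defaults to `len(sequence)`:

```python
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense

class TestSequence(keras.utils.Sequence):
    """Serves (inputs, targets) batches from in-memory NumPy arrays."""
    def __init__(self, x, y, batch_size=32):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        s = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[s], self.y[s]

model = Sequential([Dense(64, activation='relu', input_shape=(32,)),
                    Dense(10, activation='softmax')])
model.compile('rmsprop', 'categorical_crossentropy', metrics=['accuracy'])

x_test = np.random.random((200, 32))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(200,)), 10)

# steps is omitted, so it defaults to len(TestSequence(...)).
loss, acc = model.evaluate_generator(TestSequence(x_test, y_test), verbose=0)
```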
----
### predict_generator
```python
predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
```
์ ๋๋ ์ดํฐ์์ ์์ฑํ ํ๋ณธ ๋ฐ์ดํฐ์ ๋ํ ์์ธก๊ฐ์ ์์ฑํฉ๋๋ค. ์ ๋๋ ์ดํฐ๋ `predict_on_batch`๊ฐ ์๊ตฌํ๋ ๊ฒ๊ณผ ๋์ผํ ์ข
๋ฅ์ ๋ฐ์ดํฐ๋ฅผ ์์ฑํ๋ ๊ฒ์ด์ด์ผ ํฉ๋๋ค.
__์ธ์__
- __generator__: ํ๋ณธ ๋ฐ์ดํฐ์ ๋ฐฐ์น๋ฅผ ์์ฑํ๋ ์ ๋๋ ์ดํฐ ํน์ `Sequence`(`keras.utils.Sequence`)์ธ์คํด์ค์
๋๋ค. `Sequence`์ธ์คํด์ค๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ๋ฉํฐํ๋ก์ธ์ฑ ์ ์ฉ์ ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ค๋ณต ์ฌ์ฉ๋๋ ๋ฌธ์ ๋ฅผ ํผํ ์ ์์ต๋๋ค.
- __steps__: ์์ธก์ ๋ง์น๊ธฐ ์ ๊น์ง `generator`๋ก๋ถํฐ ์์ฑํ ๋ฐฐ์น์ ๊ฐ์๋ฅผ ์ง์ ํฉ๋๋ค. `Sequence` ์ธ์คํด์ค๋ฅผ ์
๋ ฅ์ผ๋ก ๋ฐ๋ ๊ฒฝ์ฐ์ ํํด `steps`๋ฅผ ์ง์ ํ์ง ์์ผ๋ฉด ์๋์ผ๋ก `len(generator)`๊ฐ์ ์ทจํฉ๋๋ค.
- __callbacks__: `keras.callbacks.Callback` ์ธ์คํด์ค์ ๋ฆฌ์คํธ. ์์ธก ๊ณผ์ ์์ ์ ์ฉํ ์ฝ๋ฐฑ์ ๋ฆฌ์คํธ์
๋๋ค. [์ฝ๋ฐฑ](/callbacks)์ ์ฐธ์กฐํ์ญ์์ค.
- __max_queue_size__: `int`. ์ ๋๋ ์ดํฐ ๋๊ธฐ์ด์ ์ต๋ ํฌ๊ธฐ๋ฅผ ์ง์ ํ๋ฉฐ, ๋ฏธ์ ์ธ ๊ฒฝ์ฐ ๊ธฐ๋ณธ๊ฐ `10`์ด ์ ์ฉ๋ฉ๋๋ค.
- __workers__: `int`. ํ๋ก์ธ์ค ๊ธฐ๋ฐ์ผ๋ก ๋ค์ค ์ค๋ ๋ฉ์ ํ ๋ ์ ๋๋ ์ดํฐ ์๋์ ์ฌ์ฉํ ํ๋ก์ธ์ค์ ์ต๋ ๊ฐ์๋ฅผ ์ค์ ํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์ด๋ฉฐ, `0`์ ์
๋ ฅํ ๊ฒฝ์ฐ ๋ฉ์ธ ์ค๋ ๋์์ ์ ๋๋ ์ดํฐ๋ฅผ ์๋์ํต๋๋ค.
- __use_multiprocessing__: `bool`. ์
๋ ฅ ๋ฐ์ดํฐ๊ฐ ์ ๋๋ ์ดํฐ ๋๋ `keras.utils.Sequence` ์ธ์คํด์ค์ผ ๋๋ง ์ ํจํฉ๋๋ค. `True`์ธ ๊ฒฝ์ฐ ํ๋ก์ธ์ค ๊ธฐ๋ฐ ๋ค์ค ์ค๋ ๋ฉ์ ์ฌ์ฉํ๋ฉฐ ๊ธฐ๋ณธ๊ฐ์ `False`์
๋๋ค. ์ด ์ค์ ์ ์ฌ์ฉํ ๊ฒฝ์ฐ ์ ๋๋ ์ดํฐ์์ ๊ฐ์ฒด ์ง๋ ฌํ๊ฐ ๋ถ๊ฐ๋ฅํ ์ธ์๋ค์ ์ฌ์ฉํ์ง ์๋๋ก ํฉ๋๋ค (๋ฉํฐํ๋ก์ธ์ฑ ๊ณผ์ ์์ ์์ ํ๋ก์ธ์ค๋ก ์ ๋ฌ๋์ง ์๊ธฐ ๋๋ฌธ์
๋๋ค).
- __verbose__: `0` ๋๋ `1`. ์งํ ์ ๋ณด์ ํ๋ฉด ์ถ๋ ฅ ์ฌ๋ถ๋ฅผ ์ค์ ํ๋ ์ธ์์
๋๋ค. `0`์ ํ์ ์์, `1`์ ์งํ ํ์์ค ์ถ๋ ฅ์ ๋ปํฉ๋๋ค. ๊ธฐ๋ณธ๊ฐ์ `1`์
๋๋ค.
__๋ฐํ๊ฐ__
์์ธก ๊ฐ์ NumPy ๋ฐฐ์ด.
__์ค๋ฅ__
- __ValueError__: ์ ๋๋ ์ดํฐ๊ฐ ์ ํจํ์ง ์์ ํ์์ ๋ฐ์ดํฐ๋ฅผ ๋ง๋ค์ด ๋ด๋ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
----
### get_layer
```python
get_layer(name=None, index=None)
```
์ธต<sub>layer</sub>์ (๊ณ ์ ํ) ์ด๋ฆ, ํน์ ์ธ๋ฑ์ค๋ฅผ ๋ฐํ์ผ๋ก ํด๋น ์ธต์ ๊ฐ์ ธ์ต๋๋ค. `name`๊ณผ `index`๊ฐ ๋ชจ๋ ์ ๊ณต๋๋ ๊ฒฝ์ฐ, `index`๊ฐ ์ฐ์ ์์๋ฅผ ๊ฐ์ต๋๋ค.
์ธ๋ฑ์ค๋ (์ํฅ์) ๋๋น ์ฐ์ ๊ทธ๋ํ ํ์<sub>(bottom-up) horizontal graph traversal</sub> ์์๋ฅผ ๋ฐ๋ฆ
๋๋ค.
__์ธ์__
- __name__: `str`. ์ธต์ ์ด๋ฆ์
๋๋ค.
- __index__: `int`. ์ธต์ ์ธ๋ฑ์ค์
๋๋ค.
__๋ฐํ๊ฐ__
์ธต ์ธ์คํด์ค.
__์ค๋ฅ__
- __ValueError__: ์ธต์ ์ด๋ฆ์ด๋ ์ธ๋ฑ์ค๊ฐ ์ ํจํ์ง ์์ ๊ฒฝ์ฐ ๋ฐ์ํฉ๋๋ค.
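A small lookup sketch (layer names are illustrative):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, input_shape=(32,), name='hidden'))
model.add(Dense(10, name='out'))

hidden = model.get_layer(name='hidden')   # lookup by unique layer name
first = model.get_layer(index=0)          # lookup by traversal index
assert hidden is first
```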
| keras-docs-ko/sources/models/model.md/0 | {
"file_path": "keras-docs-ko/sources/models/model.md",
"repo_id": "keras-docs-ko",
"token_count": 31237
} | 81 |
# ๆฌ็คบไพๆผ็คบไบๅฆไฝไธบ Keras ็ผๅ่ชๅฎไน็ฝ็ปๅฑใ
ๆไปฌๆๅปบไบไธไธช็งฐไธบ 'Antirectifier' ็่ชๅฎไนๆฟๆดปๅฑ๏ผ่ฏฅๅฑๅฏไปฅไฟฎๆน้่ฟๅฎ็ๅผ ้็ๅฝข็ถใ
ๆไปฌ้่ฆๆๅฎไธคไธชๆนๆณ: `compute_output_shape` ๅ `call`ใ
ๆณจๆ๏ผ็ธๅ็็ปๆไนๅฏไปฅ้่ฟ Lambda ๅฑๅๅพใ
ๆไปฌ็่ชๅฎไนๅฑๆฏไฝฟ็จ Keras ๅ็ซฏ (`K`) ไธญ็ๅบๅ
็ผๅ็๏ผๅ ่ไปฃ็ ๅฏไปฅๅจ TensorFlow ๅ Theano ไธ่ฟ่กใ
```python
from __future__ import print_function
import keras
from keras.models import Sequential
from keras import layers
from keras.datasets import mnist
from keras import backend as K
class Antirectifier(layers.Layer):
'''่ฟๆฏๆ ทๆฌ็บง็ L2 ๆ ๅๅไธ่พๅ
ฅ็ๆญฃ่ด้จๅไธฒ่็็ปๅใ
็ปๆๆฏไธคๅไบ่พๅ
ฅๆ ทๆฌ็ๆ ทๆฌๅผ ้ใ
ๅฎๅฏไปฅ็จไบๆฟไปฃ ReLUใ
# ่พๅ
ฅๅฐบๅฏธ
2D ๅผ ้๏ผๅฐบๅฏธไธบ (samples, n)
# ่พๅบๅฐบๅฏธ
2D ๅผ ้๏ผๅฐบๅฏธไธบ (samples, 2*n)
# ็่ฎบไพๆฎ
ๅจๅบ็จ ReLU ๆถ๏ผๅ่ฎพๅ
ๅ่พๅบ็ๅๅธๆฅ่ฟไบ 0 ็ไธญๅฟ๏ผ
้ฃไนๅฐไธขๅผไธๅ็่พๅ
ฅใ่ฟๆฏ้ๅธธไฝๆ็ใ
Antirectifier ๅ
่ฎธๅ ReLU ไธๆ ท่ฟๅๅ
จๆญฃ่พๅบ๏ผ่ไธไผไธขๅผไปปไฝๆฐๆฎใ
ๅจ MNIST ไธ่ฟ่ก็ๆต่ฏ่กจๆ๏ผAntirectifier ๅฏไปฅ่ฎญ็ปๅๆฐๅฐไธคๅไฝๅ
ท
ๆไธๅบไบ ReLU ็็ญๆ็ฝ็ป็ธๅฝ็ๅ็ฑป็ฒพๅบฆ็็ฝ็ปใ
'''
def compute_output_shape(self, input_shape):
shape = list(input_shape)
assert len(shape) == 2 # ไป
ๅฏน 2D ๅผ ้ๆๆ
shape[-1] *= 2
return tuple(shape)
def call(self, inputs):
inputs -= K.mean(inputs, axis=1, keepdims=True)
inputs = K.l2_normalize(inputs, axis=1)
pos = K.relu(inputs)
neg = K.relu(-inputs)
return K.concatenate([pos, neg], axis=1)
# ๅ
จๅฑๅๆฐ
batch_size = 128
num_classes = 10
epochs = 40
# ๅๅไธบ่ฎญ็ปๅๆต่ฏ็ๆฐๆฎ
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# ๅฐ็ฑปๅ้่ฝฌๅไธบไบ่ฟๅถ็ฑป็ฉ้ต
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# ๆๅปบๆจกๅ
model = Sequential()
model.add(layers.Dense(256, input_shape=(784,)))
model.add(Antirectifier())
model.add(layers.Dropout(0.1))
model.add(layers.Dense(256))
model.add(Antirectifier())
model.add(layers.Dropout(0.1))
model.add(layers.Dense(num_classes))
model.add(layers.Activation('softmax'))
# ็ผ่ฏๆจกๅ
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# ่ฎญ็ปๆจกๅ
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
# ๆฅไธๆฅ๏ผไธๅ
ทๆ 2 ๅๅคง็ๅฏ้ๅฑ
# ๅ ReLU ็็ญๆ็ฝ็ป่ฟ่กๆฏ่พ
```
| keras-docs-zh/sources/examples/antirectifier.md/0 | {
"file_path": "keras-docs-zh/sources/examples/antirectifier.md",
"repo_id": "keras-docs-zh",
"token_count": 1748
} | 82 |
# Keras ๅบๅๅฐๅบๅๆจกๅ็คบไพ๏ผๅญ็ฌฆ็บง๏ผใ
่ฏฅ่ๆฌๆผ็คบไบๅฆไฝๅฎ็ฐๅบๆฌ็ๅญ็ฌฆ็บงๅบๅๅฐๅบๅๆจกๅใ
ๆไปฌๅฐๅ
ถ็จไบๅฐ่ฑๆ็ญๅฅ้ไธชๅญ็ฌฆ็ฟป่ฏๆๆณ่ฏญ็ญๅฅใ
่ฏทๆณจๆ๏ผ่ฟ่กๅญ็ฌฆ็บงๆบๅจ็ฟป่ฏๆฏ้ๅธธไธๅฏปๅธธ็๏ผๅ ไธบๅจๆญค้ขๅไธญ่ฏ็บงๆจกๅๆดไธบๅธธ่งใ
**็ฎๆณๆป็ป**
- ๆไปฌไปไธไธช้ขๅ็่พๅ
ฅๅบๅ๏ผไพๅฆ่ฑ่ฏญๅฅๅญ๏ผๅๅฆไธไธช้ขๅ็ๅฏนๅบ็ฎๆ ๅบๅ๏ผไพๅฆๆณ่ฏญๅฅๅญ๏ผๅผๅง๏ผ
- ็ผ็ ๅจ LSTM ๅฐ่พๅ
ฅๅบๅๅๆขไธบ 2 ไธช็ถๆๅ้๏ผๆไปฌไฟ็ๆๅ็ LSTM ็ถๆๅนถไธขๅผ่พๅบ๏ผ๏ผ
- ๅฏน่งฃ็ ๅจ LSTM ่ฟ่ก่ฎญ็ป๏ผไปฅๅฐ็ฎๆ ๅบๅ่ฝฌๆขไธบ็ธๅๅบๅ๏ผไฝไปฅๅๅฐๅ็งปไธไธชๆถ้ดๆญฅ๏ผๅจ่ฟ็งๆ
ๅตไธ๏ผ่ฏฅ่ฎญ็ป่ฟ็จ็งฐไธบ "ๆๅธๅผบๅถ"ใ
ๅฎไฝฟ็จ็ผ็ ๅจ็่พๅบใๅฎ้
ไธ๏ผ่งฃ็ ๅจไผๆ นๆฎ่พๅ
ฅๅบๅ๏ผๆ นๆฎ็ปๅฎ็ `targets[...t]` ๆฅๅญฆไน ็ๆ `target[t+1...]`ใ
- ๅจๆจ็ๆจกๅผไธ๏ผๅฝๆไปฌๆณ่งฃ็ ๆช็ฅ็่พๅ
ฅๅบๅๆถ๏ผๆไปฌ๏ผ
- ๅฏน่พๅ
ฅๅบๅ่ฟ่ก็ผ็ ๏ผ
- ไปๅคงๅฐไธบ1็็ฎๆ ๅบๅๅผๅง๏ผไป
ๆฏๅบๅๅผๅงๅญ็ฌฆ๏ผ๏ผ
- ๅฐ่พๅ
ฅๅบๅๅ 1 ไธชๅญ็ฌฆ็็ฎๆ ๅบๅ้ฆ้ๅฐ่งฃ็ ๅจ๏ผไปฅ็ๆไธไธไธชๅญ็ฌฆ็้ขๆต๏ผ
- ไฝฟ็จ่ฟไบ้ขๆตๆฅ้ๆ ทไธไธไธชๅญ็ฌฆ๏ผๆไปฌไป
ไฝฟ็จ argmax๏ผ;
- ๅฐ้ๆ ท็ๅญ็ฌฆ้ๅ ๅฐ็ฎๆ ๅบๅ๏ผ
- ้ๅค็ดๅฐๆไปฌ่พพๅฐๅญ็ฌฆๆฐ้ๅถใ
**ๆฐๆฎไธ่ฝฝ**
[English to French sentence pairs.
](http://www.manythings.org/anki/fra-eng.zip)
[Lots of neat sentence pairs datasets.
](http://www.manythings.org/anki/)
**ๅ่**
- [Sequence to Sequence Learning with Neural Networks
](https://arxiv.org/abs/1409.3215)
- [Learning Phrase Representations using
RNN Encoder-Decoder for Statistical Machine Translation
](https://arxiv.org/abs/1406.1078)
```python
from __future__ import print_function
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np
batch_size = 64 # ่ฎญ็ปๆนๆฌกๅคงๅฐใ
epochs = 100 # ่ฎญ็ป่ฟญไปฃ่ฝฎๆฌกใ
latent_dim = 256 # ็ผ็ ็ฉบ้ด้ๅฑ็ปดๅบฆใ
num_samples = 10000 # ่ฎญ็ปๆ ทๆฌๆฐใ
# ็ฃ็ๆฐๆฎๆไปถ่ทฏๅพใ
data_path = 'fra-eng/fra.txt'
# ๅ้ๅๆฐๆฎใ
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
input_text, target_text = line.split('\t')
# ๆไปฌไฝฟ็จ "tab" ไฝไธบ "่ตทๅงๅบๅ" ๅญ็ฌฆ๏ผ
# ๅฏนไบ็ฎๆ ๏ผไฝฟ็จ "\n" ไฝไธบ "็ปๆญขๅบๅ" ๅญ็ฌฆใ
target_text = '\t' + target_text + '\n'
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)
input_token_index = dict(
[(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
[(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.
encoder_input_data[i, t + 1:, input_token_index[' ']] = 1.
for t, char in enumerate(target_text):
# decoder_target_data ้ขๅ
decoder_input_data by ไธไธชๆถ้ดๆญฅใ
decoder_input_data[i, t, target_token_index[char]] = 1.
if t > 0:
# decoder_target_data ๅฐๆๅไธไธชๆถ้ดๆญฅ๏ผๅนถไธๅฐไธๅ
ๅซๅผๅงๅญ็ฌฆใ
decoder_target_data[i, t - 1, target_token_index[char]] = 1.
decoder_input_data[i, t + 1:, target_token_index[' ']] = 1.
decoder_target_data[i, t:, target_token_index[' ']] = 1.
# ๅฎไน่พๅ
ฅๅบๅๅนถๅค็ๅฎใ
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# ๆไปฌๆๅผ `encoder_outputs`๏ผๅชไฟ็็ถๆใ
encoder_states = [state_h, state_c]
# ไฝฟ็จ `encoder_states` ไฝไธบๅๅง็ถๆๆฅ่ฎพ็ฝฎ่งฃ็ ๅจใ
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# ๆไปฌๅฐ่งฃ็ ๅจ่ฎพ็ฝฎไธบ่ฟๅๅฎๆด็่พๅบๅบๅ๏ผๅนถ่ฟๅๅ
้จ็ถๆใ
# ๆไปฌไธๅจ่ฎญ็ปๆจกๅไธญไฝฟ็จ่ฟๅ็ถๆ๏ผไฝๅฐๅจๆจ็ไธญไฝฟ็จๅฎไปฌใ
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# ๅฎไนๆจกๅ๏ผๅฐ `encoder_input_data` & `decoder_input_data` ่ฝฌๆขไธบ `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# ๆง่ก่ฎญ็ป
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2)
# ไฟๅญๆจกๅ
model.save('s2s.h5')
# ๆฅไธๆฅ: ๆจ็ๆจกๅผ (้ๆ ท)ใ
# ่ฟๆฏๆผไน ๏ผ
# 1) ็ผ็ ่พๅ
ฅๅนถๆฃ็ดขๅๅง่งฃ็ ๅจ็ถๆ
# 2) ไปฅ่ฏฅๅๅง็ถๆๅ "ๅบๅๅผๅง" token ไธบ็ฎๆ ่ฟ่ก่งฃ็ ๅจ็ไธๆญฅใ ่พๅบๅฐๆฏไธไธไธช็ฎๆ tokenใ
# 3) ้ๅคๅฝๅ็ฎๆ token ๅๅฝๅ็ถๆ
# ๅฎไน้ๆ ทๆจกๅ
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
# ๅๅๆฅ่ฏข token ็ดขๅผๅฏๅฐๅบๅ่งฃ็ ๅๅฏ่ฏป็ๅ
ๅฎนใ
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# ๅฐ่พๅ
ฅ็ผ็ ไธบ็ถๆๅ้ใ
states_value = encoder_model.predict(input_seq)
# ็ๆ้ฟๅบฆไธบ 1 ็็ฉบ็ฎๆ ๅบๅใ
target_seq = np.zeros((1, 1, num_decoder_tokens))
# ็จ่ตทๅงๅญ็ฌฆๅกซๅ
็ฎๆ ๅบๅ็็ฌฌไธไธชๅญ็ฌฆใ
target_seq[0, 0, target_token_index['\t']] = 1.
# ไธๆนๅบๅ็้ๆ ทๅพช็ฏ
# (ไธบไบ็ฎๅ๏ผ่ฟ้ๆไปฌๅ่ฎพไธๆนๅคงๅฐไธบ 1)ใ
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, h, c = decoder_model.predict(
[target_seq] + states_value)
# ้ๆ ทไธไธช token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# ้ๅบๆกไปถ๏ผ่พพๅฐๆๅคง้ฟๅบฆๆๆพๅฐๅๆญข็ฌฆใ
if (sampled_char == '\n' or
len(decoded_sentence) > max_decoder_seq_length):
stop_condition = True
# ๆดๆฐ็ฎๆ ๅบๅ๏ผ้ฟๅบฆไธบ 1๏ผใ
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.
# ๆดๆฐ็ถๆ
states_value = [h, c]
return decoded_sentence
for seq_index in range(100):
# ๆฝๅไธไธชๅบๅ๏ผ่ฎญ็ป้็ไธ้จๅ๏ผ่ฟ่ก่งฃ็ ใ
input_seq = encoder_input_data[seq_index: seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print('-')
print('Input sentence:', input_texts[seq_index])
print('Decoded sentence:', decoded_sentence)
```
| keras-docs-zh/sources/examples/lstm_seq2seq.md/0 | {
"file_path": "keras-docs-zh/sources/examples/lstm_seq2seq.md",
"repo_id": "keras-docs-zh",
"token_count": 4553
} | 83 |
# Keras ็ฅ็ป้ฃๆ ผ่ฝฌๆขใ
ไฝฟ็จไปฅไธๅฝไปค่ฟ่ก่ๆฌ๏ผ
```
python neural_style_transfer.py path_to_your_base_image.jpg path_to_your_reference.jpg prefix_for_results
```
ไพๅฆ๏ผ
```
python neural_style_transfer.py img/tuebingen.jpg img/starry_night.jpg results/my_result
```
ๅฏ้ๅๆฐ๏ผ
```
--iter: ่ฆๆๅฎ่ฟ่กๆ ทๅผ่ฝฌ็งป็่ฟญไปฃๆฌกๆฐ๏ผ้ป่ฎคไธบ 10๏ผ
--content_weight: ๅ
ๅฎนๆๅคฑ็ๆ้๏ผ้ป่ฎคไธบ 0.025๏ผ
--style_weight: ่ตไบๆ ทๅผๆๅคฑ็ๆ้๏ผ้ป่ฎคไธบ 1.0๏ผ
--tv_weight: ่ตไบๆปๅๅๆๅคฑ็ๆ้๏ผ้ป่ฎคไธบ 1.0๏ผ
```
ไธบไบๆ้ซ้ๅบฆ๏ผๆๅฅฝๅจ GPU ไธ่ฟ่กๆญค่ๆฌใ
็คบไพ็ปๆ: https://twitter.com/fchollet/status/686631033085677568
# ่ฏฆๆ
ๆ ทๅผ่ฝฌๆขๅ
ๆฌ็ๆๅ
ทๆไธๅบๆฌๅพๅ็ธๅ็ "ๅ
ๅฎน"๏ผไฝๅ
ทๆไธๅๅพ็๏ผ้ๅธธๆฏ่บๆฏ็๏ผ็ "ๆ ทๅผ" ็ๅพๅใ
่ฟๆฏ้่ฟไผๅๅ
ทๆ 3 ไธชๆๅ็ๆๅคฑๅฝๆฐๆฅๅฎ็ฐ็๏ผๆ ทๅผๆๅคฑ๏ผๅ
ๅฎนๆๅคฑๅๆปๅๅๆๅคฑ๏ผ
- ๆปๅๅๆๅคฑๅจ็ปๅๅพๅ็ๅ็ด ไน้ดๅผบๅ ไบๅฑ้จ็ฉบ้ด่ฟ็ปญๆง๏ผไฝฟๅ
ถๅ
ทๆ่ง่ง่ฟ่ดฏๆงใ
- ๆ ทๅผๆๅคฑๆฏๆทฑๅบฆๅญฆไน ็ๆ นๆบ-ไฝฟ็จๆทฑๅบฆๅท็งฏ็ฅ็ป็ฝ็ปๅฎไนๆทฑๅบฆๅญฆไน ใ
็ฒพ็กฎๅฐ๏ผๅฎๅ
ๆฌไปๅท็งฏ็ฝ็ป็ไธๅๅฑ๏ผๅจ ImageNet ไธ่ฎญ็ป๏ผๆๅ็ๅบ็กๅพๅ่กจ็คบ
ๅฝขๅผๅๆ ทๅผๅ่ๅพๅ่กจ็คบๅฝขๅผ็ Gram ็ฉ้ตไน้ด็ L2 ่ท็ฆปไนๅใ
ๆปไฝๆ่ทฏๆฏๅจไธๅ็็ฉบ้ดๆฏไพ๏ผ็ธๅฝๅคง็ๆฏไพ-็ฑๆ่่็ๅพๅฑ็ๆทฑๅบฆๅฎไน๏ผไธๆ่ท้ข่ฒ/็บน็ไฟกๆฏใ
- ๅ
ๅฎนๆๅคฑๆฏๅบ็กๅพๅ๏ผไป่พๆทฑๅฑๆๅ๏ผ็็นๅพไธ็ปๅๅพๅ็็นๅพไน้ด็ L2 ่ท็ฆป๏ผไป่ไฝฟ็ๆ็ๅพๅ่ถณๅคๆฅ่ฟๅๅงๅพๅใ
# ๅ่ๆ็ฎ
- [A Neural Algorithm of Artistic Style](http://arxiv.org/abs/1508.06576)
```python
from __future__ import print_function
from keras.preprocessing.image import load_img, save_img, img_to_array
import numpy as np
from scipy.optimize import fmin_l_bfgs_b
import time
import argparse
from keras.applications import vgg19
from keras import backend as K
parser = argparse.ArgumentParser(description='Neural style transfer with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
help='Path to the image to transform.')
parser.add_argument('style_reference_image_path', metavar='ref', type=str,
help='Path to the style reference image.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
help='Prefix for the saved results.')
parser.add_argument('--iter', type=int, default=10, required=False,
help='Number of iterations to run.')
parser.add_argument('--content_weight', type=float, default=0.025, required=False,
help='Content weight.')
parser.add_argument('--style_weight', type=float, default=1.0, required=False,
help='Style weight.')
parser.add_argument('--tv_weight', type=float, default=1.0, required=False,
help='Total Variation weight.')
args = parser.parse_args()
base_image_path = args.base_image_path
style_reference_image_path = args.style_reference_image_path
result_prefix = args.result_prefix
iterations = args.iter
# ่ฟไบๆฏไธๅๆๅคฑๆๅ็ๆ้
total_variation_weight = args.tv_weight
style_weight = args.style_weight
content_weight = args.content_weight
# ็ๆๅพ็็ๅฐบๅฏธใ
width, height = load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)
# util ๅฝๆฐๅฏๅฐๅพ็ๆๅผ๏ผ่ฐๆดๅคงๅฐๅนถๅฐๅ
ถๆ ผๅผๅไธบ้ๅฝ็ๅผ ้
def preprocess_image(image_path):
img = load_img(image_path, target_size=(img_nrows, img_ncols))
img = img_to_array(img)
img = np.expand_dims(img, axis=0)
img = vgg19.preprocess_input(img)
return img
# util ๅฝๆฐๅฐๅผ ้่ฝฌๆขไธบๆๆๅพๅ
def deprocess_image(x):
if K.image_data_format() == 'channels_first':
x = x.reshape((3, img_nrows, img_ncols))
x = x.transpose((1, 2, 0))
else:
x = x.reshape((img_nrows, img_ncols, 3))
# ้่ฟๅนณๅๅ็ด ๅป้ค้ถไธญๅฟ
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
# 'BGR'->'RGB'
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype('uint8')
return x
# ๅพๅฐๆไปฌๅพๅ็ๅผ ้่กจ็คบ
base_image = K.variable(preprocess_image(base_image_path))
style_reference_image = K.variable(preprocess_image(style_reference_image_path))
# ่ฟๅฐๅ
ๅซๆไปฌ็ๆ็ๅพๅ
if K.image_data_format() == 'channels_first':
combination_image = K.placeholder((1, 3, img_nrows, img_ncols))
else:
combination_image = K.placeholder((1, img_nrows, img_ncols, 3))
# ๅฐ 3 ๅผ ๅพๅๅๅนถไธบไธไธช Keras ๅผ ้
input_tensor = K.concatenate([base_image,
style_reference_image,
combination_image], axis=0)
# ไปฅๆไปฌ็ 3 ๅผ ๅพๅไธบ่พๅ
ฅๆๅปบ VGG19 ็ฝ็ป
# ่ฏฅๆจกๅๅฐๅ ่ฝฝ้ข่ฎญ็ป็ ImageNet ๆ้
model = vgg19.VGG19(input_tensor=input_tensor,
weights='imagenet', include_top=False)
print('Model loaded.')
# ่ทๅๆฏไธช "ๅ
ณ้ฎ" ๅฑ็็ฌฆๅท่พๅบ๏ผๆไปฌไธบๅฎไปฌๆๅฎไบๅฏไธ็ๅ็งฐ๏ผใ
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# ่ฎก็ฎ็ฅ็ป้ฃๆ ผๆๅคฑ
# ้ฆๅ
๏ผๆไปฌ้่ฆๅฎไน 4 ไธช util ๅฝๆฐ
# ๅพๅๅผ ้็ gram ็ฉ้ต๏ผๆ็นๅพ้็ๅค้จไน็งฏ๏ผ
def gram_matrix(x):
assert K.ndim(x) == 3
if K.image_data_format() == 'channels_first':
features = K.batch_flatten(x)
else:
features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
gram = K.dot(features, K.transpose(features))
return gram
# "ๆ ทๅผๆๅคฑ" ็จไบๅจ็ๆ็ๅพๅไธญไฟๆๅ่ๅพๅ็ๆ ทๅผใ
# ๅฎๅบไบๆฅ่ชๆ ทๅผๅ่ๅพๅๅ็ๆ็ๅพๅ็็นๅพๅพ็ gram ็ฉ้ต๏ผๆ่ทๆ ทๅผ๏ผ
def style_loss(style, combination):
assert K.ndim(style) == 3
assert K.ndim(combination) == 3
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_nrows * img_ncols
return K.sum(K.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))
# an auxiliary loss function designed to maintain the "content" of the base image in the generated image
def content_loss(base, combination):
return K.sum(K.square(combination - base))
# the 3rd loss function, total variation loss, designed to keep the generated image locally coherent
def total_variation_loss(x):
assert K.ndim(x) == 4
if K.image_data_format() == 'channels_first':
a = K.square(
x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, 1:, :img_ncols - 1])
b = K.square(
x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, :img_nrows - 1, 1:])
else:
a = K.square(
x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])
b = K.square(
x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])
return K.sum(K.pow(a + b, 1.25))
# combine these loss functions into a single scalar
loss = K.variable(0.0)
layer_features = outputs_dict['block5_conv2']
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss = loss + content_weight * content_loss(base_image_features,
combination_features)
feature_layers = ['block1_conv1', 'block2_conv1',
'block3_conv1', 'block4_conv1',
'block5_conv1']
for layer_name in feature_layers:
layer_features = outputs_dict[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss = loss + (style_weight / len(feature_layers)) * sl
loss = loss + total_variation_weight * total_variation_loss(combination_image)
# get the gradients of the generated image wrt the loss
grads = K.gradients(loss, combination_image)
outputs = [loss]
if isinstance(grads, (list, tuple)):
outputs += grads
else:
outputs.append(grads)
f_outputs = K.function([combination_image], outputs)
def eval_loss_and_grads(x):
if K.image_data_format() == 'channels_first':
x = x.reshape((1, 3, img_nrows, img_ncols))
else:
x = x.reshape((1, img_nrows, img_ncols, 3))
outs = f_outputs([x])
loss_value = outs[0]
if len(outs[1:]) == 1:
grad_values = outs[1].flatten().astype('float64')
else:
grad_values = np.array(outs[1:]).flatten().astype('float64')
return loss_value, grad_values
# this Evaluator class makes it possible to compute loss and gradients in one pass
# while retrieving them via two separate functions, "loss" and "grads".
# This is done because scipy.optimize requires separate functions for loss and gradients,
# but computing them separately would be inefficient.
class Evaluator(object):
def __init__(self):
self.loss_value = None
self.grads_values = None
def loss(self, x):
assert self.loss_value is None
loss_value, grad_values = eval_loss_and_grads(x)
self.loss_value = loss_value
self.grad_values = grad_values
return self.loss_value
def grads(self, x):
assert self.loss_value is not None
grad_values = np.copy(self.grad_values)
self.loss_value = None
self.grad_values = None
return grad_values
evaluator = Evaluator()
# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss
x = preprocess_image(base_image_path)
for i in range(iterations):
print('Start of iteration', i)
start_time = time.time()
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
fprime=evaluator.grads, maxfun=20)
print('Current loss value:', min_val)
    # save current generated image
img = deprocess_image(x.copy())
fname = result_prefix + '_at_iteration_%d.png' % i
save_img(fname, img)
end_time = time.time()
print('Image saved as', fname)
print('Iteration %d completed in %ds' % (i, end_time - start_time))
``` | keras-docs-zh/sources/examples/neural_style_transfer.md/0 | {
"file_path": "keras-docs-zh/sources/examples/neural_style_transfer.md",
"repo_id": "keras-docs-zh",
"token_count": 5188
} | 84 |
## Usage of loss functions
A loss function (or objective function, or optimization score function) is one of the two parameters required to compile a model:
```python
model.compile(loss='mean_squared_error', optimizer='sgd')
```
```python
from keras import losses
model.compile(loss=losses.mean_squared_error, optimizer='sgd')
```
You can either pass the name of an existing loss function, or a TensorFlow/Theano symbolic function.
The symbolic function returns a scalar for each data point and takes the following two arguments:
- __y_true__: True labels. TensorFlow/Theano tensor.
- __y_pred__: Predictions. TensorFlow/Theano tensor of the same shape as y_true.
The actual optimized objective is the mean of the output array across all data points.
For a few examples of such functions, check out the [losses source](https://github.com/keras-team/keras/blob/master/keras/losses.py).
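For illustration, here is a minimal sketch of a custom loss written with the backend API (the function name is arbitrary, and the model is assumed to be defined elsewhere):
```python
from keras import backend as K

def my_mean_squared_error(y_true, y_pred):
    # Return one scalar loss value per data point;
    # Keras then averages these values over the batch.
    return K.mean(K.square(y_pred - y_true), axis=-1)

model.compile(loss=my_mean_squared_error, optimizer='sgd')
```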
## Available loss functions
### mean_squared_error
```python
keras.losses.mean_squared_error(y_true, y_pred)
```
----
### mean_absolute_error
```python
keras.losses.mean_absolute_error(y_true, y_pred)
```
----
### mean_absolute_percentage_error
```python
keras.losses.mean_absolute_percentage_error(y_true, y_pred)
```
----
### mean_squared_logarithmic_error
```python
keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
```
----
### squared_hinge
```python
keras.losses.squared_hinge(y_true, y_pred)
```
----
### hinge
```python
keras.losses.hinge(y_true, y_pred)
```
----
### categorical_hinge
```python
keras.losses.categorical_hinge(y_true, y_pred)
```
----
### logcosh
```python
keras.losses.logcosh(y_true, y_pred)
```
Logarithm of the hyperbolic cosine of the prediction error.
`log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
__Arguments__
- __y_true__: Tensor of true targets.
- __y_pred__: Tensor of predicted targets.
__Returns__
Tensor with one scalar loss entry per sample.
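The small-`x` approximation above can be checked numerically; this standalone sketch uses NumPy only and is not part of the Keras API:
```python
import numpy as np

x = 0.01
print(np.log(np.cosh(x)))  # ~5.0e-05
print(x ** 2 / 2)          # 5.0e-05
```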
----
### huber_loss
```python
keras.losses.huber_loss(y_true, y_pred, delta=1.0)
```
---
### categorical_crossentropy
```python
keras.losses.categorical_crossentropy(y_true, y_pred, from_logits=False, label_smoothing=0)
```
----
### sparse_categorical_crossentropy
```python
keras.losses.sparse_categorical_crossentropy(y_true, y_pred, from_logits=False, axis=-1)
```
----
### binary_crossentropy
```python
keras.losses.binary_crossentropy(y_true, y_pred, from_logits=False, label_smoothing=0)
```
----
### kullback_leibler_divergence
```python
keras.losses.kullback_leibler_divergence(y_true, y_pred)
```
----
### poisson
```python
keras.losses.poisson(y_true, y_pred)
```
----
### cosine_proximity
```python
keras.losses.cosine_proximity(y_true, y_pred, axis=-1)
```
---
### is_categorical_crossentropy
```python
keras.losses.is_categorical_crossentropy(loss)
```
----
**Note**: when using the `categorical_crossentropy` loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all zeros except for a 1 at the index corresponding to the class of the sample). In order to convert *integer targets* into *categorical targets*, you can use the Keras utility function `to_categorical`:
```python
from keras.utils.np_utils import to_categorical
categorical_labels = to_categorical(int_labels, num_classes=None)
```
When using the sparse_categorical_crossentropy loss, your targets should be integers. If you have categorical targets, you should use categorical_crossentropy instead.
categorical_crossentropy is another term for multi-class log loss.
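A minimal sketch of the difference between the two target formats (the labels and class count below are purely illustrative):
```python
import numpy as np
from keras.utils.np_utils import to_categorical

int_labels = np.array([0, 2, 1])  # integer targets -> sparse_categorical_crossentropy
cat_labels = to_categorical(int_labels, num_classes=3)  # one-hot targets -> categorical_crossentropy
```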
| keras-docs-zh/sources/losses.md/0 | {
"file_path": "keras-docs-zh/sources/losses.md",
"repo_id": "keras-docs-zh",
"token_count": 1896
} | 85 |
"""
Title: Text Generation using FNet
Author: [Darshan Deshpande](https://twitter.com/getdarshan)
Date created: 2021/10/05
Last modified: 2021/10/05
Description: FNet transformer for text generation in Keras.
Accelerator: GPU
"""
"""
## Introduction
The original transformer implementation (Vaswani et al., 2017) was one of the major
breakthroughs in Natural Language Processing, giving rise to important architectures such as BERT and GPT.
However, the drawback of these architectures is
that the self-attention mechanism they use is computationally expensive. The FNet
architecture proposes to replace this self-attention with a leaner mechanism:
a Fourier transformation-based linear mixer for input tokens.
The FNet model was able to achieve 92-97% of BERT's accuracy while training 80% faster on
GPUs and almost 70% faster on TPUs. This type of design provides an efficient and small
model size, leading to faster inference times.
In this example, we will implement and train this architecture on the Cornell Movie
Dialog corpus to show the applicability of this model to text generation.
"""
"""
## Imports
"""
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import os
# Defining hyperparameters
VOCAB_SIZE = 8192
MAX_SAMPLES = 50000
BUFFER_SIZE = 20000
MAX_LENGTH = 40
EMBED_DIM = 256
LATENT_DIM = 512
NUM_HEADS = 8
BATCH_SIZE = 64
"""
## Loading data
We will be using the Cornell Dialog Corpus. We will parse the movie conversations into
questions and answers sets.
"""
path_to_zip = keras.utils.get_file(
"cornell_movie_dialogs.zip",
origin="http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip",
extract=True,
)
path_to_dataset = os.path.join(
os.path.dirname(path_to_zip), "cornell movie-dialogs corpus"
)
path_to_movie_lines = os.path.join(path_to_dataset, "movie_lines.txt")
path_to_movie_conversations = os.path.join(path_to_dataset, "movie_conversations.txt")
def load_conversations():
# Helper function for loading the conversation splits
id2line = {}
with open(path_to_movie_lines, errors="ignore") as file:
lines = file.readlines()
for line in lines:
parts = line.replace("\n", "").split(" +++$+++ ")
id2line[parts[0]] = parts[4]
inputs, outputs = [], []
with open(path_to_movie_conversations, "r") as file:
lines = file.readlines()
for line in lines:
parts = line.replace("\n", "").split(" +++$+++ ")
# get conversation in a list of line ID
conversation = [line[1:-1] for line in parts[3][1:-1].split(", ")]
for i in range(len(conversation) - 1):
inputs.append(id2line[conversation[i]])
outputs.append(id2line[conversation[i + 1]])
if len(inputs) >= MAX_SAMPLES:
return inputs, outputs
return inputs, outputs
questions, answers = load_conversations()
# Splitting training and validation sets
train_dataset = tf.data.Dataset.from_tensor_slices((questions[:40000], answers[:40000]))
val_dataset = tf.data.Dataset.from_tensor_slices((questions[40000:], answers[40000:]))
"""
### Preprocessing and Tokenization
"""
def preprocess_text(sentence):
sentence = tf.strings.lower(sentence)
# Adding a space between the punctuation and the last word to allow better tokenization
sentence = tf.strings.regex_replace(sentence, r"([?.!,])", r" \1 ")
# Replacing multiple continuous spaces with a single space
sentence = tf.strings.regex_replace(sentence, r"\s\s+", " ")
# Replacing non english words with spaces
sentence = tf.strings.regex_replace(sentence, r"[^a-z?.!,]+", " ")
sentence = tf.strings.strip(sentence)
sentence = tf.strings.join(["[start]", sentence, "[end]"], separator=" ")
return sentence
vectorizer = layers.TextVectorization(
VOCAB_SIZE,
standardize=preprocess_text,
output_mode="int",
output_sequence_length=MAX_LENGTH,
)
# We will adapt the vectorizer to both the questions and answers
# This dataset is batched to parallelize and speed up the process
vectorizer.adapt(tf.data.Dataset.from_tensor_slices((questions + answers)).batch(128))
"""
### Tokenizing and padding sentences using `TextVectorization`
"""
def vectorize_text(inputs, outputs):
inputs, outputs = vectorizer(inputs), vectorizer(outputs)
# One extra padding token to the right to match the output shape
outputs = tf.pad(outputs, [[0, 1]])
return (
{"encoder_inputs": inputs, "decoder_inputs": outputs[:-1]},
{"outputs": outputs[1:]},
)
train_dataset = train_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
val_dataset = val_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
train_dataset = (
train_dataset.cache()
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE)
.prefetch(tf.data.AUTOTUNE)
)
val_dataset = val_dataset.cache().batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
"""
## Creating the FNet Encoder
The FNet paper proposes a replacement for the standard attention mechanism used by the
Transformer architecture (Vaswani et al., 2017).

The outputs of the FFT layer are complex numbers. To avoid dealing with complex values,
only the real part of the output is extracted.
The dense layers that follow the Fourier transformation act as convolutions applied on
the frequency domain.
"""
class FNetEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.dense_proj = keras.Sequential(
[
layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs):
# Casting the inputs to complex64
inp_complex = tf.cast(inputs, tf.complex64)
# Projecting the inputs to the frequency domain using FFT2D and
# extracting the real part of the output
fft = tf.math.real(tf.signal.fft2d(inp_complex))
proj_input = self.layernorm_1(inputs + fft)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
"""
## Creating the Decoder
The decoder architecture remains the same as the one proposed by (Vaswani et al., 2017)
in the original transformer architecture, consisting of an embedding, positional
encoding, two masked multi-head attention layers and finally the dense output layers.
The architecture that follows is taken from
[Deep Learning with Python, second edition, chapter 11](https://www.manning.com/books/deep-learning-with-python-second-edition).
"""
class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs):
super().__init__(**kwargs)
self.token_embeddings = layers.Embedding(
input_dim=vocab_size, output_dim=embed_dim
)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=embed_dim
)
self.sequence_length = sequence_length
self.vocab_size = vocab_size
self.embed_dim = embed_dim
def call(self, inputs):
length = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def compute_mask(self, inputs, mask=None):
return tf.math.not_equal(inputs, 0)
class FNetDecoder(layers.Layer):
def __init__(self, embed_dim, latent_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.latent_dim = latent_dim
self.num_heads = num_heads
self.attention_1 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.attention_2 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.dense_proj = keras.Sequential(
[
layers.Dense(latent_dim, activation="relu"),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
self.layernorm_3 = layers.LayerNormalization()
self.supports_masking = True
def call(self, inputs, encoder_outputs, mask=None):
causal_mask = self.get_causal_attention_mask(inputs)
if mask is not None:
padding_mask = tf.cast(mask[:, tf.newaxis, :], dtype="int32")
padding_mask = tf.minimum(padding_mask, causal_mask)
attention_output_1 = self.attention_1(
query=inputs, value=inputs, key=inputs, attention_mask=causal_mask
)
out_1 = self.layernorm_1(inputs + attention_output_1)
attention_output_2 = self.attention_2(
query=out_1,
value=encoder_outputs,
key=encoder_outputs,
attention_mask=padding_mask,
)
out_2 = self.layernorm_2(out_1 + attention_output_2)
proj_output = self.dense_proj(out_2)
return self.layernorm_3(out_2 + proj_output)
def get_causal_attention_mask(self, inputs):
input_shape = tf.shape(inputs)
batch_size, sequence_length = input_shape[0], input_shape[1]
i = tf.range(sequence_length)[:, tf.newaxis]
j = tf.range(sequence_length)
mask = tf.cast(i >= j, dtype="int32")
mask = tf.reshape(mask, (1, input_shape[1], input_shape[1]))
mult = tf.concat(
[tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)],
axis=0,
)
return tf.tile(mask, mult)
def create_model():
encoder_inputs = keras.Input(shape=(None,), dtype="int32", name="encoder_inputs")
x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
encoder_outputs = FNetEncoder(EMBED_DIM, LATENT_DIM)(x)
encoder = keras.Model(encoder_inputs, encoder_outputs)
decoder_inputs = keras.Input(shape=(None,), dtype="int32", name="decoder_inputs")
encoded_seq_inputs = keras.Input(
shape=(None, EMBED_DIM), name="decoder_state_inputs"
)
x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
x = FNetDecoder(EMBED_DIM, LATENT_DIM, NUM_HEADS)(x, encoded_seq_inputs)
x = layers.Dropout(0.5)(x)
decoder_outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(x)
decoder = keras.Model(
[decoder_inputs, encoded_seq_inputs], decoder_outputs, name="outputs"
)
decoder_outputs = decoder([decoder_inputs, encoder_outputs])
fnet = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs, name="fnet")
return fnet
"""
## Creating and Training the model
"""
fnet = create_model()
fnet.compile("adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
"""
Here, the `epochs` parameter is set to a single epoch, but in practice the model will take around
**20-30 epochs** of training to start outputting comprehensible sentences. Although accuracy
is not a good measure for this task, we will use it just to get a hint of the improvement
of the network.
"""
fnet.fit(train_dataset, epochs=1, validation_data=val_dataset)
"""
## Performing inference
"""
VOCAB = vectorizer.get_vocabulary()
def decode_sentence(input_sentence):
# Mapping the input sentence to tokens and adding start and end tokens
tokenized_input_sentence = vectorizer(
tf.constant("[start] " + preprocess_text(input_sentence) + " [end]")
)
# Initializing the initial sentence consisting of only the start token.
tokenized_target_sentence = tf.expand_dims(VOCAB.index("[start]"), 0)
decoded_sentence = ""
for i in range(MAX_LENGTH):
# Get the predictions
predictions = fnet.predict(
{
"encoder_inputs": tf.expand_dims(tokenized_input_sentence, 0),
"decoder_inputs": tf.expand_dims(
tf.pad(
tokenized_target_sentence,
[[0, MAX_LENGTH - tf.shape(tokenized_target_sentence)[0]]],
),
0,
),
}
)
# Calculating the token with maximum probability and getting the corresponding word
sampled_token_index = tf.argmax(predictions[0, i, :])
sampled_token = VOCAB[sampled_token_index.numpy()]
# If sampled token is the end token then stop generating and return the sentence
if tf.equal(sampled_token_index, VOCAB.index("[end]")):
break
decoded_sentence += sampled_token + " "
tokenized_target_sentence = tf.concat(
[tokenized_target_sentence, [sampled_token_index]], 0
)
return decoded_sentence
decode_sentence("Where have you been all this time?")
"""
## Conclusion
This example shows how to train and perform inference using the FNet model.
For getting insight into the architecture or for further reading, you can refer to:
1. [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824v3)
(Lee-Thorp et al., 2021)
2. [Attention Is All You Need](https://arxiv.org/abs/1706.03762v5) (Vaswani et al.,
2017)
Thanks to Franรงois Chollet for his Keras example on
[English-to-Spanish translation with a sequence-to-sequence Transformer](https://keras.io/examples/nlp/neural_machine_translation_with_transformer/)
from which the decoder implementation was extracted.
"""
| keras-io/examples/generative/text_generation_fnet.py/0 | {
"file_path": "keras-io/examples/generative/text_generation_fnet.py",
"repo_id": "keras-io",
"token_count": 5469
} | 86 |
"""
Title: Knowledge distillation recipes
Author: [Sayak Paul](https://twitter.com/RisingSayak)
Date created: 2021/08/01
Last modified: 2021/08/01
Description: Training better student models via knowledge distillation with function matching.
Accelerator: GPU
"""
"""
## Introduction
Knowledge distillation ([Hinton et al.](https://arxiv.org/abs/1503.02531)) is a technique
that enables us to compress larger models into smaller ones. This allows us to reap the
benefits of high performing larger models, while reducing storage and memory costs and
achieving higher inference speed:
* Smaller models -> smaller memory footprint
* Reduced complexity -> fewer floating-point operations (FLOPs)
In [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237),
Beyer et al. investigate various existing setups for performing knowledge distillation
and show that all of them lead to sub-optimal performance. Due to this,
practitioners often settle for other alternatives (quantization, pruning, weight
clustering, etc.) when developing production systems that are resource-constrained.
Beyer et al. investigate how we can improve the student models that come out
of the knowledge distillation process and always match the performance of
their teacher models. In this example, we will study the recipes introduced by them, using
the [Flowers102 dataset](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/). As a
reference, with these recipes, the authors were able to produce a ResNet50 model that
achieves 82.8% accuracy on the ImageNet-1k dataset.
In case you need a refresher on knowledge distillation and want to study how it is
implemented in Keras, you can refer to
[this example](https://keras.io/examples/vision/knowledge_distillation/).
You can also follow
[this example](https://keras.io/examples/vision/consistency_training/)
that shows an extension of knowledge distillation applied to consistency training.
"""
"""
## Imports
"""
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
"""
## Hyperparameters and constants
"""
AUTO = tf.data.AUTOTUNE # Used to dynamically adjust parallelism.
BATCH_SIZE = 64
# Comes from Table 4 and "Training setup" section.
TEMPERATURE = 10 # Used to soften the logits before they go to softmax.
INIT_LR = 0.003 # Initial learning rate that will be decayed over the training period.
WEIGHT_DECAY = 0.001 # Used for regularization.
CLIP_THRESHOLD = 1.0 # Used for clipping the gradients by L2-norm.
# We will first resize the training images to a bigger size and then we will take
# random crops of a lower size.
BIGGER = 160
RESIZE = 128
"""
## Load the Flowers102 dataset
"""
train_ds, validation_ds, test_ds = tfds.load(
"oxford_flowers102", split=["train", "validation", "test"], as_supervised=True
)
print(f"Number of training examples: {train_ds.cardinality()}.")
print(f"Number of validation examples: {validation_ds.cardinality()}.")
print(f"Number of test examples: {test_ds.cardinality()}.")
"""
## Teacher model
As is common with any distillation technique, it's important to first train a
well-performing teacher model which is usually larger than the subsequent student model.
The authors distill a BiT ResNet152x2 model (teacher) into a BiT ResNet50 model
(student).
BiT stands for Big Transfer and was introduced in
[Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370).
BiT variants of ResNets use Group Normalization ([Wu et al.](https://arxiv.org/abs/1803.08494))
and Weight Standardization ([Qiao et al.](https://arxiv.org/abs/1903.10520v2))
in place of Batch Normalization ([Ioffe et al.](https://arxiv.org/abs/1502.03167)).
In order to limit the time it takes to run this example, we will be using a BiT
ResNet101x3 already trained on the Flowers102 dataset. You can refer to
[this notebook](https://github.com/sayakpaul/FunMatch-Distillation/blob/main/train_bit.ipynb)
to learn more about the training process. This model reaches 98.18% accuracy on the
test set of Flowers102.
The model weights are hosted on Kaggle as a dataset.
To download the weights, follow these steps:
1. Create an account on Kaggle [here](https://www.kaggle.com).
2. Go to the "Account" tab of your [user profile](https://www.kaggle.com/account).
3. Select "Create API Token". This will trigger the download of `kaggle.json`, a file
containing your API credentials.
4. From that JSON file, copy your Kaggle username and API key.
Now run the following:
```python
import os
os.environ["KAGGLE_USERNAME"] = "" # TODO: enter your Kaggle user name here
os.environ["KAGGLE_KEY"] = "" # TODO: enter your Kaggle key here
```
Once the environment variables are set, run:
```shell
$ kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102
$ unzip -qq bitresnet101x3flowers102.zip
```
This should generate a folder named `T-r101x3-128` which is essentially a teacher
[`SavedModel`](https://www.tensorflow.org/guide/saved_model).
"""
os.environ["KAGGLE_USERNAME"] = "" # TODO: enter your Kaggle user name here
os.environ["KAGGLE_KEY"] = "" # TODO: enter your Kaggle API key here
"""shell
!kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102
"""
"""shell
!unzip -qq bitresnet101x3flowers102.zip
"""
# Since the teacher model is not going to be trained further we make
# it non-trainable.
teacher_model = keras.layers.TFSMLayer(
"/home/jupyter/keras-io/examples/keras_recipes/T-r101x3-128"
)
teacher_model.trainable = False
"""
## The "function matching" recipe
To train a high-quality student model, the authors propose the following changes to the
student training workflow:
* Use an aggressive variant of MixUp ([Zhang et al.](https://arxiv.org/abs/1710.09412)).
This is done by sampling the `alpha` parameter from a uniform distribution instead of a
beta distribution. MixUp is used here in order to help the student model capture the
function underlying the teacher model. MixUp linearly interpolates between different
samples across the data manifold. So the rationale here is that if the student is trained to
fit these interpolated samples, it should be able to match the teacher model better. To incorporate more
invariance MixUp is coupled with "Inception-style" cropping
([Szegedy et al.](https://arxiv.org/abs/1409.4842)). This is where the
"function matching" term makes its way in the
[original paper](https://arxiv.org/abs/2106.05237).
* Unlike other works ([Noisy Student Training](https://arxiv.org/abs/1911.04252) for
example), both the teacher and student models receive the same copy of an image, which is
mixed up and randomly cropped. By providing the same inputs to both the models, the
authors make the teacher consistent with the student.
* With MixUp, we are essentially introducing a strong form of regularization when
training the student. As such, it should be trained for a
relatively long period of time (1000 epochs at least). Since the student is trained with
strong regularization, the risk of overfitting due to a longer training
schedule are also mitigated.
In summary, one needs to be consistent and patient while training the student model.
"""
"""
## Data input pipeline
"""
def mixup(images, labels):
alpha = tf.random.uniform([], 0, 1)
mixedup_images = alpha * images + (1 - alpha) * tf.reverse(images, axis=[0])
# The labels do not matter here since they are NOT used during
# training.
return mixedup_images, labels
def preprocess_image(image, label, train=True):
image = tf.cast(image, tf.float32) / 255.0
if train:
image = tf.image.resize(image, (BIGGER, BIGGER))
image = tf.image.random_crop(image, (RESIZE, RESIZE, 3))
image = tf.image.random_flip_left_right(image)
else:
# Central fraction amount is from here:
# https://git.io/J8Kda.
image = tf.image.central_crop(image, central_fraction=0.875)
image = tf.image.resize(image, (RESIZE, RESIZE))
return image, label
def prepare_dataset(dataset, train=True, batch_size=BATCH_SIZE):
if train:
dataset = dataset.map(preprocess_image, num_parallel_calls=AUTO)
dataset = dataset.shuffle(BATCH_SIZE * 10)
else:
dataset = dataset.map(
lambda x, y: (preprocess_image(x, y, train)), num_parallel_calls=AUTO
)
dataset = dataset.batch(batch_size)
if train:
dataset = dataset.map(mixup, num_parallel_calls=AUTO)
dataset = dataset.prefetch(AUTO)
return dataset
"""
Note that for brevity, we used mild crops for the training set but in practice
"Inception-style" preprocessing should be applied. You can refer to
[this script](https://github.com/sayakpaul/FunMatch-Distillation/blob/main/crop_resize.py)
for a closer implementation. Also, _**the ground-truth labels are not used for
training the student.**_
"""
train_ds = prepare_dataset(train_ds, True)
validation_ds = prepare_dataset(validation_ds, False)
test_ds = prepare_dataset(test_ds, False)
"""
## Visualization
"""
sample_images, _ = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(sample_images[n].numpy())
plt.axis("off")
plt.show()
"""
## Student model
For the purpose of this example, we will use the standard ResNet50V2
([He et al.](https://arxiv.org/abs/1603.05027)).
"""
def get_resnetv2():
resnet_v2 = keras.applications.ResNet50V2(
weights=None,
input_shape=(RESIZE, RESIZE, 3),
classes=102,
classifier_activation="linear",
)
return resnet_v2
get_resnetv2().count_params()
"""
Compared to the teacher model, this model has 358 Million fewer parameters.
"""
"""
## Distillation utility
We will reuse some code from
[this example](https://keras.io/examples/vision/knowledge_distillation/)
on knowledge distillation.
"""
class Distiller(tf.keras.Model):
def __init__(self, student, teacher):
super().__init__()
self.student = student
self.teacher = teacher
self.loss_tracker = keras.metrics.Mean(name="distillation_loss")
@property
def metrics(self):
metrics = super().metrics
metrics.append(self.loss_tracker)
return metrics
def compile(
self,
optimizer,
metrics,
distillation_loss_fn,
temperature=TEMPERATURE,
):
super().compile(optimizer=optimizer, metrics=metrics)
self.distillation_loss_fn = distillation_loss_fn
self.temperature = temperature
def train_step(self, data):
# Unpack data
x, _ = data
# Forward pass of teacher
teacher_predictions = self.teacher(x, training=False)
with tf.GradientTape() as tape:
# Forward pass of student
student_predictions = self.student(x, training=True)
# Compute loss
distillation_loss = self.distillation_loss_fn(
tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
tf.nn.softmax(student_predictions / self.temperature, axis=1),
)
# Compute gradients
trainable_vars = self.student.trainable_variables
gradients = tape.gradient(distillation_loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Report progress
self.loss_tracker.update_state(distillation_loss)
return {"distillation_loss": self.loss_tracker.result()}
def test_step(self, data):
# Unpack data
x, y = data
# Forward passes
teacher_predictions = self.teacher(x, training=False)
student_predictions = self.student(x, training=False)
# Calculate the loss
distillation_loss = self.distillation_loss_fn(
tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
tf.nn.softmax(student_predictions / self.temperature, axis=1),
)
# Report progress
self.loss_tracker.update_state(distillation_loss)
self.compiled_metrics.update_state(y, student_predictions)
results = {m.name: m.result() for m in self.metrics}
return results
"""
## Learning rate schedule
A warmup cosine learning rate schedule is used in the paper. This schedule is also
typical for many pre-training methods especially for computer vision.
"""
# Some code is taken from:
# https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule):
def __init__(
self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps
):
super().__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.pi = tf.constant(np.pi)
def __call__(self, step):
if self.total_steps < self.warmup_steps:
raise ValueError("Total_steps must be larger or equal to warmup_steps.")
cos_annealed_lr = tf.cos(
self.pi
* (tf.cast(step, tf.float32) - self.warmup_steps)
/ float(self.total_steps - self.warmup_steps)
)
learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr)
if self.warmup_steps > 0:
if self.learning_rate_base < self.warmup_learning_rate:
raise ValueError(
"Learning_rate_base must be larger or equal to "
"warmup_learning_rate."
)
slope = (
self.learning_rate_base - self.warmup_learning_rate
) / self.warmup_steps
warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate
learning_rate = tf.where(
step < self.warmup_steps, warmup_rate, learning_rate
)
return tf.where(
step > self.total_steps, 0.0, learning_rate, name="learning_rate"
)
"""
We can now plot a graph of learning rates generated using this schedule.
"""
ARTIFICIAL_EPOCHS = 1000
ARTIFICIAL_BATCH_SIZE = 512
DATASET_NUM_TRAIN_EXAMPLES = 1020
TOTAL_STEPS = int(
DATASET_NUM_TRAIN_EXAMPLES / ARTIFICIAL_BATCH_SIZE * ARTIFICIAL_EPOCHS
)
scheduled_lrs = WarmUpCosine(
learning_rate_base=INIT_LR,
total_steps=TOTAL_STEPS,
warmup_learning_rate=0.0,
warmup_steps=1500,
)
lrs = [scheduled_lrs(step) for step in range(TOTAL_STEPS)]
plt.plot(lrs)
plt.xlabel("Step", fontsize=14)
plt.ylabel("LR", fontsize=14)
plt.show()
"""
The original paper uses at least 1000 epochs and a batch size of 512 to perform
"function matching". The objective of this example is to present a workflow to
implement the recipe and not to demonstrate the results when they are applied at full scale.
However, these recipes will transfer to the original settings from the paper. Please
refer to [this repository](https://github.com/sayakpaul/FunMatch-Distillation) if you are
interested in finding out more.
"""
"""
## Training
"""
optimizer = keras.optimizers.AdamW(
weight_decay=WEIGHT_DECAY, learning_rate=scheduled_lrs, clipnorm=CLIP_THRESHOLD
)
student_model = get_resnetv2()
distiller = Distiller(student=student_model, teacher=teacher_model)
distiller.compile(
optimizer,
metrics=[keras.metrics.SparseCategoricalAccuracy()],
distillation_loss_fn=keras.losses.KLDivergence(),
temperature=TEMPERATURE,
)
history = distiller.fit(
train_ds,
steps_per_epoch=int(np.ceil(DATASET_NUM_TRAIN_EXAMPLES / BATCH_SIZE)),
validation_data=validation_ds,
epochs=30, # This should be at least 1000.
)
student = distiller.student
student_model.compile(metrics=["accuracy"])
_, top1_accuracy = student.evaluate(test_ds)
print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%")
"""
## Results
With just 30 epochs of training, the results are nowhere near expected.
This is where the benefits of patience aka a longer training schedule
will come into play. Let's investigate what the model trained for 1000 epochs can do.
"""
"""shell
# Download the pre-trained weights.
!wget https://git.io/JBO3Y -O S-r50x1-128-1000.tar.gz
!tar xf S-r50x1-128-1000.tar.gz
"""
pretrained_student = keras.layers.TFSMLayer("S-r50x1-128-1000")
"""
This model exactly follows what the authors have used in their student models.
"""
_, top1_accuracy = pretrained_student.evaluate(test_ds)
print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%")
"""
With 1000 epochs of training, this same model leads to a top-1 accuracy of 95.54%.
There are a number of important ablations studies presented in the paper that show the
effectiveness of these recipes compared to the prior art. So if you are skeptical about
these recipes, definitely consult the paper.
"""
"""
## Note on training for longer
With TPU-based hardware infrastructure, we can train the model for 1000 epochs faster.
This does not even require adding a lot of changes to this codebase. You
are encouraged to check
[this repository](https://github.com/sayakpaul/FunMatch-Distillation)
as it presents TPU-compatible training workflows for these recipes and can be run on
[Kaggle Kernel](https://www.kaggle.com/kernels) leveraging their free TPU v3-8 hardware.
"""
| keras-io/examples/keras_recipes/better_knowledge_distillation.py/0 | {
"file_path": "keras-io/examples/keras_recipes/better_knowledge_distillation.py",
"repo_id": "keras-io",
"token_count": 6141
} | 87 |
<jupyter_start><jupyter_text>Reproducibility in Keras Models**Author:** [Frightera](https://github.com/Frightera)**Date created:** 2023/05/05**Last modified:** 2023/05/05**Description:** Demonstration of random weight initialization and reproducibility in Keras models. IntroductionThis example demonstrates how to control randomness in Keras models. Sometimesyou may want to reproduce the exact same results across runs, for experimentationpurposes or to debug a problem. Setup<jupyter_code>import json
import numpy as np
import tensorflow as tf
import keras
from keras import layers
from keras import initializers
# Set the seed using keras.utils.set_random_seed. This will set:
# 1) `numpy` seed
# 2) backend random seed
# 3) `python` random seed
keras.utils.set_random_seed(812)
# If using TensorFlow, this will make GPU ops as deterministic as possible,
# but it will affect the overall performance, so be mindful of that.
tf.config.experimental.enable_op_determinism()<jupyter_output><empty_output><jupyter_text>Weight initialization in KerasMost of the layers in Keras have `kernel_initializer` and `bias_initializer`parameters. These parameters allow you to specify the strategy used forinitializing the weights of layer variables. The following built-in initializersare available as part of `keras.initializers`:<jupyter_code>initializers_list = [
initializers.RandomNormal,
initializers.RandomUniform,
initializers.TruncatedNormal,
initializers.VarianceScaling,
initializers.GlorotNormal,
initializers.GlorotUniform,
initializers.HeNormal,
initializers.HeUniform,
initializers.LecunNormal,
initializers.LecunUniform,
initializers.Orthogonal,
]<jupyter_output><empty_output><jupyter_text>In a reproducible model, the weights of the model should be initialized with the same values in subsequent runs. First, we'll check how initializers behave when they are called multiple times with the same `seed` value.<jupyter_code>for initializer in initializers_list:
print(f"Running {initializer}")
for iteration in range(2):
# In order to get same results across multiple runs from an initializer,
# you can specify a seed value.
result = float(initializer(seed=42)(shape=(1, 1)))
print(f"\tIteration --> {iteration} // Result --> {result}")
    print("\n")<jupyter_output><empty_output><jupyter_text>Now, let's inspect how two different initializer objects behave when they have the same seed value.<jupyter_code># Setting the seed value for an initializer will cause two different objects
# to produce same results.
glorot_normal_1 = keras.initializers.GlorotNormal(seed=42)
glorot_normal_2 = keras.initializers.GlorotNormal(seed=42)
input_dim, neurons = 3, 5
# Call two different objects with same shape
result_1 = glorot_normal_1(shape=(input_dim, neurons))
result_2 = glorot_normal_2(shape=(input_dim, neurons))
# Check if the results are equal.
equal = np.allclose(result_1, result_2)
print(f"Are the results equal? {equal}")<jupyter_output><empty_output><jupyter_text>If the seed value is not set (or different seed values are used), two differentobjects will produce different results. Since the random seed is set at the beginningof the notebook, the results will be same in the sequential runs. This is relatedto the `keras.utils.set_random_seed`.<jupyter_code>glorot_normal_3 = keras.initializers.GlorotNormal()
glorot_normal_4 = keras.initializers.GlorotNormal()
# Let's call the initializer.
result_3 = glorot_normal_3(shape=(input_dim, neurons))
# Call the second initializer.
result_4 = glorot_normal_4(shape=(input_dim, neurons))
equal = np.allclose(result_3, result_4)
print(f"Are the results equal? {equal}")<jupyter_output><empty_output><jupyter_text>`result_3` and `result_4` will be different, but when you run the notebookagain, `result_3` will have identical values to the ones in the previous run.Same goes for `result_4`. Reproducibility in model training processIf you want to reproduce the results of a model training process, you need tocontrol the randomness sources during the training process. In order to show arealistic example, this section utilizes `tf.data` using parallel map and shuffleoperations.In order to start, let's create a simple function which returns the historyobject of the Keras model.<jupyter_code>def train_model(train_data: tf.data.Dataset, test_data: tf.data.Dataset) -> dict:
model = keras.Sequential(
[
layers.Conv2D(32, (3, 3), activation="relu"),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(32, (3, 3), activation="relu"),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(32, (3, 3), activation="relu"),
layers.GlobalAveragePooling2D(),
layers.Dense(64, activation="relu"),
layers.Dropout(0.2),
layers.Dense(10, activation="softmax"),
]
)
model.compile(
optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
# model.fit has a `shuffle` parameter which has a default value of `True`.
# If you are using array-like objects, this will shuffle the data before
# training. This argument is ignored when `x` is a generator or
# `tf.data.Dataset`.
history = model.fit(train_data, epochs=2, validation_data=test_data)
print(f"Model accuracy on test data: {model.evaluate(test_data)[1] * 100:.2f}%")
return history.history
# Load the MNIST dataset
(train_images, train_labels), (
test_images,
test_labels,
) = keras.datasets.mnist.load_data()
# Construct tf.data.Dataset objects
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels))<jupyter_output><empty_output><jupyter_text>Remember we called `tf.config.experimental.enable_op_determinism()` at thebeginning of the function. This makes the `tf.data` operations deterministic.However, making `tf.data` operations deterministic comes with a performancecost. If you want to learn more about it, please check this[official guide](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinismdeterminism_and_tfdata).Small summary what's going on here. Models have `kernel_initializer` and`bias_initializer` parameters. Since we set random seeds using`keras.utils.set_random_seed` in the beginning of the notebook, the initializerswill produce same results in the sequential runs. Additionally, TensorFlowoperations have now become deterministic. Frequently, you will be utilizing GPUsthat have thousands of hardware threads which causes non-deterministic behaviorto occur.<jupyter_code>def prepare_dataset(image, label):
# Cast and normalize the image
image = tf.cast(image, tf.float32) / 255.0
# Expand the channel dimension
image = tf.expand_dims(image, axis=-1)
# Resize the image
image = tf.image.resize(image, (32, 32))
return image, label<jupyter_output><empty_output><jupyter_text>`tf.data.Dataset` objects have a `shuffle` method which shuffles the data.This method has a `buffer_size` parameter which controls the size of thebuffer. If you set this value to `len(train_images)`, the whole dataset willbe shuffled. If the buffer size is equal to the length of the dataset,then the elements will be shuffled in a completely random order.Main drawback of setting the buffer size to the length of the dataset is thatfilling the buffer can take a while depending on the size of the dataset.Here is a small summary of what's going on here:1) The `shuffle()` method creates a buffer of the specified size.2) The elements of the dataset are randomly shuffled and placed into the buffer.3) The elements of the buffer are then returned in a random order.Since `tf.config.experimental.enable_op_determinism()` is enabled and we setrandom seeds using `keras.utils.set_random_seed` in the beginning of thenotebook, the `shuffle()` method will produce same results in the sequentialruns.<jupyter_code># Prepare the datasets, batch-map --> vectorized operations
train_data = (
train_ds.shuffle(buffer_size=len(train_images))
.batch(batch_size=64)
.map(prepare_dataset, num_parallel_calls=tf.data.AUTOTUNE)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
test_data = (
test_ds.batch(batch_size=64)
.map(prepare_dataset, num_parallel_calls=tf.data.AUTOTUNE)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)<jupyter_output><empty_output><jupyter_text>Train the model for the first time.<jupyter_code>history = train_model(train_data, test_data)<jupyter_output><empty_output><jupyter_text>Let's save our results into a JSON file, and restart the kernel. Afterrestarting the kernel, we should see the same results as the previous run,this includes metrics and loss values both on the training and test data.<jupyter_code># Save the history object into a json file
with open("history.json", "w") as fp:
json.dump(history, fp)<jupyter_output><empty_output><jupyter_text>Do not run the cell above in order not to overwrite the results. Execute themodel training cell again and compare the results.<jupyter_code>with open("history.json", "r") as fp:
history_loaded = json.load(fp)<jupyter_output><empty_output><jupyter_text>Compare the results one by one. You will see that they are equal.<jupyter_code>for key in history.keys():
for i in range(len(history[key])):
if not np.allclose(history[key][i], history_loaded[key][i]):
print(f"{key} not equal")<jupyter_output><empty_output> | keras-io/examples/keras_recipes/ipynb/reproducibility_recipes.ipynb/0 | {
"file_path": "keras-io/examples/keras_recipes/ipynb/reproducibility_recipes.ipynb",
"repo_id": "keras-io",
"token_count": 3132
} | 88 |
# Packaging Keras models for wide distribution using Functional Subclassing
**Author:** Martin Gรถrner<br>
**Date created:** 2023-12-13<br>
**Last modified:** 2023-12-13<br>
**Description:** When sharing your deep learning models, package them using the Functional Subclassing pattern.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/keras_recipes/ipynb/packaging_keras_models_for_wide_distribution.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/keras_recipes/packaging_keras_models_for_wide_distribution.py)
---
## Introduction
Keras is the ideal framework for sharing your cutting-edge deep learning models, in a
library of pre-trained (or not) models. Millions of ML engineers are fluent in the
familiar Keras API, making your models accessible to a global community, whatever their
preferred backend (Jax, PyTorch or TensorFlow).
One of the benefits of the Keras API is that it lets users programmatically inspect or
edit a model, a feature that is necessary when creating new architectures or workflows
based on a pre-trained model.
When distributing models, the Keras team recommends packaging them using the **Functional
Subclassing** pattern. Models implemented in this way combine two benefits:
* They can be instantiated in the normal pythonic way:<br/>
`model = model_collection_xyz.AmazingModel()`
* They are Keras functional models which means that they have a programmatically
accessible graph of layers, for introspection or model surgery.
This guide explains [how to use](#functional-subclassing-model) the Functional
Subclassing pattern, and showcases its benefits for [programmatic model
introspection](#model-introspection) and [model surgery](#model-surgery). It also shows
two other best practices for sharable Keras models: [configuring
models](#unconstrained-inputs) for the widest range of supported inputs, for example
images of various sizes, and [using dictionary inputs](#model-with-dictionary-inputs) for
clarity in more complex models.
---
## Setup
```python
import keras
import tensorflow as tf # only for tf.data
print("Keras version", keras.version())
print("Keras is running on", keras.config.backend())
```
<div class="k-default-codeblock">
```
Keras version 3.0.1
Keras is running on tensorflow
```
</div>
---
## Dataset
Let's load an MNIST dataset so that we have something to train with.
```python
# tf.data is a great API for putting together a data stream.
# It works whether you use the TensorFlow, PyTorch or Jax backend,
# as long as you use it in the data stream only and not inside of a model.
BATCH_SIZE = 256
(x_train, train_labels), (x_test, test_labels) = keras.datasets.mnist.load_data()
train_data = tf.data.Dataset.from_tensor_slices((x_train, train_labels))
train_data = train_data.map(
lambda x, y: (tf.expand_dims(x, axis=-1), y)
) # 1-channel monochrome
train_data = train_data.batch(BATCH_SIZE)
train_data = train_data.cache()
train_data = train_data.shuffle(5000, reshuffle_each_iteration=True)
train_data = train_data.repeat()
test_data = tf.data.Dataset.from_tensor_slices((x_test, test_labels))
test_data = test_data.map(
lambda x, y: (tf.expand_dims(x, axis=-1), y)
) # 1-channel monochrome
test_data = test_data.batch(10000)
test_data = test_data.cache()
STEPS_PER_EPOCH = len(train_labels) // BATCH_SIZE
EPOCHS = 5
```
---
## Functional Subclassing Model
The model is wrapped in a class so that end users can instantiate it normally by calling
the constructor `MnistModel()` rather than calling a factory function.
```python
class MnistModel(keras.Model):
def __init__(self, **kwargs):
# Keras Functional model definition. This could have used Sequential as
# well. Sequential is just syntactic sugar for simple functional models.
# 1-channel monochrome input
inputs = keras.layers.Input(shape=(None, None, 1), dtype="uint8")
# pixel format conversion from uint8 to float32
y = keras.layers.Rescaling(1 / 255.0)(inputs)
# 3 convolutional layers
y = keras.layers.Conv2D(
filters=16, kernel_size=3, padding="same", activation="relu"
)(y)
y = keras.layers.Conv2D(
filters=32, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
y = keras.layers.Conv2D(
filters=48, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
# 2 dense layers
y = keras.layers.GlobalAveragePooling2D()(y)
y = keras.layers.Dense(48, activation="relu")(y)
y = keras.layers.Dropout(0.4)(y)
outputs = keras.layers.Dense(
10, activation="softmax", name="classification_head" # 10 classes
)(y)
# A Keras Functional model is created by calling keras.Model(inputs, outputs)
super().__init__(inputs=inputs, outputs=outputs, **kwargs)
```
Let's instantiate and train this model.
```python
model = MnistModel()
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
train_data,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=test_data,
)
```
<div class="k-default-codeblock">
```
Epoch 1/5
234/234 โโโโโโโโโโโโโโโโโโโโ 9s 33ms/step - loss: 1.8916 - sparse_categorical_accuracy: 0.2933 - val_loss: 0.4278 - val_sparse_categorical_accuracy: 0.8864
Epoch 2/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - loss: 0.5723 - sparse_categorical_accuracy: 0.8201 - val_loss: 0.2703 - val_sparse_categorical_accuracy: 0.9248
Epoch 3/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - loss: 0.4063 - sparse_categorical_accuracy: 0.8772 - val_loss: 0.2010 - val_sparse_categorical_accuracy: 0.9400
Epoch 4/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - loss: 0.3391 - sparse_categorical_accuracy: 0.8996 - val_loss: 0.1869 - val_sparse_categorical_accuracy: 0.9427
Epoch 5/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - loss: 0.2989 - sparse_categorical_accuracy: 0.9120 - val_loss: 0.1513 - val_sparse_categorical_accuracy: 0.9557
```
</div>
---
## Unconstrained inputs
Notice, in the model definition above, that the input is specified with undefined
dimensions: `Input(shape=(None, None, 1))`
This allows the model to accept any image size as an input. However, this
only works if the loosely defined shape can be propagated through all the layers and
still determine the size of all weights.
* So if you have a model architecture that can handle different input sizes
with the same weights (like here), then your users will be able to instantiate it without
parameters:<br/> `model = MnistModel()`
* If on the other hand, the model must provision different weights for different input
sizes, you will have to ask your users to specify the size in the constructor:<br/>
`model = ModelXYZ(input_size=...)`
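
As a minimal sketch of this second case (the class name, argument and layer choices below are hypothetical and not part of this example's code), a fixed input size becomes necessary as soon as the weight count depends on it, for example because of a `Flatten` layer:

```python
class ModelXYZ(keras.Model):
    def __init__(self, input_size=(28, 28), **kwargs):
        inputs = keras.layers.Input(shape=(*input_size, 1))
        y = keras.layers.Conv2D(filters=8, kernel_size=3, activation="relu")(inputs)
        # Flatten makes the number of weights in the next Dense layer depend
        # on the spatial input size, hence the constructor argument.
        y = keras.layers.Flatten()(y)
        outputs = keras.layers.Dense(10, activation="softmax")(y)
        super().__init__(inputs=inputs, outputs=outputs, **kwargs)
```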
---
## Model introspection
Keras maintains a programmatically accessible graph of layers for every model. It can be
used for introspection and is accessed through the `model.layers` or `layer.layers`
attribute. The utility function `model.summary()` also uses this mechanism internally.
```python
model = MnistModel()
# Model summary works
model.summary()
# Recursively walking the layer graph works as well
def walk_layers(layer):
if hasattr(layer, "layers"):
for layer in layer.layers:
walk_layers(layer)
else:
print(layer.name)
print("\nWalking model layers:\n")
walk_layers(model)
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "mnist_model_1"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโ
โ<span style="font-weight: bold"> Layer (type) </span>โ<span style="font-weight: bold"> Output Shape </span>โ<span style="font-weight: bold"> Param # </span>โ
โกโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฉ
โ input_layer_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ rescaling_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Rescaling</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ conv2d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">160</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ conv2d_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">18,464</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ conv2d_5 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">55,344</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ global_average_pooling2d_1 โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โ (<span style="color: #0087ff; text-decoration-color: #0087ff">GlobalAveragePooling2D</span>) โ โ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ dense_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">2,352</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ dropout_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ classification_head (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">10</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">490</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">76,810</span> (300.04 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">76,810</span> (300.04 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
<div class="k-default-codeblock">
```
Walking model layers:
```
</div>
<div class="k-default-codeblock">
```
input_layer_1
rescaling_1
conv2d_3
conv2d_4
conv2d_5
global_average_pooling2d_1
dense_1
dropout_1
classification_head
```
</div>
---
## Model surgery
End users might want to instantiate the model from your library but modify it before use.
Functional models have a programmatically accessible graph of layers. Edits are possible
by slicing and splicing the graph and creating a new functional model.
The alternative is to fork the model code and make the modifications but that forces
users to then maintain their fork indefinitely.
Example: instantiate the model but change the classification head to do a binary
classification, "0" or "not 0", instead of the original 10-way digits classification.
```python
model = MnistModel()
input = model.input
# cut before the classification head
y = model.get_layer("classification_head").input
# add a new classification head
output = keras.layers.Dense(
1, # single class for binary classification
activation="sigmoid",
name="binary_classification_head",
)(y)
# create a new functional model
binary_model = keras.Model(input, output)
binary_model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "functional_1"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโ
โ<span style="font-weight: bold"> Layer (type) </span>โ<span style="font-weight: bold"> Output Shape </span>โ<span style="font-weight: bold"> Param # </span>โ
โกโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฉ
โ input_layer_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ rescaling_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Rescaling</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ conv2d_6 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">160</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ conv2d_7 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">18,464</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ conv2d_8 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">55,344</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ global_average_pooling2d_2 โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โ (<span style="color: #0087ff; text-decoration-color: #0087ff">GlobalAveragePooling2D</span>) โ โ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ dense_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">2,352</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ dropout_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">48</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">0</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ binary_classification_head โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">49</span> โ
โ (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ โ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">76,369</span> (298.32 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">76,369</span> (298.32 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
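If the original model had already been trained, you could also freeze the reused layers so
that only the new head gets updated. Here is a minimal sketch (not used in the training run below):
```python
# Optional: freeze every reused layer; only the new binary head stays trainable.
for layer in binary_model.layers:
    if layer.name != "binary_classification_head":
        layer.trainable = False
```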
We can now train the new model as a binary classifier.
```python
# new dataset with 0 / 1 labels (1 = digit '0', 0 = all other digits)
bin_train_data = train_data.map(
lambda x, y: (x, tf.cast(tf.math.equal(y, tf.zeros_like(y)), dtype=tf.uint8))
)
bin_test_data = test_data.map(
lambda x, y: (x, tf.cast(tf.math.equal(y, tf.zeros_like(y)), dtype=tf.uint8))
)
# appropriate loss and metric for binary classification
binary_model.compile(
optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"]
)
history = binary_model.fit(
bin_train_data,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=bin_test_data,
)
```
<div class="k-default-codeblock">
```
Epoch 1/5
234/234 โโโโโโโโโโโโโโโโโโโโ 9s 33ms/step - binary_accuracy: 0.8926 - loss: 0.3635 - val_binary_accuracy: 0.9235 - val_loss: 0.1777
Epoch 2/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - binary_accuracy: 0.9411 - loss: 0.1620 - val_binary_accuracy: 0.9766 - val_loss: 0.0748
Epoch 3/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - binary_accuracy: 0.9751 - loss: 0.0794 - val_binary_accuracy: 0.9884 - val_loss: 0.0414
Epoch 4/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - binary_accuracy: 0.9848 - loss: 0.0480 - val_binary_accuracy: 0.9915 - val_loss: 0.0292
Epoch 5/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 31ms/step - binary_accuracy: 0.9910 - loss: 0.0326 - val_binary_accuracy: 0.9917 - val_loss: 0.0286
```
</div>
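Once trained, the binary model can be evaluated and used for predictions like any other
Keras model. A short sketch:
```python
# Evaluate on the binary test set and inspect the predictions for one batch.
loss, accuracy = binary_model.evaluate(bin_test_data)
images, labels = next(iter(bin_test_data))
probabilities = binary_model.predict(images)  # sigmoid outputs in [0, 1]
print(probabilities.shape)  # (batch_size, 1)
```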
---
## Model with dictionary inputs
In more complex models, with multiple inputs, structuring the inputs as a dictionary can
improve readability and usability. This is straightforward to do with a functional model:
```python
class MnistDictModel(keras.Model):
def __init__(self, **kwargs):
#
# The input is a dictionary
#
inputs = {
"image": keras.layers.Input(
shape=(None, None, 1), # 1-channel monochrome
dtype="uint8",
name="image",
)
}
# pixel format conversion from uint8 to float32
y = keras.layers.Rescaling(1 / 255.0)(inputs["image"])
# 3 conv layers
y = keras.layers.Conv2D(
filters=16, kernel_size=3, padding="same", activation="relu"
)(y)
y = keras.layers.Conv2D(
filters=32, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
y = keras.layers.Conv2D(
filters=48, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
# 2 dense layers
y = keras.layers.GlobalAveragePooling2D()(y)
y = keras.layers.Dense(48, activation="relu")(y)
y = keras.layers.Dropout(0.4)(y)
outputs = keras.layers.Dense(
10, activation="softmax", name="classification_head" # 10 classes
)(y)
# A Keras Functional model is created by calling keras.Model(inputs, outputs)
super().__init__(inputs=inputs, outputs=outputs, **kwargs)
```
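As a quick sanity check, the model can be called directly on a dictionary keyed by the
input name. The sketch below uses a made-up batch of random "images":
```python
sanity_model = MnistDictModel()
fake_batch = {
    "image": tf.cast(tf.random.uniform((4, 28, 28, 1), maxval=256, dtype=tf.int32), "uint8")
}
print(sanity_model(fake_batch).shape)  # expected: (4, 10)
```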
We can now train the model on inputs structured as a dictionary.
```python
model = MnistDictModel()
# reformat the dataset as a dictionary
dict_train_data = train_data.map(lambda x, y: ({"image": x}, y))
dict_test_data = test_data.map(lambda x, y: ({"image": x}, y))
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
dict_train_data,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=dict_test_data,
)
```
<div class="k-default-codeblock">
```
Epoch 1/5
234/234 โโโโโโโโโโโโโโโโโโโโ 9s 34ms/step - loss: 1.8702 - sparse_categorical_accuracy: 0.3175 - val_loss: 0.4505 - val_sparse_categorical_accuracy: 0.8779
Epoch 2/5
234/234 โโโโโโโโโโโโโโโโโโโโ 8s 32ms/step - loss: 0.5991 - sparse_categorical_accuracy: 0.8131 - val_loss: 0.2582 - val_sparse_categorical_accuracy: 0.9245
Epoch 3/5
234/234 โโโโโโโโโโโโโโโโโโโโ 7s 32ms/step - loss: 0.3916 - sparse_categorical_accuracy: 0.8846 - val_loss: 0.1938 - val_sparse_categorical_accuracy: 0.9422
Epoch 4/5
234/234 โโโโโโโโโโโโโโโโโโโโ 8s 33ms/step - loss: 0.3109 - sparse_categorical_accuracy: 0.9089 - val_loss: 0.1450 - val_sparse_categorical_accuracy: 0.9566
Epoch 5/5
234/234 โโโโโโโโโโโโโโโโโโโโ 8s 32ms/step - loss: 0.2775 - sparse_categorical_accuracy: 0.9197 - val_loss: 0.1316 - val_sparse_categorical_accuracy: 0.9608
```
</div> | keras-io/examples/keras_recipes/md/packaging_keras_models_for_wide_distribution.md/0 | {
"file_path": "keras-io/examples/keras_recipes/md/packaging_keras_models_for_wide_distribution.md",
"repo_id": "keras-io",
"token_count": 10407
} | 89 |
<jupyter_start><jupyter_text>Training a language model from scratch with ๐ค Transformers and TPUs**Authors:** [Matthew Carrigan](https://twitter.com/carrigmat), [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2023/05/21**Last modified:** 2023/05/21**Description:** Train a masked language model on TPUs using ๐ค Transformers. IntroductionIn this example, we cover how to train a masked language model using TensorFlow,[๐ค Transformers](https://huggingface.co/transformers/index),and TPUs.TPU training is a useful skill to have: TPU pods are high-performance and extremelyscalable, making it easy to train models at any scale from a few tens of millions ofparameters up to truly enormous sizes: Google's PaLM model(over 500 billion parameters!) was trained entirely on TPU pods.We've previously written a[**tutorial**](https://huggingface.co/docs/transformers/main/perf_train_tpu_tf)and a[**Colab example**](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)showing small-scale TPU training with TensorFlow and introducing the core concepts youneed to understand to get your model working on TPU. However, our Colab example doesn'tcontain all the steps needed to train a language model from scratch such astraining the tokenizer. So, we wanted to provide a consolidated example ofwalking you through every critical step involved there.As in our Colab example, we're taking advantage of TensorFlow's very clean TPU supportvia XLA and `TPUStrategy`. We'll also be benefiting from the fact that the majority ofthe TensorFlow models in ๐ค Transformers are fully[XLA-compatible](https://huggingface.co/blog/tf-xla-generate).So surprisingly, little work is needed to get them to run on TPU.This example is designed to be **scalable** and much closer to a realistic training run-- although we only use a BERT-sized model by default, the code could be expanded to amuch larger model and a much more powerful TPU pod slice by changing a few configurationoptions.The following diagram gives you a pictorial overview of the steps involved in training alanguage model with ๐ค Transformers using TensorFlow and TPUs:*(Contents of this example overlap with[this blog post](https://huggingface.co/blog/tf_tpu)).* DataWe use the[WikiText dataset (v1)](https://huggingface.co/datasets/wikitext).You can head over to the[dataset page on the Hugging Face Hub](https://huggingface.co/datasets/wikitext)to explore the dataset.Since the dataset is already available on the Hub in a compatible format, we can easilyload and interact with it using[๐ค datasets](https://hf.co/docs/datasets).However, training a language model from scratch also requires a separatetokenizer training step. We skip that part in this example for brevity, but,here's a gist of what we can do to train a tokenizer from scratch:- Load the `train` split of the WikiText using ๐ค datasets.- Leverage[๐ค tokenizers](https://huggingface.co/docs/tokenizers/index)to train a[**Unigram model**](https://huggingface.co/course/chapter6/7?fw=pt).- Upload the trained tokenizer on the Hub.You can find the tokenizer trainingcode[**here**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling-tputraining-a-tokenizer)and the tokenizer[**here**](https://huggingface.co/tf-tpu/unigram-tokenizer-wikitext).This script also allows you to run it with[**any compatible dataset**](https://huggingface.co/datasets?task_ids=task_ids:language-modeling)from the Hub. 
Tokenizing the data and creating TFRecordsOnce the tokenizer is trained, we can use it on all the dataset splits(`train`, `validation`, and `test` in this case) and create TFRecord shards out of them.Having the data splits spread across multiple TFRecord shards helps with massivelyparallel processing as opposed to having each split in single TFRecord files.We tokenize the samples individually. We then take a batch of samples, concatenate themtogether, and split them into several chunks of a fixed size (128 in our case). We followthis strategy rather than tokenizing a batch of samples with a fixed length to avoidaggressively discarding text content (because of truncation).We then take these tokenized samples in batches and serialize those batches as multipleTFRecord shards, where the total dataset length and individual shard size determine thenumber of shards. Finally, these shards are pushed to a[Google Cloud Storage (GCS) bucket](https://cloud.google.com/storage/docs/json_api/v1/buckets).If you're using a TPU node for training, then the data needs to be streamed from a GCSbucket since the node host memory is very small. But for TPU VMs, we can use datasetslocally or even attach persistent storage to those VMs. Since TPU nodes (which is what wehave in a Colab) are still quite heavily used, we based our example on using a GCS bucketfor data storage.You can see all of this in code in[this script](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling-tpu/prepare_tfrecord_shards.py).For convenience, we have also hosted the resultant TFRecord shards in[this repository](https://huggingface.co/datasets/tf-tpu/wikitext-v1-tfrecords)on the Hub.Once the data is tokenized and serialized into TFRecord shards, we can proceed towardtraining. Training Setup and importsLet's start by installing ๐ค Transformers.<jupyter_code>!pip install transformers -q<jupyter_output><empty_output><jupyter_text>Then, let's import the modules we need.<jupyter_code>import os
import re
import tensorflow as tf
import transformers<jupyter_output><empty_output><jupyter_text>Initialize TPUs Then let's connect to our TPU and determine the distribution strategy:<jupyter_code>tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
print(f"Available number of replicas: {strategy.num_replicas_in_sync}")<jupyter_output><empty_output><jupyter_text>We then load the tokenizer. For more details on the tokenizer, check out[its repository](https://huggingface.co/tf-tpu/unigram-tokenizer-wikitext).For the model, we use RoBERTa (the base variant), introduced in[this paper](https://arxiv.org/abs/1907.11692). Initialize the tokenizer<jupyter_code>tokenizer = "tf-tpu/unigram-tokenizer-wikitext"
pretrained_model_config = "roberta-base"
tokenizer = transformers.AutoTokenizer.from_pretrained(tokenizer)
config = transformers.AutoConfig.from_pretrained(pretrained_model_config)
config.vocab_size = tokenizer.vocab_size<jupyter_output><empty_output><jupyter_text>Prepare the datasets We now load the TFRecord shards of the WikiText dataset (which the Hugging Face teamprepared beforehand for this example):<jupyter_code>train_dataset_path = "gs://tf-tpu-training-resources/train"
eval_dataset_path = "gs://tf-tpu-training-resources/validation"
training_records = tf.io.gfile.glob(os.path.join(train_dataset_path, "*.tfrecord"))
eval_records = tf.io.gfile.glob(os.path.join(eval_dataset_path, "*.tfrecord"))<jupyter_output><empty_output><jupyter_text>Now, we will write a utility to count the number of training samples we have. We need to know this value in order to properly initialize our optimizer later:<jupyter_code>def count_samples(file_list):
num_samples = 0
for file in file_list:
filename = file.split("/")[-1]
sample_count = re.search(r"-\d+-(\d+)\.tfrecord", filename).group(1)
sample_count = int(sample_count)
num_samples += sample_count
return num_samples
num_train_samples = count_samples(training_records)
print(f"Number of total training samples: {num_train_samples}")<jupyter_output><empty_output><jupyter_text>Let's now prepare our datasets for training and evaluation. We start by writing ourutilities. First, we need to be able to decode the TFRecords:<jupyter_code>max_sequence_length = 512
def decode_fn(example):
features = {
"input_ids": tf.io.FixedLenFeature(
dtype=tf.int64, shape=(max_sequence_length,)
),
"attention_mask": tf.io.FixedLenFeature(
dtype=tf.int64, shape=(max_sequence_length,)
),
}
return tf.io.parse_single_example(example, features)<jupyter_output><empty_output><jupyter_text>Here, `max_sequence_length` needs to be the same as the one used during preparing theTFRecord shards.Refer to[this script](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling-tpu/prepare_tfrecord_shards.py)for more details.Next up, we have our masking utility that is responsible for masking parts of the inputsand preparing labels for the masked language model to learn from. We leverage the[`DataCollatorForLanguageModeling`](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/data_collatortransformers.DataCollatorForLanguageModeling)for this purpose.<jupyter_code># We use a standard masking probability of 0.15. `mlm_probability` denotes
# probability with which we mask the input tokens in a sequence.
mlm_probability = 0.15
data_collator = transformers.DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm_probability=mlm_probability, mlm=True, return_tensors="tf"
)
def mask_with_collator(batch):
special_tokens_mask = (
~tf.cast(batch["attention_mask"], tf.bool)
| (batch["input_ids"] == tokenizer.cls_token_id)
| (batch["input_ids"] == tokenizer.sep_token_id)
)
batch["input_ids"], batch["labels"] = data_collator.tf_mask_tokens(
batch["input_ids"],
vocab_size=len(tokenizer),
mask_token_id=tokenizer.mask_token_id,
special_tokens_mask=special_tokens_mask,
)
    return batch<jupyter_output><empty_output><jupyter_text>And now is the time to write the final data preparation utility to put it all together in a `tf.data.Dataset` object:<jupyter_code>auto = tf.data.AUTOTUNE
shuffle_buffer_size = 2**18
def prepare_dataset(
records, decode_fn, mask_fn, batch_size, shuffle, shuffle_buffer_size=None
):
num_samples = count_samples(records)
dataset = tf.data.Dataset.from_tensor_slices(records)
if shuffle:
dataset = dataset.shuffle(len(dataset))
dataset = tf.data.TFRecordDataset(dataset, num_parallel_reads=auto)
# TF can't infer the total sample count because it doesn't read
# all the records yet, so we assert it here.
dataset = dataset.apply(tf.data.experimental.assert_cardinality(num_samples))
dataset = dataset.map(decode_fn, num_parallel_calls=auto)
if shuffle:
assert shuffle_buffer_size is not None
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.map(mask_fn, num_parallel_calls=auto)
dataset = dataset.prefetch(auto)
return dataset<jupyter_output><empty_output><jupyter_text>Let's prepare our datasets with these utilities:<jupyter_code>per_replica_batch_size = 16 # Change as needed.
batch_size = per_replica_batch_size * strategy.num_replicas_in_sync
shuffle_buffer_size = 2**18 # Default corresponds to a 1GB buffer for seq_len 512
train_dataset = prepare_dataset(
training_records,
decode_fn=decode_fn,
mask_fn=mask_with_collator,
batch_size=batch_size,
shuffle=True,
shuffle_buffer_size=shuffle_buffer_size,
)
eval_dataset = prepare_dataset(
eval_records,
decode_fn=decode_fn,
mask_fn=mask_with_collator,
batch_size=batch_size,
shuffle=False,
)<jupyter_output><empty_output><jupyter_text>Let's now investigate how a single batch of dataset looks like.<jupyter_code>single_batch = next(iter(train_dataset))
print(single_batch.keys())<jupyter_output><empty_output><jupyter_text>* `input_ids` denotes the tokenized versions of the input samples containing the masktokens as well.* `attention_mask` denotes the mask to be used when performing attention operations.* `labels` denotes the actual values of masked tokens the model is supposed to learn from.<jupyter_code>for k in single_batch:
if k == "input_ids":
input_ids = single_batch[k]
print(f"Input shape: {input_ids.shape}")
if k == "labels":
labels = single_batch[k]
print(f"Label shape: {labels.shape}")<jupyter_output><empty_output><jupyter_text>Now, we can leverage our `tokenizer` to investigate the values of the tokens. Let's startwith `input_ids`:<jupyter_code>idx = 0
print("Taking the first sample:\n")
print(tokenizer.decode(input_ids[idx].numpy()))<jupyter_output><empty_output><jupyter_text>As expected, the decoded tokens contain the special tokens including the mask tokens as well. Let's now investigate the mask tokens:<jupyter_code># Taking the first 30 tokens of the first sequence.
print(labels[0].numpy()[:30])<jupyter_output><empty_output><jupyter_text>Here, `-100` means that the corresponding tokens in the `input_ids` are NOT masked and non `-100` values denote the actual values of the masked tokens. Initialize the model and the optimizer With the datasets prepared, we now initialize and compile our model and optimizer within the `strategy.scope()`:<jupyter_code># For this example, we keep this value to 10. But for a realistic run, start with 500.
num_epochs = 10
steps_per_epoch = num_train_samples // (
per_replica_batch_size * strategy.num_replicas_in_sync
)
total_train_steps = steps_per_epoch * num_epochs
learning_rate = 0.0001
weight_decay_rate = 1e-3
with strategy.scope():
model = transformers.TFAutoModelForMaskedLM.from_config(config)
model(
model.dummy_inputs
) # Pass some dummy inputs through the model to ensure all the weights are built
optimizer, schedule = transformers.create_optimizer(
num_train_steps=total_train_steps,
num_warmup_steps=total_train_steps // 20,
init_lr=learning_rate,
weight_decay_rate=weight_decay_rate,
)
model.compile(optimizer=optimizer, metrics=["accuracy"])<jupyter_output><empty_output><jupyter_text>A couple of things to note here:* The[`create_optimizer()`](https://huggingface.co/docs/transformers/main_classes/optimizer_schedulestransformers.create_optimizer)function creates an Adam optimizer with a learning rate schedule using a warmup phasefollowed by a linear decay. Since we're using weight decay here, under the hood,`create_optimizer()` instantiates[the right variant of Adam](https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/optimization_tf.pyL172)to enable weight decay.* While compiling the model, we're NOT using any `loss` argument. This is becausethe TensorFlow models internally compute the loss when expected labels are provided.Based on the model type and the labels being used, `transformers` will automaticallyinfer the loss to use. Start training! Next, we set up a handy callback to push the intermediate training checkpoints to theHugging Face Hub. To be able to operationalize this callback, we need to log in to ourHugging Face account (if you don't have one, you create one[here](https://huggingface.co/join) for free). Execute the code below for logging in:```pythonfrom huggingface_hub import notebook_loginnotebook_login()``` Let's now define the[`PushToHubCallback`](https://huggingface.co/docs/transformers/main_classes/keras_callbackstransformers.PushToHubCallback):<jupyter_code>hub_model_id = output_dir = "masked-lm-tpu"
callbacks = []
callbacks.append(
transformers.PushToHubCallback(
output_dir=output_dir, hub_model_id=hub_model_id, tokenizer=tokenizer
)
)<jupyter_output><empty_output><jupyter_text>And now, we're ready to chug the TPUs:<jupyter_code># In the interest of the runtime of this example,
# we limit the number of batches to just 2.
model.fit(
train_dataset.take(2),
validation_data=eval_dataset.take(2),
epochs=num_epochs,
callbacks=callbacks,
)
# After training we also serialize the final model.
model.save_pretrained(output_dir)<jupyter_output><empty_output><jupyter_text>Once your training is complete, you can easily perform inference like so:<jupyter_code>from transformers import pipeline
# Replace your `model_id` here.
# Here, we're using a model that the Hugging Face team trained for longer.
model_id = "tf-tpu/roberta-base-epochs-500-no-wd"
unmasker = pipeline("fill-mask", model=model_id, framework="tf")
print(unmasker("Goal of my life is to [MASK]."))<jupyter_output><empty_output> | keras-io/examples/nlp/ipynb/mlm_training_tpus.ipynb/0 | {
"file_path": "keras-io/examples/nlp/ipynb/mlm_training_tpus.ipynb",
"repo_id": "keras-io",
"token_count": 5307
} | 90 |
<jupyter_start><jupyter_text>Text classification with Switch Transformer**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)**Date created:** 2020/05/10**Last modified:** 2021/02/15**Description:** Implement a Switch Transformer for text classification. IntroductionThis example demonstrates the implementation of the[Switch Transformer](https://arxiv.org/abs/2101.03961) model for textclassification.The Switch Transformer replaces the feedforward network (FFN) layer in the standardTransformer with a Mixture of Expert (MoE) routing layer, where each expert operatesindependently on the tokens in the sequence. This allows increasing the model size withoutincreasing the computation needed to process each example.Note that, for training the Switch Transformer efficiently, data and model parallelismneed to be applied, so that expert modules can run simultaneously, each on its own accelerator.While the implementation described in the paper uses the[TensorFlow Mesh](https://github.com/tensorflow/mesh) framework for distributed training,this example presents a simple, non-distributed implementation of the Switch Transformermodel for demonstration purposes. Setup<jupyter_code>import keras
from keras import ops
from keras import layers<jupyter_output><empty_output><jupyter_text>Download and prepare dataset<jupyter_code>vocab_size = 20000 # Only consider the top 20k words
num_tokens_per_example = 200 # Only consider the first 200 words of each movie review
(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size)
print(len(x_train), "Training sequences")
print(len(x_val), "Validation sequences")
x_train = keras.utils.pad_sequences(x_train, maxlen=num_tokens_per_example)
x_val = keras.utils.pad_sequences(x_val, maxlen=num_tokens_per_example)<jupyter_output><empty_output><jupyter_text>Define hyperparameters<jupyter_code>embed_dim = 32 # Embedding size for each token.
num_heads = 2 # Number of attention heads
ff_dim = 32 # Hidden layer size in feedforward network.
num_experts = 10 # Number of experts used in the Switch Transformer.
batch_size = 50 # Batch size.
learning_rate = 0.001 # Learning rate.
dropout_rate = 0.25 # Dropout rate.
num_epochs = 3 # Number of epochs.
num_tokens_per_batch = (
batch_size * num_tokens_per_example
) # Total number of tokens per batch.
print(f"Number of tokens per batch: {num_tokens_per_batch}")<jupyter_output><empty_output><jupyter_text>Implement token & position embedding layerIt consists of two seperate embedding layers, one for tokens, one for token index (positions).<jupyter_code>class TokenAndPositionEmbedding(layers.Layer):
def __init__(self, maxlen, vocab_size, embed_dim):
super().__init__()
self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
def call(self, x):
maxlen = ops.shape(x)[-1]
positions = ops.arange(start=0, stop=maxlen, step=1)
positions = self.pos_emb(positions)
x = self.token_emb(x)
return x + positions<jupyter_output><empty_output><jupyter_text>Implement the feedforward networkThis is used as the Mixture of Experts in the Switch Transformer.<jupyter_code>def create_feedforward_network(ff_dim, embed_dim, name=None):
return keras.Sequential(
[layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)], name=name
)<jupyter_output><empty_output><jupyter_text>Implement the load-balanced lossThis is an auxiliary loss to encourage a balanced load across experts.<jupyter_code>def load_balanced_loss(router_probs, expert_mask):
# router_probs [tokens_per_batch, num_experts] is the probability assigned for
# each expert per token. expert_mask [tokens_per_batch, num_experts] contains
# the expert with the highest router probability in oneโhot format.
num_experts = ops.shape(expert_mask)[-1]
# Get the fraction of tokens routed to each expert.
# density is a vector of length num experts that sums to 1.
density = ops.mean(expert_mask, axis=0)
# Get fraction of probability mass assigned to each expert from the router
# across all tokens. density_proxy is a vector of length num experts that sums to 1.
density_proxy = ops.mean(router_probs, axis=0)
# Want both vectors to have uniform allocation (1/num experts) across all
# num_expert elements. The two vectors will be pushed towards uniform allocation
# when the dot product is minimized.
loss = ops.mean(density_proxy * density) * ops.cast((num_experts**2), "float32")
return loss<jupyter_output><empty_output><jupyter_text>Implement the router as a layer<jupyter_code>class Router(layers.Layer):
def __init__(self, num_experts, expert_capacity):
self.num_experts = num_experts
self.route = layers.Dense(units=num_experts)
self.expert_capacity = expert_capacity
super().__init__()
def call(self, inputs, training=False):
# inputs shape: [tokens_per_batch, embed_dim]
# router_logits shape: [tokens_per_batch, num_experts]
router_logits = self.route(inputs)
if training:
# Add noise for exploration across experts.
router_logits += keras.random.uniform(
shape=router_logits.shape, minval=0.9, maxval=1.1
)
# Probabilities for each token of what expert it should be sent to.
router_probs = keras.activations.softmax(router_logits, axis=-1)
        # Get the top-1 expert for each token. expert_gate is the top-1 probability
# from the router for each token. expert_index is what expert each token
# is going to be routed to.
expert_gate, expert_index = ops.top_k(router_probs, k=1)
# expert_mask shape: [tokens_per_batch, num_experts]
expert_mask = ops.one_hot(expert_index, self.num_experts)
# Compute load balancing loss.
aux_loss = load_balanced_loss(router_probs, expert_mask)
self.add_loss(aux_loss)
# Experts have a fixed capacity, ensure we do not exceed it. Construct
# the batch indices, to each expert, with position in expert make sure that
# not more that expert capacity examples can be routed to each expert.
position_in_expert = ops.cast(
ops.cumsum(expert_mask, axis=0) * expert_mask, "int32"
)
# Keep only tokens that fit within expert capacity.
expert_mask *= ops.cast(
ops.less(ops.cast(position_in_expert, "int32"), self.expert_capacity),
"float32",
)
expert_mask_flat = ops.sum(expert_mask, axis=-1)
# Mask out the experts that have overflowed the expert capacity.
expert_gate *= expert_mask_flat
# Combine expert outputs and scaling with router probability.
# combine_tensor shape: [tokens_per_batch, num_experts, expert_capacity]
combined_tensor = ops.expand_dims(
expert_gate
* expert_mask_flat
* ops.squeeze(ops.one_hot(expert_index, self.num_experts), 1),
-1,
) * ops.squeeze(ops.one_hot(position_in_expert, self.expert_capacity), 1)
# Create binary dispatch_tensor [tokens_per_batch, num_experts, expert_capacity]
# that is 1 if the token gets routed to the corresponding expert.
dispatch_tensor = ops.cast(combined_tensor, "float32")
return dispatch_tensor, combined_tensor<jupyter_output><empty_output><jupyter_text>Implement a Switch layer<jupyter_code>class Switch(layers.Layer):
def __init__(
self, num_experts, embed_dim, ff_dim, num_tokens_per_batch, capacity_factor=1
):
self.num_experts = num_experts
self.embed_dim = embed_dim
self.experts = [
create_feedforward_network(ff_dim, embed_dim) for _ in range(num_experts)
]
self.expert_capacity = num_tokens_per_batch // self.num_experts
self.router = Router(self.num_experts, self.expert_capacity)
super().__init__()
def call(self, inputs):
batch_size = ops.shape(inputs)[0]
num_tokens_per_example = ops.shape(inputs)[1]
# inputs shape: [num_tokens_per_batch, embed_dim]
inputs = ops.reshape(inputs, [num_tokens_per_batch, self.embed_dim])
# dispatch_tensor shape: [expert_capacity, num_experts, tokens_per_batch]
# combine_tensor shape: [tokens_per_batch, num_experts, expert_capacity]
dispatch_tensor, combine_tensor = self.router(inputs)
# expert_inputs shape: [num_experts, expert_capacity, embed_dim]
expert_inputs = ops.einsum("ab,acd->cdb", inputs, dispatch_tensor)
expert_inputs = ops.reshape(
expert_inputs, [self.num_experts, self.expert_capacity, self.embed_dim]
)
# Dispatch to experts
expert_input_list = ops.unstack(expert_inputs, axis=0)
expert_output_list = [
self.experts[idx](expert_input)
for idx, expert_input in enumerate(expert_input_list)
]
# expert_outputs shape: [expert_capacity, num_experts, embed_dim]
expert_outputs = ops.stack(expert_output_list, axis=1)
# expert_outputs_combined shape: [tokens_per_batch, embed_dim]
expert_outputs_combined = ops.einsum(
"abc,xba->xc", expert_outputs, combine_tensor
)
# output shape: [batch_size, num_tokens_per_example, embed_dim]
outputs = ops.reshape(
expert_outputs_combined,
[batch_size, num_tokens_per_example, self.embed_dim],
)
return outputs<jupyter_output><empty_output><jupyter_text>Implement a Transformer block layer<jupyter_code>class TransformerBlock(layers.Layer):
def __init__(self, embed_dim, num_heads, ffn, dropout_rate=0.1):
super().__init__()
self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
# The ffn can be either a standard feedforward network or a switch
# layer with a Mixture of Experts.
self.ffn = ffn
self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = layers.Dropout(dropout_rate)
self.dropout2 = layers.Dropout(dropout_rate)
def call(self, inputs, training=False):
attn_output = self.att(inputs, inputs)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(inputs + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)<jupyter_output><empty_output><jupyter_text>Implement the classifierThe `TransformerBlock` layer outputs one vector for each time step of our input sequence. Here, we take the mean across all time steps and use a feedforward network on top of it to classify text.<jupyter_code>def create_classifier():
switch = Switch(num_experts, embed_dim, ff_dim, num_tokens_per_batch)
transformer_block = TransformerBlock(embed_dim // num_heads, num_heads, switch)
inputs = layers.Input(shape=(num_tokens_per_example,))
embedding_layer = TokenAndPositionEmbedding(
num_tokens_per_example, vocab_size, embed_dim
)
x = embedding_layer(inputs)
x = transformer_block(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(dropout_rate)(x)
x = layers.Dense(ff_dim, activation="relu")(x)
x = layers.Dropout(dropout_rate)(x)
outputs = layers.Dense(2, activation="softmax")(x)
classifier = keras.Model(inputs=inputs, outputs=outputs)
return classifier<jupyter_output><empty_output><jupyter_text>Train and evaluate the model<jupyter_code>def run_experiment(classifier):
classifier.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
history = classifier.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_data=(x_val, y_val),
)
return history
classifier = create_classifier()
run_experiment(classifier)<jupyter_output><empty_output> | keras-io/examples/nlp/ipynb/text_classification_with_switch_transformer.ipynb/0 | {
"file_path": "keras-io/examples/nlp/ipynb/text_classification_with_switch_transformer.ipynb",
"repo_id": "keras-io",
"token_count": 4732
} | 91 |
# Multimodal entailment
**Author:** [Sayak Paul](https://twitter.com/RisingSayak)<br>
**Date created:** 2021/08/08<br>
**Last modified:** 2021/08/15<br>
**Description:** Training a multimodal model for predicting entailment.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/multimodal_entailment.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/multimodal_entailment.py)
---
## Introduction
In this example, we will build and train a model for predicting multimodal entailment. We will be
using the
[multimodal entailment dataset](https://github.com/google-research-datasets/recognizing-multimodal-entailment)
recently introduced by Google Research.
### What is multimodal entailment?
On social media platforms, to audit and moderate content
we may want to find answers to the
following questions in near real-time:
* Does a given piece of information contradict the other?
* Does a given piece of information imply the other?
In NLP, this task is called analyzing _textual entailment_. However, that's only
when the information comes from text content.
In practice, it's often the case that the information available comes not just
from text content, but from a multimodal combination of text, images, audio, video, etc.
_Multimodal entailment_ is simply the extension of textual entailment to a variety
of new input modalities.
### Requirements
This example requires TensorFlow 2.5 or higher. In addition, TensorFlow Hub and
TensorFlow Text are required for the BERT model
([Devlin et al.](https://arxiv.org/abs/1810.04805)). These libraries can be installed
using the following command:
```python
!pip install -q tensorflow_text
```
---
## Imports
```python
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from tensorflow import keras
```
---
## Define a label map
```python
label_map = {"Contradictory": 0, "Implies": 1, "NoEntailment": 2}
```
---
## Collect the dataset
The original dataset is available
[here](https://github.com/google-research-datasets/recognizing-multimodal-entailment).
It comes with URLs of images which are hosted on Twitter's photo storage system called
the
[Photo Blob Storage (PBS for short)](https://blog.twitter.com/engineering/en_us/a/2012/blobstore-twitter-s-in-house-photo-storage-system).
We will be working with the downloaded images along with additional data that comes with
the original dataset. Thanks to
[Nilabhra Roy Chowdhury](https://de.linkedin.com/in/nilabhraroychowdhury) who worked on
preparing the image data.
```python
image_base_path = keras.utils.get_file(
"tweet_images",
"https://github.com/sayakpaul/Multimodal-Entailment-Baseline/releases/download/v1.0.0/tweet_images.tar.gz",
untar=True,
)
```
---
## Read the dataset and apply basic preprocessing
```python
df = pd.read_csv(
"https://github.com/sayakpaul/Multimodal-Entailment-Baseline/raw/main/csvs/tweets.csv"
)
df.sample(10)
```
<div style="overflow-x: scroll; width: 100%;">
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
<div class="k-default-codeblock">
```
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
```
</div>
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id_1</th>
<th>text_1</th>
<th>image_1</th>
<th>id_2</th>
<th>text_2</th>
<th>image_2</th>
<th>label</th>
</tr>
</thead>
<tbody>
<tr>
<th>291</th>
<td>1330800194863190016</td>
<td>#KLM1167 (B738): #AMS (Amsterdam) to #HEL (Van...</td>
<td>http://pbs.twimg.com/media/EnfzuZAW4AE236p.png</td>
<td>1378695438480588802</td>
<td>#CKK205 (B77L): #PVG (Shanghai) to #AMS (Amste...</td>
<td>http://pbs.twimg.com/media/EyIcMexXEAE6gia.png</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>37</th>
<td>1366581728312057856</td>
<td>Friends, interested all go to have a look!\n@j...</td>
<td>http://pbs.twimg.com/media/EvcS1v4UcAEEXPO.jpg</td>
<td>1373810535066570759</td>
<td>Friends, interested all go to have a look!\n@f...</td>
<td>http://pbs.twimg.com/media/ExDBZqwVIAQ4LWk.jpg</td>
<td>Contradictory</td>
</tr>
<tr>
<th>315</th>
<td>1352551603258052608</td>
<td>#WINk Drops I have earned today๐\n\nToday:1/22...</td>
<td>http://pbs.twimg.com/media/EsTdcLLVcAIiFKT.jpg</td>
<td>1354636016234098688</td>
<td>#WINk Drops I have earned todayโ\n\nToday:1/28...</td>
<td>http://pbs.twimg.com/media/EsyhK-qU0AgfMAH.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>761</th>
<td>1379795999493853189</td>
<td>#buythedip Ready to FLY even HIGHER #pennysto...</td>
<td>http://pbs.twimg.com/media/EyYFJCzWgAMfTrT.jpg</td>
<td>1380190250144792576</td>
<td>#buythedip Ready to FLY even HIGHER #pennysto...</td>
<td>http://pbs.twimg.com/media/Eydrt0ZXAAMmbfv.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>146</th>
<td>1340185132293099523</td>
<td>I know sometimes I am weird to you.\n\nBecause...</td>
<td>http://pbs.twimg.com/media/EplLRriWwAAJ2AE.jpg</td>
<td>1359755419883814913</td>
<td>I put my sword down and get on my knees to swe...</td>
<td>http://pbs.twimg.com/media/Et7SWWeWYAICK-c.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>1351</th>
<td>1381256604926967813</td>
<td>Finally completed the skin rendering. Will sta...</td>
<td>http://pbs.twimg.com/media/Eys1j7NVIAgF-YF.jpg</td>
<td>1381630932092784641</td>
<td>Hair rendering. Will finish the hair by tomorr...</td>
<td>http://pbs.twimg.com/media/EyyKAoaUUAElm-e.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>368</th>
<td>1371883298805403649</td>
<td>๐ $LINK Number of Receiving Addresses (7d MA) ...</td>
<td>http://pbs.twimg.com/media/EwnoltOWEAAS4mG.jpg</td>
<td>1373216720974979072</td>
<td>๐ $LINK Number of Receiving Addresses (7d MA) ...</td>
<td>http://pbs.twimg.com/media/Ew6lVGYXEAE6Ugi.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>1112</th>
<td>1377679115159887873</td>
<td>April is National Distracted Driving Awareness...</td>
<td>http://pbs.twimg.com/media/Ex5_u7UVIAARjQ2.jpg</td>
<td>1379075258448281608</td>
<td>April is Distracted Driving Awareness Month. ...</td>
<td>http://pbs.twimg.com/media/EyN1YjpWUAMc5ak.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>264</th>
<td>1330727515741167619</td>
<td>โฅ๏ธVerse Of The Dayโฅ๏ธ\n.\n#VerseOfTheDay #Quran...</td>
<td>http://pbs.twimg.com/media/EnexnydXIAYuI11.jpg</td>
<td>1332623263495819264</td>
<td>โฅ๏ธVerse Of The Dayโฅ๏ธ\n.\n#VerseOfTheDay #Quran...</td>
<td>http://pbs.twimg.com/media/En5ty1VXUAATALP.jpg</td>
<td>NoEntailment</td>
</tr>
<tr>
<th>865</th>
<td>1377784616275296261</td>
<td>No white picket fence can keep us in. #TBT 200...</td>
<td>http://pbs.twimg.com/media/Ex7fzouWQAITAq8.jpg</td>
<td>1380175915804672012</td>
<td>Sometimes you just need to change your altitud...</td>
<td>http://pbs.twimg.com/media/EydernQXIAk2g5v.jpg</td>
<td>NoEntailment</td>
</tr>
</tbody>
</table>
</div>
The columns we are interested in are the following:
* `text_1`
* `image_1`
* `text_2`
* `image_2`
* `label`
The entailment task is formulated as the following:
***Given the pairs of (`text_1`, `image_1`) and (`text_2`, `image_2`) do they entail (or
not entail or contradict) each other?***
We have the images already downloaded. `image_1` is downloaded with `id_1` as its filename
and `image_2` is downloaded with `id_2` as its filename. In the next step, we will add two
more columns to `df` - filepaths of `image_1`s and `image_2`s.
```python
images_one_paths = []
images_two_paths = []
for idx in range(len(df)):
current_row = df.iloc[idx]
id_1 = current_row["id_1"]
id_2 = current_row["id_2"]
    extension_one = current_row["image_1"].split(".")[-1]
    extension_two = current_row["image_2"].split(".")[-1]
    image_one_path = os.path.join(image_base_path, str(id_1) + f".{extension_one}")
    image_two_path = os.path.join(image_base_path, str(id_2) + f".{extension_two}")
images_one_paths.append(image_one_path)
images_two_paths.append(image_two_path)
df["image_1_path"] = images_one_paths
df["image_2_path"] = images_two_paths
# Create another column containing the integer ids of
# the string labels.
df["label_idx"] = df["label"].apply(lambda x: label_map[x])
```
---
## Dataset visualization
```python
def visualize(idx):
current_row = df.iloc[idx]
image_1 = plt.imread(current_row["image_1_path"])
image_2 = plt.imread(current_row["image_2_path"])
text_1 = current_row["text_1"]
text_2 = current_row["text_2"]
label = current_row["label"]
plt.subplot(1, 2, 1)
plt.imshow(image_1)
plt.axis("off")
plt.title("Image One")
plt.subplot(1, 2, 2)
    plt.imshow(image_2)
plt.axis("off")
plt.title("Image Two")
plt.show()
print(f"Text one: {text_1}")
print(f"Text two: {text_2}")
print(f"Label: {label}")
random_idx = np.random.choice(len(df))
visualize(random_idx)
random_idx = np.random.choice(len(df))
visualize(random_idx)
```

<div class="k-default-codeblock">
```
Text one: Friends, interested all go to have a look!
@ThePartyGoddess @OurLadyAngels @BJsWholesale @Richard_Jeni @FashionLavidaG @RapaRooski @DMVTHING @DeMarcoReports @LobidaFo @DeMarcoMorgan https://t.co/cStULl7y7G
Text two: Friends, interested all go to have a look!
@smittyses @CYosabel @crum_7 @CrumDarrell @ElymalikU @jenloarn @SoCodiePrevost @roblowry82 @Crummy_14 @CSchmelzenbach https://t.co/IZphLTNzgl
Label: Contradictory
```
</div>

<div class="k-default-codeblock">
```
Text one: ๐ KICK OFF @ MARDEN SPORTS COMPLEX
```
</div>
<div class="k-default-codeblock">
```
We're underway in the Round 6 opener!
```
</div>
<div class="k-default-codeblock">
```
๐บ: @Foxtel, @kayosports
๐ฑ: My Football Live app https://t.co/wHSpvQaoGC
```
</div>
<div class="k-default-codeblock">
```
#WLeague #ADLvMVC #AUFC #MVFC https://t.co/3Smp8KXm8W
Text two: ๐ KICK OFF @ MARSDEN SPORTS COMPLEX
```
</div>
<div class="k-default-codeblock">
```
We're underway in sunny Adelaide!
```
</div>
<div class="k-default-codeblock">
```
๐บ: @Foxtel, @kayosports
๐ฑ: My Football Live app https://t.co/wHSpvQaoGC
```
</div>
<div class="k-default-codeblock">
```
#ADLvCBR #WLeague #AUFC #UnitedAlways https://t.co/fG1PyLQXM4
Label: NoEntailment
```
</div>
---
## Train/test split
The dataset suffers from
[class imbalance problem](https://developers.google.com/machine-learning/glossary#class-imbalanced-dataset).
We can confirm that in the following cell.
```python
df["label"].value_counts()
```
<div class="k-default-codeblock">
```
NoEntailment 1182
Implies 109
Contradictory 109
Name: label, dtype: int64
```
</div>
To account for that we will go for a stratified split.
```python
# 10% for test
train_df, test_df = train_test_split(
df, test_size=0.1, stratify=df["label"].values, random_state=42
)
# 5% for validation
train_df, val_df = train_test_split(
train_df, test_size=0.05, stratify=train_df["label"].values, random_state=42
)
print(f"Total training examples: {len(train_df)}")
print(f"Total validation examples: {len(val_df)}")
print(f"Total test examples: {len(test_df)}")
```
<div class="k-default-codeblock">
```
Total training examples: 1197
Total validation examples: 63
Total test examples: 140
```
</div>
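Stratification keeps the label distribution consistent across the splits, but it does not
remove the imbalance itself. If needed, you could additionally pass class weights to
`model.fit()`. Below is a sketch (not used in the rest of this example):
```python
from sklearn.utils.class_weight import compute_class_weight
classes = np.unique(train_df["label_idx"])
class_weights = compute_class_weight(
    class_weight="balanced", classes=classes, y=train_df["label_idx"]
)
class_weight_dict = dict(zip(classes, class_weights))
# Later: model.fit(..., class_weight=class_weight_dict)
```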
---
## Data input pipeline
TensorFlow Hub provides
[variety of BERT family of models](https://www.tensorflow.org/text/tutorials/bert_glue#loading_models_from_tensorflow_hub).
Each of those models comes with a
corresponding preprocessing layer. You can learn more about these models and their
preprocessing layers from
[this resource](https://www.tensorflow.org/text/tutorials/bert_glue#loading_models_from_tensorflow_hub).
To keep the runtime of this example relatively short, we will use a smaller variant of
the original BERT model.
```python
# Define TF Hub paths to the BERT encoder and its preprocessor
bert_model_path = (
"https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1"
)
bert_preprocess_path = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
```
Our text preprocessing code mostly comes from
[this tutorial](https://www.tensorflow.org/text/tutorials/bert_glue).
You are highly encouraged to check out the tutorial to learn more about the input
preprocessing.
```python
def make_bert_preprocessing_model(sentence_features, seq_length=128):
"""Returns Model mapping string features to BERT inputs.
Args:
sentence_features: A list with the names of string-valued features.
seq_length: An integer that defines the sequence length of BERT inputs.
Returns:
A Keras Model that can be called on a list or dict of string Tensors
(with the order or names, resp., given by sentence_features) and
returns a dict of tensors for input to BERT.
"""
input_segments = [
tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft)
for ft in sentence_features
]
# Tokenize the text to word pieces.
bert_preprocess = hub.load(bert_preprocess_path)
tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name="tokenizer")
segments = [tokenizer(s) for s in input_segments]
# Optional: Trim segments in a smart way to fit seq_length.
# Simple cases (like this example) can skip this step and let
# the next step apply a default truncation to approximately equal lengths.
truncated_segments = segments
# Pack inputs. The details (start/end token ids, dict of output tensors)
# are model-dependent, so this gets loaded from the SavedModel.
packer = hub.KerasLayer(
bert_preprocess.bert_pack_inputs,
arguments=dict(seq_length=seq_length),
name="packer",
)
model_inputs = packer(truncated_segments)
return keras.Model(input_segments, model_inputs)
bert_preprocess_model = make_bert_preprocessing_model(["text_1", "text_2"])
keras.utils.plot_model(bert_preprocess_model, show_shapes=True, show_dtype=True)
```

### Run the preprocessor on a sample input
```python
idx = np.random.choice(len(train_df))
row = train_df.iloc[idx]
sample_text_1, sample_text_2 = row["text_1"], row["text_2"]
print(f"Text 1: {sample_text_1}")
print(f"Text 2: {sample_text_2}")
test_text = [np.array([sample_text_1]), np.array([sample_text_2])]
text_preprocessed = bert_preprocess_model(test_text)
print("Keys : ", list(text_preprocessed.keys()))
print("Shape Word Ids : ", text_preprocessed["input_word_ids"].shape)
print("Word Ids : ", text_preprocessed["input_word_ids"][0, :16])
print("Shape Mask : ", text_preprocessed["input_mask"].shape)
print("Input Mask : ", text_preprocessed["input_mask"][0, :16])
print("Shape Type Ids : ", text_preprocessed["input_type_ids"].shape)
print("Type Ids : ", text_preprocessed["input_type_ids"][0, :16])
```
<div class="k-default-codeblock">
```
Text 1: Renewables met 97% of Scotland's electricity demand in 2020!!!!
https://t.co/wi5c9UFAUF https://t.co/arcuBgh0BP
Text 2: Renewables met 97% of Scotland's electricity demand in 2020 https://t.co/SrhyqPnIkU https://t.co/LORgvTM7Sn
Keys : ['input_mask', 'input_word_ids', 'input_type_ids']
Shape Word Ids : (1, 128)
Word Ids : tf.Tensor(
[ 101 13918 2015 2777 5989 1003 1997 3885 1005 1055 6451 5157
1999 12609 999 999], shape=(16,), dtype=int32)
Shape Mask : (1, 128)
Input Mask : tf.Tensor([1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1], shape=(16,), dtype=int32)
Shape Type Ids : (1, 128)
Type Ids : tf.Tensor([0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0], shape=(16,), dtype=int32)
```
</div>
We will now create `tf.data.Dataset` objects from the dataframes.
Note that the text inputs will be preprocessed as a part of the data input pipeline. But
the preprocessing modules can also be a part of their corresponding BERT models. This
helps reduce the training/serving skew and lets our models operate with raw text inputs.
Follow [this tutorial](https://www.tensorflow.org/text/tutorials/classify_text_with_bert)
to learn more about how to incorporate the preprocessing modules directly inside the
models.
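For reference, here is a minimal sketch of what baking the preprocessing into the model
itself could look like for a single text input (this example keeps the preprocessing in
the `tf.data` pipeline instead):
```python
# Sketch only, not used below: a model head that accepts raw strings.
text_input = keras.Input(shape=(), dtype=tf.string, name="text")
bert_inputs = hub.KerasLayer(bert_preprocess_path, name="preprocessing")(text_input)
bert_outputs = hub.KerasLayer(bert_model_path, trainable=False, name="bert_encoder")(bert_inputs)
pooled_output = bert_outputs["pooled_output"]  # shape: (batch_size, 256) for this small BERT
```
Back to the pipeline-based approach used in this example: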
```python
def dataframe_to_dataset(dataframe):
columns = ["image_1_path", "image_2_path", "text_1", "text_2", "label_idx"]
dataframe = dataframe[columns].copy()
labels = dataframe.pop("label_idx")
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
ds = ds.shuffle(buffer_size=len(dataframe))
return ds
```
### Preprocessing utilities
```python
resize = (128, 128)
bert_input_features = ["input_word_ids", "input_type_ids", "input_mask"]
def preprocess_image(image_path):
extension = tf.strings.split(image_path)[-1]
image = tf.io.read_file(image_path)
if extension == b"jpg":
image = tf.image.decode_jpeg(image, 3)
else:
image = tf.image.decode_png(image, 3)
image = tf.image.resize(image, resize)
return image
def preprocess_text(text_1, text_2):
text_1 = tf.convert_to_tensor([text_1])
text_2 = tf.convert_to_tensor([text_2])
output = bert_preprocess_model([text_1, text_2])
output = {feature: tf.squeeze(output[feature]) for feature in bert_input_features}
return output
def preprocess_text_and_image(sample):
image_1 = preprocess_image(sample["image_1_path"])
image_2 = preprocess_image(sample["image_2_path"])
text = preprocess_text(sample["text_1"], sample["text_2"])
return {"image_1": image_1, "image_2": image_2, "text": text}
```
### Create the final datasets
```python
batch_size = 32
auto = tf.data.AUTOTUNE
def prepare_dataset(dataframe, training=True):
ds = dataframe_to_dataset(dataframe)
if training:
ds = ds.shuffle(len(train_df))
ds = ds.map(lambda x, y: (preprocess_text_and_image(x), y)).cache()
ds = ds.batch(batch_size).prefetch(auto)
return ds
train_ds = prepare_dataset(train_df)
validation_ds = prepare_dataset(val_df, False)
test_ds = prepare_dataset(test_df, False)
```
---
## Model building utilities
Our final model will accept two images along with their text counterparts. While the
images will be directly fed to the model the text inputs will first be preprocessed and
then will make it into the model. Below is a visual illustration of this approach:

The model consists of the following elements:
* A standalone encoder for the images. We will use a
[ResNet50V2](https://arxiv.org/abs/1603.05027) pre-trained on the ImageNet-1k dataset for
this.
* A standalone encoder for the text. A pre-trained BERT model will be used for this.
After extracting the individual embeddings, they will be projected in an identical space.
Finally, their projections will be concatenated and be fed to the final classification
layer.
This is a multi-class classification problem involving the following classes:
* NoEntailment
* Implies
* Contradictory
`project_embeddings()`, `create_vision_encoder()`, and `create_text_encoder()` utilities
are referred from [this example](https://keras.io/examples/nlp/nl_image_search/).
Projection utilities
```python
def project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
):
projected_embeddings = keras.layers.Dense(units=projection_dims)(embeddings)
for _ in range(num_projection_layers):
x = tf.nn.gelu(projected_embeddings)
x = keras.layers.Dense(projection_dims)(x)
x = keras.layers.Dropout(dropout_rate)(x)
x = keras.layers.Add()([projected_embeddings, x])
projected_embeddings = keras.layers.LayerNormalization()(x)
return projected_embeddings
```
Vision encoder utilities
```python
def create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained ResNet50V2 model to be used as the base encoder.
resnet_v2 = keras.applications.ResNet50V2(
include_top=False, weights="imagenet", pooling="avg"
)
# Set the trainability of the base encoder.
for layer in resnet_v2.layers:
layer.trainable = trainable
# Receive the images as inputs.
image_1 = keras.Input(shape=(128, 128, 3), name="image_1")
image_2 = keras.Input(shape=(128, 128, 3), name="image_2")
# Preprocess the input image.
preprocessed_1 = keras.applications.resnet_v2.preprocess_input(image_1)
preprocessed_2 = keras.applications.resnet_v2.preprocess_input(image_2)
    # Generate the embeddings for the images using the resnet_v2 model and
    # concatenate them.
embeddings_1 = resnet_v2(preprocessed_1)
embeddings_2 = resnet_v2(preprocessed_2)
embeddings = keras.layers.Concatenate()([embeddings_1, embeddings_2])
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the vision encoder model.
return keras.Model([image_1, image_2], outputs, name="vision_encoder")
```
Text encoder utilities
```python
def create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained BERT model to be used as the base encoder.
bert = hub.KerasLayer(bert_model_path, name="bert",)
# Set the trainability of the base encoder.
bert.trainable = trainable
# Receive the text as inputs.
bert_input_features = ["input_type_ids", "input_mask", "input_word_ids"]
inputs = {
feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature)
for feature in bert_input_features
}
# Generate embeddings for the preprocessed text using the BERT model.
embeddings = bert(inputs)["pooled_output"]
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the text encoder model.
return keras.Model(inputs, outputs, name="text_encoder")
```
Multimodal model utilities
```python
def create_multimodal_model(
num_projection_layers=1,
projection_dims=256,
dropout_rate=0.1,
vision_trainable=False,
text_trainable=False,
):
# Receive the images as inputs.
image_1 = keras.Input(shape=(128, 128, 3), name="image_1")
image_2 = keras.Input(shape=(128, 128, 3), name="image_2")
# Receive the text as inputs.
bert_input_features = ["input_type_ids", "input_mask", "input_word_ids"]
text_inputs = {
feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature)
for feature in bert_input_features
}
# Create the encoders.
vision_encoder = create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, vision_trainable
)
text_encoder = create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, text_trainable
)
# Fetch the embedding projections.
vision_projections = vision_encoder([image_1, image_2])
text_projections = text_encoder(text_inputs)
# Concatenate the projections and pass through the classification layer.
concatenated = keras.layers.Concatenate()([vision_projections, text_projections])
outputs = keras.layers.Dense(3, activation="softmax")(concatenated)
return keras.Model([image_1, image_2, text_inputs], outputs)
multimodal_model = create_multimodal_model()
keras.utils.plot_model(multimodal_model, show_shapes=True)
```

You can inspect the structure of the individual encoders as well by setting the
`expand_nested` argument of `plot_model()` to `True`. You are encouraged
to play with the different hyperparameters involved in building this model and
observe how the final performance is affected.
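For instance, a nested view of the full model can be produced with the same call used above,
with `expand_nested` added:
```python
# Also renders the internal structure of the vision and text encoders.
keras.utils.plot_model(multimodal_model, show_shapes=True, expand_nested=True)
```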
---
## Compile and train the model
```python
multimodal_model.compile(
optimizer="adam", loss="sparse_categorical_crossentropy", metrics="accuracy"
)
history = multimodal_model.fit(train_ds, validation_data=validation_ds, epochs=10)
```
<div class="k-default-codeblock">
```
Epoch 1/10
38/38 [==============================] - 49s 789ms/step - loss: 1.0014 - accuracy: 0.8229 - val_loss: 0.5514 - val_accuracy: 0.8571
Epoch 2/10
38/38 [==============================] - 3s 90ms/step - loss: 0.4019 - accuracy: 0.8814 - val_loss: 0.5866 - val_accuracy: 0.8571
Epoch 3/10
38/38 [==============================] - 3s 90ms/step - loss: 0.3557 - accuracy: 0.8897 - val_loss: 0.5929 - val_accuracy: 0.8571
Epoch 4/10
38/38 [==============================] - 3s 91ms/step - loss: 0.2877 - accuracy: 0.9006 - val_loss: 0.6272 - val_accuracy: 0.8571
Epoch 5/10
38/38 [==============================] - 3s 91ms/step - loss: 0.1796 - accuracy: 0.9398 - val_loss: 0.8545 - val_accuracy: 0.8254
Epoch 6/10
38/38 [==============================] - 3s 91ms/step - loss: 0.1292 - accuracy: 0.9566 - val_loss: 1.2276 - val_accuracy: 0.8413
Epoch 7/10
38/38 [==============================] - 3s 91ms/step - loss: 0.1015 - accuracy: 0.9666 - val_loss: 1.2914 - val_accuracy: 0.7778
Epoch 8/10
38/38 [==============================] - 3s 92ms/step - loss: 0.1253 - accuracy: 0.9524 - val_loss: 1.1944 - val_accuracy: 0.8413
Epoch 9/10
38/38 [==============================] - 3s 92ms/step - loss: 0.3064 - accuracy: 0.9131 - val_loss: 1.2162 - val_accuracy: 0.8095
Epoch 10/10
38/38 [==============================] - 3s 92ms/step - loss: 0.2212 - accuracy: 0.9248 - val_loss: 1.1080 - val_accuracy: 0.8413
```
</div>
---
## Evaluate the model
```python
_, acc = multimodal_model.evaluate(test_ds)
print(f"Accuracy on the test set: {round(acc * 100, 2)}%.")
```
<div class="k-default-codeblock">
```
5/5 [==============================] - 6s 1s/step - loss: 0.8390 - accuracy: 0.8429
Accuracy on the test set: 84.29%.
```
</div>
---
## Additional notes regarding training
**Incorporating regularization**:
The training logs suggest that the model is starting to overfit and may have benefitted
from regularization. Dropout ([Srivastava et al.](https://jmlr.org/papers/v15/srivastava14a.html))
is a simple yet powerful regularization technique that we can use in our model.
But how should we apply it here?
We could always introduce Dropout (`keras.layers.Dropout`) in between different layers of the model.
But here is another recipe. Our model expects inputs from two different data modalities.
What if either of the modalities is not present during inference? To account for this,
we can introduce Dropout to the individual projections just before they get concatenated:
```python
vision_projections = keras.layers.Dropout(rate)(vision_projections)
text_projections = keras.layers.Dropout(rate)(text_projections)
concatenated = keras.layers.Concatenate()([vision_projections, text_projections])
```
**Attending to what matters**:
Do all parts of the images correspond equally to their textual counterparts? It's likely
not the case. To make our model focus only on the most important bits of the images that relate
well to their corresponding textual parts, we can use "cross-attention":
```python
# Embeddings.
vision_projections = vision_encoder([image_1, image_2])
text_projections = text_encoder(text_inputs)
# Cross-attention (Luong-style).
query_value_attention_seq = keras.layers.Attention(use_scale=True, dropout=0.2)(
[vision_projections, text_projections]
)
# Concatenate.
concatenated = keras.layers.Concatenate()([vision_projections, text_projections])
contextual = keras.layers.Concatenate()([concatenated, query_value_attention_seq])
```
To see this in action, refer to
[this notebook](https://github.com/sayakpaul/Multimodal-Entailment-Baseline/blob/main/multimodal_entailment_attn.ipynb).
**Handling class imbalance**:
The dataset suffers from class imbalance. Investigating the confusion matrix of the
above model reveals that it performs poorly on the minority classes. If we had used a
weighted loss then the training would have been more guided. You can check out
[this notebook](https://github.com/sayakpaul/Multimodal-Entailment-Baseline/blob/main/multimodal_entailment.ipynb)
that takes class-imbalance into account during model training.
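A lightweight alternative to a weighted loss is to pass per-class weights to `model.fit()`.
Below is a minimal sketch, assuming the `label_idx` column and the three-class setup used
above; the inverse-frequency weighting is one possible choice, not necessarily the scheme
used in the linked notebook:
```python
# Inverse-frequency class weights computed from the training dataframe.
counts = train_df["label_idx"].value_counts().sort_index()
class_weight = {int(i): len(train_df) / (len(counts) * c) for i, c in counts.items()}

multimodal_model.fit(
    train_ds,
    validation_data=validation_ds,
    epochs=10,
    class_weight=class_weight,
)
```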
**Using only text inputs**:
Also, what if we had only incorporated text inputs for the entailment task? Because of
the nature of the text inputs encountered on social media platforms, text inputs alone
would have hurt the final performance. Under a similar training setup, by only using
text inputs we get to 67.14% top-1 accuracy on the same test set. Refer to
[this notebook](https://github.com/sayakpaul/Multimodal-Entailment-Baseline/blob/main/text_entailment.ipynb)
for details.
Finally, here is a table comparing different approaches taken for the entailment task:
| Type | Standard<br>Cross-entropy | Loss-weighted<br>Cross-entropy | Focal Loss |
|:---: |:---: |:---: |:---: |
| Multimodal | 77.86% | 67.86% | 86.43% |
| Only text | 67.14% | 11.43% | 37.86% |
You can check out [this repository](https://git.io/JR0HU) to learn more about how the
experiments were conducted to obtain these numbers.
---
## Final remarks
* The architecture we used in this example is too large for the number of data points
available for training. It's going to benefit from more data.
* We used a smaller variant of the original BERT model. Chances are high that with a
larger variant, this performance will be improved. TensorFlow Hub
[provides](https://www.tensorflow.org/text/tutorials/bert_glue#loading_models_from_tensorflow_hub)
a number of different BERT models that you can experiment with.
* We kept the pre-trained models frozen. Fine-tuning them on the multimodal entailment
task would likely result in better performance.
* We built a simple baseline model for the multimodal entailment task. There are various
approaches that have been proposed to tackle the entailment problem.
[This presentation deck](https://docs.google.com/presentation/d/1mAB31BCmqzfedreNZYn4hsKPFmgHA9Kxz219DzyRY3c/edit?usp=sharing)
from the
[Recognizing Multimodal Entailment](https://multimodal-entailment.github.io/)
tutorial provides a comprehensive overview.
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/multimodal-entailment) and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/multimodal_entailment).
| keras-io/examples/nlp/md/multimodal_entailment.md/0 | {
"file_path": "keras-io/examples/nlp/md/multimodal_entailment.md",
"repo_id": "keras-io",
"token_count": 12262
} | 92 |
# Text Extraction with BERT
**Author:** [Apoorv Nandan](https://twitter.com/NandanApoorv)<br>
**Date created:** 2020/05/23<br>
**Last modified:** 2020/05/23<br>
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/text_extraction_with_bert.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/text_extraction_with_bert.py)
**Description:** Fine tune pretrained BERT from HuggingFace Transformers on SQuAD.
---
## Introduction
This demonstration uses SQuAD (Stanford Question-Answering Dataset).
In SQuAD, an input consists of a question, and a paragraph for context.
The goal is to find the span of text in the paragraph that answers the question.
We evaluate our performance on this data with the "Exact Match" metric,
which measures the percentage of predictions that exactly match any one of the
ground-truth answers.
We fine-tune a BERT model to perform this task as follows:
1. Feed the context and the question as inputs to BERT.
2. Take two vectors S and T with dimensions equal to that of
hidden states in BERT.
3. Compute the probability of each token being the start and end of
the answer span. The probability of a token being the start of
the answer is given by a dot product between S and the representation
of the token in the last layer of BERT, followed by a softmax over all tokens.
The probability of a token being the end of the answer is computed
similarly with the vector T.
4. Fine-tune BERT and learn S and T along the way.
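Written out, step 3 above amounts to the following (notation ours: `h_i` denotes the
final-layer BERT representation of token `i`):
```latex
P_{start}(i) = \frac{\exp(S \cdot h_i)}{\sum_j \exp(S \cdot h_j)}, \qquad
P_{end}(i) = \frac{\exp(T \cdot h_i)}{\sum_j \exp(T \cdot h_j)}
```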
**References:**
- [BERT](https://arxiv.org/pdf/1810.04805.pdf)
- [SQuAD](https://arxiv.org/abs/1606.05250)
## Setup
```python
import os
import re
import json
import string
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer, TFBertModel, BertConfig
max_len = 384
configuration = BertConfig() # default parameters and configuration for BERT
```
---
## Set-up BERT tokenizer
```python
# Save the slow pretrained tokenizer
slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
save_path = "bert_base_uncased/"
if not os.path.exists(save_path):
os.makedirs(save_path)
slow_tokenizer.save_pretrained(save_path)
# Load the fast tokenizer from saved file
tokenizer = BertWordPieceTokenizer("bert_base_uncased/vocab.txt", lowercase=True)
```
---
## Load the data
```python
train_data_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json"
train_path = keras.utils.get_file("train.json", train_data_url)
eval_data_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"
eval_path = keras.utils.get_file("eval.json", eval_data_url)
```
---
## Preprocess the data
1. Go through the JSON file and store every record as a `SquadExample` object.
2. Go through each `SquadExample` and create `x_train, y_train, x_eval, y_eval`.
```python
class SquadExample:
def __init__(self, question, context, start_char_idx, answer_text, all_answers):
self.question = question
self.context = context
self.start_char_idx = start_char_idx
self.answer_text = answer_text
self.all_answers = all_answers
self.skip = False
def preprocess(self):
context = self.context
question = self.question
answer_text = self.answer_text
start_char_idx = self.start_char_idx
# Clean context, answer and question
context = " ".join(str(context).split())
question = " ".join(str(question).split())
answer = " ".join(str(answer_text).split())
# Find end character index of answer in context
end_char_idx = start_char_idx + len(answer)
if end_char_idx >= len(context):
self.skip = True
return
# Mark the character indexes in context that are in answer
is_char_in_ans = [0] * len(context)
for idx in range(start_char_idx, end_char_idx):
is_char_in_ans[idx] = 1
# Tokenize context
tokenized_context = tokenizer.encode(context)
# Find tokens that were created from answer characters
ans_token_idx = []
for idx, (start, end) in enumerate(tokenized_context.offsets):
if sum(is_char_in_ans[start:end]) > 0:
ans_token_idx.append(idx)
if len(ans_token_idx) == 0:
self.skip = True
return
# Find start and end token index for tokens from answer
start_token_idx = ans_token_idx[0]
end_token_idx = ans_token_idx[-1]
# Tokenize question
tokenized_question = tokenizer.encode(question)
# Create inputs
input_ids = tokenized_context.ids + tokenized_question.ids[1:]
token_type_ids = [0] * len(tokenized_context.ids) + [1] * len(
tokenized_question.ids[1:]
)
attention_mask = [1] * len(input_ids)
# Pad and create attention masks.
# Skip if truncation is needed
padding_length = max_len - len(input_ids)
if padding_length > 0: # pad
input_ids = input_ids + ([0] * padding_length)
attention_mask = attention_mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
elif padding_length < 0: # skip
self.skip = True
return
self.input_ids = input_ids
self.token_type_ids = token_type_ids
self.attention_mask = attention_mask
self.start_token_idx = start_token_idx
self.end_token_idx = end_token_idx
self.context_token_to_char = tokenized_context.offsets
with open(train_path) as f:
raw_train_data = json.load(f)
with open(eval_path) as f:
raw_eval_data = json.load(f)
def create_squad_examples(raw_data):
squad_examples = []
for item in raw_data["data"]:
for para in item["paragraphs"]:
context = para["context"]
for qa in para["qas"]:
question = qa["question"]
answer_text = qa["answers"][0]["text"]
all_answers = [_["text"] for _ in qa["answers"]]
start_char_idx = qa["answers"][0]["answer_start"]
squad_eg = SquadExample(
question, context, start_char_idx, answer_text, all_answers
)
squad_eg.preprocess()
squad_examples.append(squad_eg)
return squad_examples
def create_inputs_targets(squad_examples):
dataset_dict = {
"input_ids": [],
"token_type_ids": [],
"attention_mask": [],
"start_token_idx": [],
"end_token_idx": [],
}
for item in squad_examples:
if item.skip == False:
for key in dataset_dict:
dataset_dict[key].append(getattr(item, key))
for key in dataset_dict:
dataset_dict[key] = np.array(dataset_dict[key])
x = [
dataset_dict["input_ids"],
dataset_dict["token_type_ids"],
dataset_dict["attention_mask"],
]
y = [dataset_dict["start_token_idx"], dataset_dict["end_token_idx"]]
return x, y
train_squad_examples = create_squad_examples(raw_train_data)
x_train, y_train = create_inputs_targets(train_squad_examples)
print(f"{len(train_squad_examples)} training points created.")
eval_squad_examples = create_squad_examples(raw_eval_data)
x_eval, y_eval = create_inputs_targets(eval_squad_examples)
print(f"{len(eval_squad_examples)} evaluation points created.")
```
<div class="k-default-codeblock">
```
87599 training points created.
10570 evaluation points created.
```
</div>
---
## Create the Question-Answering Model using BERT and Functional API
```python
def create_model():
## BERT encoder
encoder = TFBertModel.from_pretrained("bert-base-uncased")
## QA Model
input_ids = layers.Input(shape=(max_len,), dtype=tf.int32)
token_type_ids = layers.Input(shape=(max_len,), dtype=tf.int32)
attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32)
embedding = encoder(
input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask
)[0]
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(embedding)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(embedding)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation(keras.activations.softmax)(start_logits)
end_probs = layers.Activation(keras.activations.softmax)(end_logits)
model = keras.Model(
inputs=[input_ids, token_type_ids, attention_mask],
outputs=[start_probs, end_probs],
)
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False)
    optimizer = keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=[loss, loss])
return model
```
This code should preferably be run on Google Colab TPU runtime.
With Colab TPUs, each epoch will take 5-6 minutes.
```python
use_tpu = True
if use_tpu:
# Create distribution strategy
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
strategy = tf.distribute.TPUStrategy(tpu)
# Create model
with strategy.scope():
model = create_model()
else:
model = create_model()
model.summary()
```
<div class="k-default-codeblock">
```
INFO:absl:Entering into master device scope: /job:worker/replica:0/task:0/device:CPU:0
INFO:tensorflow:Initializing the TPU system: grpc://10.48.159.170:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 384)] 0
__________________________________________________________________________________________________
input_3 (InputLayer) [(None, 384)] 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, 384)] 0
__________________________________________________________________________________________________
tf_bert_model (TFBertModel) ((None, 384, 768), ( 109482240 input_1[0][0]
__________________________________________________________________________________________________
start_logit (Dense) (None, 384, 1) 768 tf_bert_model[0][0]
__________________________________________________________________________________________________
end_logit (Dense) (None, 384, 1) 768 tf_bert_model[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 384) 0 start_logit[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 384) 0 end_logit[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 384) 0 flatten[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 384) 0 flatten_1[0][0]
==================================================================================================
Total params: 109,483,776
Trainable params: 109,483,776
Non-trainable params: 0
__________________________________________________________________________________________________
```
</div>
---
## Create evaluation Callback
This callback will compute the exact match score using the validation data
after every epoch.
```python
def normalize_text(text):
text = text.lower()
# Remove punctuations
exclude = set(string.punctuation)
text = "".join(ch for ch in text if ch not in exclude)
# Remove articles
regex = re.compile(r"\b(a|an|the)\b", re.UNICODE)
text = re.sub(regex, " ", text)
# Remove extra white space
text = " ".join(text.split())
return text
class ExactMatch(keras.callbacks.Callback):
"""
Each `SquadExample` object contains the character level offsets for each token
in its input paragraph. We use them to get back the span of text corresponding
to the tokens between our predicted start and end tokens.
All the ground-truth answers are also present in each `SquadExample` object.
We calculate the percentage of data points where the span of text obtained
from model predictions matches one of the ground-truth answers.
"""
def __init__(self, x_eval, y_eval):
self.x_eval = x_eval
self.y_eval = y_eval
def on_epoch_end(self, epoch, logs=None):
pred_start, pred_end = self.model.predict(self.x_eval)
count = 0
eval_examples_no_skip = [_ for _ in eval_squad_examples if _.skip == False]
for idx, (start, end) in enumerate(zip(pred_start, pred_end)):
squad_eg = eval_examples_no_skip[idx]
offsets = squad_eg.context_token_to_char
start = np.argmax(start)
end = np.argmax(end)
if start >= len(offsets):
continue
pred_char_start = offsets[start][0]
if end < len(offsets):
pred_char_end = offsets[end][1]
pred_ans = squad_eg.context[pred_char_start:pred_char_end]
else:
pred_ans = squad_eg.context[pred_char_start:]
normalized_pred_ans = normalize_text(pred_ans)
normalized_true_ans = [normalize_text(_) for _ in squad_eg.all_answers]
if normalized_pred_ans in normalized_true_ans:
count += 1
acc = count / len(self.y_eval[0])
print(f"\nepoch={epoch+1}, exact match score={acc:.2f}")
```
---
## Train and Evaluate
```python
exact_match_callback = ExactMatch(x_eval, y_eval)
model.fit(
x_train,
y_train,
epochs=1, # For demonstration, 3 epochs are recommended
verbose=2,
batch_size=64,
callbacks=[exact_match_callback],
)
```
<div class="k-default-codeblock">
```
epoch=1, exact match score=0.78
1346/1346 - 350s - activation_7_loss: 1.3488 - loss: 2.5905 - activation_8_loss: 1.2417
<tensorflow.python.keras.callbacks.History at 0x7fc78b4458d0>
```
</div>
| keras-io/examples/nlp/md/text_extraction_with_bert.md/0 | {
"file_path": "keras-io/examples/nlp/md/text_extraction_with_bert.md",
"repo_id": "keras-io",
"token_count": 6093
} | 93 |
"""
Title: Abstractive Summarization with Hugging Face Transformers
Author: Sreyan Ghosh
Date created: 2022/07/04
Last modified: 2022/08/28
Description: Training T5 using Hugging Face Transformers for Abstractive Summarization.
Accelerator: GPU
"""
"""
## Introduction
Automatic summarization is one of the central problems in
Natural Language Processing (NLP). It poses several challenges relating to language
understanding (e.g. identifying important content)
and generation (e.g. aggregating and rewording the identified content into a summary).
In this tutorial, we tackle the single-document summarization task
with an abstractive modeling approach. The primary idea here is to generate a short,
single-sentence news summary answering the question "What is the news article about?".
This approach to summarization is also known as *Abstractive Summarization* and has
seen growing interest among researchers in various disciplines.
Following prior work, we aim to tackle this problem using a
sequence-to-sequence model. [Text-to-Text Transfer Transformer (`T5`)](https://arxiv.org/abs/1910.10683)
is a [Transformer-based](https://arxiv.org/abs/1706.03762) model built on the encoder-decoder
architecture, pretrained on a multi-task mixture of unsupervised and supervised tasks where each task
is converted into a text-to-text format. T5 shows impressive results in a variety of sequence-to-sequence
(sequence in this notebook refers to text) like summarization, translation, etc.
In this notebook, we will fine-tune the pretrained T5 on the Abstractive Summarization
task using Hugging Face Transformers on the `XSum` dataset loaded from Hugging Face Datasets.
"""
"""
## Setup
"""
"""
### Installing the requirements
"""
"""shell
!pip install transformers==4.20.0
!pip install keras_nlp==0.3.0
!pip install datasets
!pip install huggingface-hub
!pip install nltk
!pip install rouge-score
"""
"""
### Importing the necessary libraries
"""
import os
import logging
import nltk
import numpy as np
import tensorflow as tf
from tensorflow import keras
# Only log error messages
tf.get_logger().setLevel(logging.ERROR)
os.environ["TOKENIZERS_PARALLELISM"] = "false"
"""
### Define certain variables
"""
# The percentage of the dataset you want to split as train and test
TRAIN_TEST_SPLIT = 0.1
MAX_INPUT_LENGTH = 1024 # Maximum length of the input to the model
MIN_TARGET_LENGTH = 5 # Minimum length of the output by the model
MAX_TARGET_LENGTH = 128 # Maximum length of the output by the model
BATCH_SIZE = 8 # Batch-size for training our model
LEARNING_RATE = 2e-5 # Learning-rate for training our model
MAX_EPOCHS = 1 # Maximum number of epochs we will train the model for
# This notebook is built on the t5-small checkpoint from the Hugging Face Model Hub
MODEL_CHECKPOINT = "t5-small"
"""
## Load the dataset
We will now download the [Extreme Summarization (XSum)](https://arxiv.org/abs/1808.08745) dataset.
The dataset consists of BBC articles and accompanying single sentence summaries.
Specifically, each article is prefaced with an introductory sentence (aka summary) which is
professionally written, typically by the author of the article. That dataset has 226,711 articles
divided into training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) sets.
Following much of the literature, we use the Recall-Oriented Understudy for Gisting Evaluation
(ROUGE) metric to evaluate our sequence-to-sequence abstractive summarization approach.
We will use the [Hugging Face Datasets](https://github.com/huggingface/datasets) library to download
the data we need to use for training and evaluation. This can be easily done with the
`load_dataset` function.
"""
from datasets import load_dataset
raw_datasets = load_dataset("xsum", split="train")
"""
The dataset has the following fields:
- **document**: the original BBC article to be summarized
- **summary**: the single sentence summary of the BBC article
- **id**: ID of the document-summary pair
"""
print(raw_datasets)
"""
We will now see how the data looks like:
"""
print(raw_datasets[0])
"""
For the sake of demonstrating the workflow, in this notebook we will only take
small splits (10% each) of the dataset as our training and test sets.
We can easily split the dataset using the `train_test_split` method, which expects
the split sizes (and, optionally, the name of a column to stratify by).
"""
raw_datasets = raw_datasets.train_test_split(
train_size=TRAIN_TEST_SPLIT, test_size=TRAIN_TEST_SPLIT
)
"""
## Data Pre-processing
Before we can feed those texts to our model, we need to pre-process them and get them
ready for the task. This is done by a Hugging Face Transformers `Tokenizer` which will tokenize
the inputs (including converting the tokens to their corresponding IDs in the pretrained
vocabulary) and put it in a format the model expects, as well as generate the other inputs
that model requires.
The `from_pretrained()` method expects the name of a model from the Hugging Face Model Hub. This is
exactly similar to MODEL_CHECKPOINT declared earlier and we will just pass that.
"""
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_CHECKPOINT)
"""
If you are using one of the five T5 checkpoints we have to prefix the inputs with
"summarize:" (the model can also translate and it needs the prefix to know which task it
has to perform).
"""
if MODEL_CHECKPOINT in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
prefix = "summarize: "
else:
prefix = ""
"""
We will write a simple function that helps us in the pre-processing that is compatible
with Hugging Face Datasets. To summarize, our pre-processing function should:
- Tokenize the text dataset (inputs and targets) into its corresponding token ids that
will be used for embedding look-up in T5
- Add the prefix to the tokens
- Create additional inputs for the model like the `attention_mask`, etc.
"""
def preprocess_function(examples):
inputs = [prefix + doc for doc in examples["document"]]
model_inputs = tokenizer(inputs, max_length=MAX_INPUT_LENGTH, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(
examples["summary"], max_length=MAX_TARGET_LENGTH, truncation=True
)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
"""
To apply this function on all the pairs of sentences in our dataset, we just use the
`map` method of our `dataset` object we created earlier. This will apply the function on
all the elements of all the splits in `dataset`, so our training and testing
data will be preprocessed in one single command.
"""
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
"""
## Defining the model
Now we can download the pretrained model and fine-tune it. Since our task is
sequence-to-sequence (both the input and output are text sequences), we use the
`TFAutoModelForSeq2SeqLM` class from the Hugging Face Transformers library. Like with the
tokenizer, the `from_pretrained` method will download and cache the model for us.
The `from_pretrained()` method expects the name of a model from the Hugging Face Model Hub. As
mentioned earlier, we will use the `t5-small` model checkpoint.
"""
from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
model = TFAutoModelForSeq2SeqLM.from_pretrained(MODEL_CHECKPOINT)
"""
For training Sequence to Sequence models, we need a special kind of data collator,
which will not only pad the inputs to the maximum length in the batch, but also the
labels. Thus, we use the `DataCollatorForSeq2Seq` provided by the Hugging Face Transformers
library on our dataset. The `return_tensors='tf'` ensures that we get `tf.Tensor`
objects back.
"""
from transformers import DataCollatorForSeq2Seq
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="tf")
"""
Next we define our training and testing sets with which we will train our model. Again, Hugging Face
Datasets provides us with the `to_tf_dataset` method which will help us integrate our
dataset with the `collator` defined above. The method expects certain parameters:
- **columns**: the columns which will serve as our independent variables
- **batch_size**: our batch size for training
- **shuffle**: whether we want to shuffle our dataset
- **collate_fn**: our collator function
Additionally, we also define a relatively smaller `generation_dataset` to calculate
`ROUGE` scores on the fly while training.
"""
train_dataset = tokenized_datasets["train"].to_tf_dataset(
batch_size=BATCH_SIZE,
columns=["input_ids", "attention_mask", "labels"],
shuffle=True,
collate_fn=data_collator,
)
test_dataset = tokenized_datasets["test"].to_tf_dataset(
batch_size=BATCH_SIZE,
columns=["input_ids", "attention_mask", "labels"],
shuffle=False,
collate_fn=data_collator,
)
generation_dataset = (
tokenized_datasets["test"]
.shuffle()
.select(list(range(200)))
.to_tf_dataset(
batch_size=BATCH_SIZE,
columns=["input_ids", "attention_mask", "labels"],
shuffle=False,
collate_fn=data_collator,
)
)
"""
## Building and Compiling the model
Now we will define our optimizer and compile the model. The loss calculation is handled
internally and so we need not worry about that!
"""
optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(optimizer=optimizer)
"""
## Training and Evaluating the model
To evaluate our model on-the-fly while training, we will define `metric_fn` which will
calculate the `ROUGE` score between the ground truth and the predictions.
"""
import keras_nlp
rouge_l = keras_nlp.metrics.RougeL()
def metric_fn(eval_predictions):
predictions, labels = eval_predictions
decoded_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)
for label in labels:
label[label < 0] = tokenizer.pad_token_id # Replace masked label tokens
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
result = rouge_l(decoded_labels, decoded_predictions)
# We will print only the F1 score, you can use other aggregation metrics as well
result = {"RougeL": result["f1_score"]}
return result
"""
Now we can finally start training our model!
"""
from transformers.keras_callbacks import KerasMetricCallback
metric_callback = KerasMetricCallback(
metric_fn, eval_dataset=generation_dataset, predict_with_generate=True
)
callbacks = [metric_callback]
# For now we will use our test set as our validation_data
model.fit(
train_dataset, validation_data=test_dataset, epochs=MAX_EPOCHS, callbacks=callbacks
)
"""
For best results, we recommend training the model for at least 5 epochs on the entire
training dataset!
"""
"""
## Inference
Now we will run inference with the model we trained on an arbitrary article. To do so,
we will use the `pipeline` method from Hugging Face Transformers. Hugging Face Transformers provides
us with a variety of pipelines to choose from. For our task, we use the `summarization`
pipeline.
The `pipeline` method takes in the trained model and tokenizer as arguments. The
`framework="tf"` argument ensures that you are passing a model that was trained with TF.
"""
from transformers import pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
summarizer(
raw_datasets["test"][0]["document"],
min_length=MIN_TARGET_LENGTH,
max_length=MAX_TARGET_LENGTH,
)
"""
Now you can push this model to Hugging Face Model Hub and also share it with all your friends,
family, favorite pets: they can all load it with the identifier
`"your-username/the-name-you-picked"` so for instance:
```python
model.push_to_hub("transformers-qa", organization="keras-io")
tokenizer.push_to_hub("transformers-qa", organization="keras-io")
```
And after you push your model this is how you can load it in the future!
```python
from transformers import TFAutoModelForSeq2SeqLM
model = TFAutoModelForSeq2SeqLM.from_pretrained("your-username/my-awesome-model")
```
"""
| keras-io/examples/nlp/t5_hf_summarization.py/0 | {
"file_path": "keras-io/examples/nlp/t5_hf_summarization.py",
"repo_id": "keras-io",
"token_count": 3699
} | 94 |
# Classification with Gated Residual and Variable Selection Networks
**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)<br>
**Date created:** 2021/02/10<br>
**Last modified:** 2021/02/10<br>
**Description:** Using Gated Residual and Variable Selection Networks for income level prediction.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/structured_data/ipynb/classification_with_grn_and_vsn.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/structured_data/classification_with_grn_and_vsn.py)
---
## Introduction
This example demonstrates the use of Gated
Residual Networks (GRN) and Variable Selection Networks (VSN), proposed by
Bryan Lim et al. in
[Temporal Fusion Transformers (TFT) for Interpretable Multi-horizon Time Series Forecasting](https://arxiv.org/abs/1912.09363),
for structured data classification. GRNs give the flexibility to the model to apply
non-linear processing only where needed. VSNs allow the model to softly remove any
unnecessary noisy inputs which could negatively impact performance.
Together, those techniques help improve the learning capacity of deep neural
network models.
Note that this example implements only the GRN and VSN components described in
the paper, rather than the whole TFT model, as GRN and VSN can be useful on
their own for structured data learning tasks.
To run the code you need to use TensorFlow 2.3 or higher.
---
## The dataset
This example uses the
[United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29)
provided by the
[UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).
The task is binary classification to determine whether a person makes over 50K a year.
The dataset includes ~300K instances with 41 input features: 7 numerical features
and 34 categorical features.
---
## Setup
```python
import math
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```
---
## Prepare the data
First we load the data from the UCI Machine Learning Repository into a Pandas DataFrame.
```python
# Column names.
CSV_HEADER = [
"age",
"class_of_worker",
"detailed_industry_recode",
"detailed_occupation_recode",
"education",
"wage_per_hour",
"enroll_in_edu_inst_last_wk",
"marital_stat",
"major_industry_code",
"major_occupation_code",
"race",
"hispanic_origin",
"sex",
"member_of_a_labor_union",
"reason_for_unemployment",
"full_or_part_time_employment_stat",
"capital_gains",
"capital_losses",
"dividends_from_stocks",
"tax_filer_stat",
"region_of_previous_residence",
"state_of_previous_residence",
"detailed_household_and_family_stat",
"detailed_household_summary_in_household",
"instance_weight",
"migration_code-change_in_msa",
"migration_code-change_in_reg",
"migration_code-move_within_reg",
"live_in_this_house_1_year_ago",
"migration_prev_res_in_sunbelt",
"num_persons_worked_for_employer",
"family_members_under_18",
"country_of_birth_father",
"country_of_birth_mother",
"country_of_birth_self",
"citizenship",
"own_business_or_self_employed",
"fill_inc_questionnaire_for_veterans_admin",
"veterans_benefits",
"weeks_worked_in_year",
"year",
"income_level",
]
data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.data.gz"
data = pd.read_csv(data_url, header=None, names=CSV_HEADER)
test_data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.test.gz"
test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER)
print(f"Data shape: {data.shape}")
print(f"Test data shape: {test_data.shape}")
```
<div class="k-default-codeblock">
```
Data shape: (199523, 42)
Test data shape: (99762, 42)
```
</div>
We convert the target column from string to integer.
```python
data["income_level"] = data["income_level"].apply(
lambda x: 0 if x == " - 50000." else 1
)
test_data["income_level"] = test_data["income_level"].apply(
lambda x: 0 if x == " - 50000." else 1
)
```
Then, we split the dataset into train and validation sets.
```python
random_selection = np.random.rand(len(data.index)) <= 0.85
train_data = data[random_selection]
valid_data = data[~random_selection]
```
Finally we store the train and test data splits locally to CSV files.
```python
train_data_file = "train_data.csv"
valid_data_file = "valid_data.csv"
test_data_file = "test_data.csv"
train_data.to_csv(train_data_file, index=False, header=False)
valid_data.to_csv(valid_data_file, index=False, header=False)
test_data.to_csv(test_data_file, index=False, header=False)
```
---
## Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and
parsing the data into input features, and encoding the input features with respect
to their types.
```python
# Target feature name.
TARGET_FEATURE_NAME = "income_level"
# Weight column name.
WEIGHT_COLUMN_NAME = "instance_weight"
# Numeric feature names.
NUMERIC_FEATURE_NAMES = [
"age",
"wage_per_hour",
"capital_gains",
"capital_losses",
"dividends_from_stocks",
"num_persons_worked_for_employer",
"weeks_worked_in_year",
]
# Categorical features and their vocabulary lists.
# Note that we cast all categorical feature values to strings to make
# sure that they are treated consistently as strings.
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
feature_name: sorted([str(value) for value in list(data[feature_name].unique())])
for feature_name in CSV_HEADER
if feature_name
not in list(NUMERIC_FEATURE_NAMES + [WEIGHT_COLUMN_NAME, TARGET_FEATURE_NAME])
}
# All features names.
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + list(
CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()
)
# Feature default values.
COLUMN_DEFAULTS = [
[0.0]
if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME]
else ["NA"]
for feature_name in CSV_HEADER
]
```
---
## Create a `tf.data.Dataset` for training and evaluation
We create an input function to read and parse the file, and convert features and
labels into a [`tf.data.Dataset`](https://www.tensorflow.org/guide/datasets) for
training and evaluation.
```python
from tensorflow.keras.layers import StringLookup
def process(features, target):
for feature_name in features:
if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY:
# Cast categorical feature values to string.
features[feature_name] = tf.cast(features[feature_name], tf.dtypes.string)
# Get the instance weight.
weight = features.pop(WEIGHT_COLUMN_NAME)
return features, target, weight
def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128):
dataset = tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
column_defaults=COLUMN_DEFAULTS,
label_name=TARGET_FEATURE_NAME,
num_epochs=1,
header=False,
shuffle=shuffle,
).map(process)
return dataset
```
---
## Create model inputs
```python
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.float32
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.string
)
return inputs
```
---
## Encode input features
For categorical features, we encode them using `layers.Embedding` using the
`encoding_size` as the embedding dimensions. For the numerical features,
we apply linear transformation using `layers.Dense` to project each feature into
`encoding_size`-dimensional vector. Thus, all the encoded features will have the
same dimensionality.
```python
def encode_inputs(inputs, encoding_size):
encoded_features = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
# Create a lookup to convert a string values to an integer indices.
# Since we are not using a mask token nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
index = StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=0
)
# Convert the string input values into integer indices.
value_index = index(inputs[feature_name])
# Create an embedding layer with the specified dimensions
            embedding_encoder = layers.Embedding(
                input_dim=len(vocabulary), output_dim=encoding_size
            )
            # Convert the index values to embedding representations.
            encoded_feature = embedding_encoder(value_index)
else:
# Project the numeric feature to encoding_size using linear transformation.
encoded_feature = tf.expand_dims(inputs[feature_name], -1)
encoded_feature = layers.Dense(units=encoding_size)(encoded_feature)
encoded_features.append(encoded_feature)
return encoded_features
```
---
## Implement the Gated Linear Unit
[Gated Linear Units (GLUs)](https://arxiv.org/abs/1612.08083) provide the
flexibility to suppress input that are not relevant for a given task.
```python
class GatedLinearUnit(layers.Layer):
def __init__(self, units):
super().__init__()
self.linear = layers.Dense(units)
self.sigmoid = layers.Dense(units, activation="sigmoid")
def call(self, inputs):
return self.linear(inputs) * self.sigmoid(inputs)
```
---
## Implement the Gated Residual Network
The Gated Residual Network (GRN) works as follows:
1. Applies the nonlinear ELU transformation to the inputs.
2. Applies linear transformation followed by dropout.
3. Applies GLU and adds the original inputs to the output of the GLU to perform skip
(residual) connection.
4. Applies layer normalization and produces the output.
```python
class GatedResidualNetwork(layers.Layer):
def __init__(self, units, dropout_rate):
super().__init__()
self.units = units
self.elu_dense = layers.Dense(units, activation="elu")
self.linear_dense = layers.Dense(units)
self.dropout = layers.Dropout(dropout_rate)
self.gated_linear_unit = GatedLinearUnit(units)
self.layer_norm = layers.LayerNormalization()
self.project = layers.Dense(units)
def call(self, inputs):
x = self.elu_dense(inputs)
x = self.linear_dense(x)
x = self.dropout(x)
if inputs.shape[-1] != self.units:
inputs = self.project(inputs)
x = inputs + self.gated_linear_unit(x)
x = self.layer_norm(x)
return x
```
---
## Implement the Variable Selection Network
The Variable Selection Network (VSN) works as follows:
1. Applies a GRN to each feature individually.
2. Applies a GRN on the concatenation of all the features, followed by a softmax to
produce feature weights.
3. Produces a weighted sum of the output of the individual GRN.
Note that the output of the VSN is [batch_size, encoding_size], regardless of the
number of the input features.
```python
class VariableSelection(layers.Layer):
def __init__(self, num_features, units, dropout_rate):
super().__init__()
self.grns = list()
# Create a GRN for each feature independently
for idx in range(num_features):
grn = GatedResidualNetwork(units, dropout_rate)
self.grns.append(grn)
# Create a GRN for the concatenation of all the features
self.grn_concat = GatedResidualNetwork(units, dropout_rate)
self.softmax = layers.Dense(units=num_features, activation="softmax")
def call(self, inputs):
v = layers.concatenate(inputs)
v = self.grn_concat(v)
v = tf.expand_dims(self.softmax(v), axis=-1)
x = []
for idx, input in enumerate(inputs):
x.append(self.grns[idx](input))
x = tf.stack(x, axis=1)
outputs = tf.squeeze(tf.matmul(v, x, transpose_a=True), axis=1)
return outputs
```
---
## Create Gated Residual and Variable Selection Networks model
```python
def create_model(encoding_size):
inputs = create_model_inputs()
feature_list = encode_inputs(inputs, encoding_size)
num_features = len(feature_list)
features = VariableSelection(num_features, encoding_size, dropout_rate)(
feature_list
)
outputs = layers.Dense(units=1, activation="sigmoid")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
```
---
## Compile, train, and evaluate the model
```python
learning_rate = 0.001
dropout_rate = 0.15
batch_size = 265
num_epochs = 20
encoding_size = 16
model = create_model(encoding_size)
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=keras.losses.BinaryCrossentropy(),
metrics=[keras.metrics.BinaryAccuracy(name="accuracy")],
)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor="val_loss", patience=5, restore_best_weights=True
)
print("Start training the model...")
train_dataset = get_dataset_from_csv(
train_data_file, shuffle=True, batch_size=batch_size
)
valid_dataset = get_dataset_from_csv(valid_data_file, batch_size=batch_size)
model.fit(
train_dataset,
epochs=num_epochs,
validation_data=valid_dataset,
callbacks=[early_stopping],
)
print("Model training finished.")
print("Evaluating model performance...")
test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size)
_, accuracy = model.evaluate(test_dataset)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
```
<div class="k-default-codeblock">
```
Start training the model...
Epoch 1/20
640/640 [==============================] - 31s 29ms/step - loss: 253.8570 - accuracy: 0.9468 - val_loss: 229.4024 - val_accuracy: 0.9495
Epoch 2/20
640/640 [==============================] - 17s 25ms/step - loss: 229.9359 - accuracy: 0.9497 - val_loss: 223.4970 - val_accuracy: 0.9505
Epoch 3/20
640/640 [==============================] - 17s 25ms/step - loss: 225.5644 - accuracy: 0.9504 - val_loss: 222.0078 - val_accuracy: 0.9515
Epoch 4/20
640/640 [==============================] - 16s 25ms/step - loss: 222.2086 - accuracy: 0.9512 - val_loss: 218.2707 - val_accuracy: 0.9522
Epoch 5/20
640/640 [==============================] - 17s 25ms/step - loss: 218.0359 - accuracy: 0.9523 - val_loss: 217.3721 - val_accuracy: 0.9528
Epoch 6/20
640/640 [==============================] - 17s 26ms/step - loss: 214.8348 - accuracy: 0.9529 - val_loss: 210.3546 - val_accuracy: 0.9543
Epoch 7/20
640/640 [==============================] - 17s 26ms/step - loss: 213.0984 - accuracy: 0.9534 - val_loss: 210.2881 - val_accuracy: 0.9544
Epoch 8/20
640/640 [==============================] - 17s 26ms/step - loss: 211.6379 - accuracy: 0.9538 - val_loss: 209.3327 - val_accuracy: 0.9550
Epoch 9/20
640/640 [==============================] - 17s 26ms/step - loss: 210.7283 - accuracy: 0.9541 - val_loss: 209.5862 - val_accuracy: 0.9543
Epoch 10/20
640/640 [==============================] - 17s 26ms/step - loss: 209.9062 - accuracy: 0.9538 - val_loss: 210.1662 - val_accuracy: 0.9537
Epoch 11/20
640/640 [==============================] - 16s 25ms/step - loss: 209.6323 - accuracy: 0.9540 - val_loss: 207.9528 - val_accuracy: 0.9552
Epoch 12/20
640/640 [==============================] - 16s 25ms/step - loss: 208.7843 - accuracy: 0.9544 - val_loss: 207.5303 - val_accuracy: 0.9550
Epoch 13/20
640/640 [==============================] - 21s 32ms/step - loss: 207.9983 - accuracy: 0.9544 - val_loss: 206.8800 - val_accuracy: 0.9557
Epoch 14/20
640/640 [==============================] - 18s 28ms/step - loss: 207.2104 - accuracy: 0.9544 - val_loss: 216.0859 - val_accuracy: 0.9535
Epoch 15/20
640/640 [==============================] - 16s 25ms/step - loss: 207.2254 - accuracy: 0.9543 - val_loss: 206.7765 - val_accuracy: 0.9555
Epoch 16/20
640/640 [==============================] - 16s 25ms/step - loss: 206.6704 - accuracy: 0.9546 - val_loss: 206.7508 - val_accuracy: 0.9560
Epoch 17/20
640/640 [==============================] - 19s 30ms/step - loss: 206.1322 - accuracy: 0.9545 - val_loss: 205.9638 - val_accuracy: 0.9562
Epoch 18/20
640/640 [==============================] - 21s 31ms/step - loss: 205.4764 - accuracy: 0.9545 - val_loss: 206.0258 - val_accuracy: 0.9561
Epoch 19/20
640/640 [==============================] - 16s 25ms/step - loss: 204.3614 - accuracy: 0.9550 - val_loss: 207.1424 - val_accuracy: 0.9560
Epoch 20/20
640/640 [==============================] - 16s 25ms/step - loss: 203.9543 - accuracy: 0.9550 - val_loss: 206.4697 - val_accuracy: 0.9554
Model training finished.
Evaluating model performance...
377/377 [==============================] - 4s 11ms/step - loss: 204.5099 - accuracy: 0.9547
Test accuracy: 95.47%
```
</div>
You should achieve more than 95% accuracy on the test set.
To increase the learning capacity of the model, you can try increasing the
`encoding_size` value, or stacking multiple GRN layers on top of the VSN layer.
This may also require increasing the `dropout_rate` value to avoid overfitting.
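As a sketch of the second suggestion (a hypothetical variant of `create_model()` above, not
something evaluated in this example):
```python
def create_deeper_model(encoding_size, num_extra_grn_layers=2):
    inputs = create_model_inputs()
    feature_list = encode_inputs(inputs, encoding_size)
    features = VariableSelection(len(feature_list), encoding_size, dropout_rate)(
        feature_list
    )
    # Stack extra GRN blocks on top of the VSN output.
    for _ in range(num_extra_grn_layers):
        features = GatedResidualNetwork(encoding_size, dropout_rate)(features)
    outputs = layers.Dense(units=1, activation="sigmoid")(features)
    return keras.Model(inputs=inputs, outputs=outputs)
```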
**Example available on HuggingFace**
| Trained Model | Demo |
| :--: | :--: |
| [](https://huggingface.co/keras-io/structured-data-classification-grn-vsn) | [](https://huggingface.co/spaces/keras-io/structured-data-classification-grn-vsn) |
| keras-io/examples/structured_data/md/classification_with_grn_and_vsn.md/0 | {
"file_path": "keras-io/examples/structured_data/md/classification_with_grn_and_vsn.md",
"repo_id": "keras-io",
"token_count": 6884
} | 95 |
"""
Title: Electroencephalogram Signal Classification for action identification
Author: [Suvaditya Mukherjee](https://github.com/suvadityamuk)
Date created: 2022/11/03
Last modified: 2022/11/05
Description: Training a Convolutional model to classify EEG signals produced by exposure to certain stimuli.
Accelerator: GPU
"""
"""
## Introduction
The following example explores how we can build a Convolution-based Neural Network to
perform classification on Electroencephalogram signals captured when subjects were
exposed to different stimuli.
We train a model from scratch since such signal-classification models are fairly scarce
in pre-trained format.
The data we use is sourced from the UC Berkeley-Biosense Lab, where it was collected
from 15 subjects at the same time.
Our process is as follows:
- Load the [UC Berkeley-Biosense Synchronized Brainwave Dataset](https://www.kaggle.com/datasets/berkeley-biosense/synchronized-brainwave-dataset)
- Visualize random samples from the data
- Pre-process, collate and scale the data to finally make a `tf.data.Dataset`
- Prepare class weights in order to tackle major imbalances
- Create a Conv1D and Dense-based model to perform classification
- Define callbacks and hyperparameters
- Train the model
- Plot metrics from History and perform evaluation
This example needs the following external dependencies (Gdown, Scikit-learn, Pandas,
Numpy, Matplotlib). You can install them via the following commands.
Gdown is an external package used to download large files from Google Drive. To know
more, you can refer to its [PyPi page here](https://pypi.org/project/gdown)
"""
"""
## Setup and Data Downloads
First, let's install our dependencies:
"""
"""shell
pip install gdown -q
pip install scikit-learn -q
pip install pandas -q
pip install numpy -q
pip install matplotlib -q
"""
"""
Next, let's download our dataset.
The gdown package makes it easy to download the data from Google Drive:
"""
"""shell
gdown 1V5B7Bt6aJm0UHbR7cRKBEK8jx7lYPVuX
# gdown will download eeg-data.csv onto the local drive for use. Total size of
# eeg-data.csv is 105.7 MB
"""
import pandas as pd
import matplotlib.pyplot as plt
import json
import numpy as np
import keras
from keras import layers
import tensorflow as tf
from sklearn import preprocessing, model_selection
import random
QUALITY_THRESHOLD = 128
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = BATCH_SIZE * 2
"""
## Read data from `eeg-data.csv`
We use the Pandas library to read the `eeg-data.csv` file and display the first 5 rows
using the `.head()` command
"""
eeg = pd.read_csv("eeg-data.csv")
"""
We remove unlabeled samples from our dataset as they do not contribute to the model. We
also perform a `.drop()` operation on the columns that are not required for training data
preparation
"""
unlabeled_eeg = eeg[eeg["label"] == "unlabeled"]
eeg = eeg.loc[eeg["label"] != "unlabeled"]
eeg = eeg.loc[eeg["label"] != "everyone paired"]
eeg.drop(
[
"indra_time",
"Unnamed: 0",
"browser_latency",
"reading_time",
"attention_esense",
"meditation_esense",
"updatedAt",
"createdAt",
],
axis=1,
inplace=True,
)
eeg.reset_index(drop=True, inplace=True)
eeg.head()
"""
In the data, each recorded sample is given a signal-quality score based on how
well-calibrated the sensor was (0 being best, 200 being worst). We keep only the samples
whose score is below an arbitrary cutoff of 128.
"""
def convert_string_data_to_values(value_string):
str_list = json.loads(value_string)
return str_list
eeg["raw_values"] = eeg["raw_values"].apply(convert_string_data_to_values)
eeg = eeg.loc[eeg["signal_quality"] < QUALITY_THRESHOLD]
eeg.head()
"""
## Visualize one random sample from the data
"""
"""
We visualize one sample from the data to understand what the stimulus-induced signal
looks like.
"""
def view_eeg_plot(idx):
data = eeg.loc[idx, "raw_values"]
plt.plot(data)
plt.title(f"Sample random plot")
plt.show()
view_eeg_plot(7)
"""
## Pre-process and collate data
"""
"""
There are a total of 67 different labels present in the data, many of which are numbered
sub-labels of the same stimulus. We collate these under a single label as per their
numbering and replace them in the data itself. Following this process, we perform simple
label encoding to convert the labels to an integer format.
"""
print("Before replacing labels")
print(eeg["label"].unique(), "\n")
print(len(eeg["label"].unique()), "\n")
eeg.replace(
{
"label": {
"blink1": "blink",
"blink2": "blink",
"blink3": "blink",
"blink4": "blink",
"blink5": "blink",
"math1": "math",
"math2": "math",
"math3": "math",
"math4": "math",
"math5": "math",
"math6": "math",
"math7": "math",
"math8": "math",
"math9": "math",
"math10": "math",
"math11": "math",
"math12": "math",
"thinkOfItems-ver1": "thinkOfItems",
"thinkOfItems-ver2": "thinkOfItems",
"video-ver1": "video",
"video-ver2": "video",
"thinkOfItemsInstruction-ver1": "thinkOfItemsInstruction",
"thinkOfItemsInstruction-ver2": "thinkOfItemsInstruction",
"colorRound1-1": "colorRound1",
"colorRound1-2": "colorRound1",
"colorRound1-3": "colorRound1",
"colorRound1-4": "colorRound1",
"colorRound1-5": "colorRound1",
"colorRound1-6": "colorRound1",
"colorRound2-1": "colorRound2",
"colorRound2-2": "colorRound2",
"colorRound2-3": "colorRound2",
"colorRound2-4": "colorRound2",
"colorRound2-5": "colorRound2",
"colorRound2-6": "colorRound2",
"colorRound3-1": "colorRound3",
"colorRound3-2": "colorRound3",
"colorRound3-3": "colorRound3",
"colorRound3-4": "colorRound3",
"colorRound3-5": "colorRound3",
"colorRound3-6": "colorRound3",
"colorRound4-1": "colorRound4",
"colorRound4-2": "colorRound4",
"colorRound4-3": "colorRound4",
"colorRound4-4": "colorRound4",
"colorRound4-5": "colorRound4",
"colorRound4-6": "colorRound4",
"colorRound5-1": "colorRound5",
"colorRound5-2": "colorRound5",
"colorRound5-3": "colorRound5",
"colorRound5-4": "colorRound5",
"colorRound5-5": "colorRound5",
"colorRound5-6": "colorRound5",
"colorInstruction1": "colorInstruction",
"colorInstruction2": "colorInstruction",
"readyRound1": "readyRound",
"readyRound2": "readyRound",
"readyRound3": "readyRound",
"readyRound4": "readyRound",
"readyRound5": "readyRound",
"colorRound1": "colorRound",
"colorRound2": "colorRound",
"colorRound3": "colorRound",
"colorRound4": "colorRound",
"colorRound5": "colorRound",
}
},
inplace=True,
)
print("After replacing labels")
print(eeg["label"].unique())
print(len(eeg["label"].unique()))
le = preprocessing.LabelEncoder() # Generates a look-up table
le.fit(eeg["label"])
eeg["label"] = le.transform(eeg["label"])
"""
We extract the number of unique classes present in the data
"""
num_classes = len(eeg["label"].unique())
print(num_classes)
"""
We now visualize the number of samples present in each class using a Bar plot.
"""
plt.bar(range(num_classes), eeg["label"].value_counts())
plt.title("Number of samples per class")
plt.show()
"""
## Scale and split data
"""
"""
We perform a simple Min-Max scaling to bring the value-range between 0 and 1. We do not
use Standard Scaling as the data does not follow a Gaussian distribution.
"""
scaler = preprocessing.MinMaxScaler()
series_list = [
scaler.fit_transform(np.asarray(i).reshape(-1, 1)) for i in eeg["raw_values"]
]
labels_list = [i for i in eeg["label"]]
"""
We now create a Train-test split with a 15% holdout set. Following this, we reshape the
data to create a sequence of length 512. We also convert the labels from their current
label-encoded form to a one-hot encoding to enable use of several different
`keras.metrics` functions.
"""
x_train, x_test, y_train, y_test = model_selection.train_test_split(
series_list, labels_list, test_size=0.15, random_state=42, shuffle=True
)
print(
f"Length of x_train : {len(x_train)}\nLength of x_test : {len(x_test)}\nLength of y_train : {len(y_train)}\nLength of y_test : {len(y_test)}"
)
x_train = np.asarray(x_train).astype(np.float32).reshape(-1, 512, 1)
y_train = np.asarray(y_train).astype(np.float32).reshape(-1, 1)
y_train = keras.utils.to_categorical(y_train)
x_test = np.asarray(x_test).astype(np.float32).reshape(-1, 512, 1)
y_test = np.asarray(y_test).astype(np.float32).reshape(-1, 1)
y_test = keras.utils.to_categorical(y_test)
"""
## Prepare `tf.data.Dataset`
"""
"""
We now create a `tf.data.Dataset` from this data to prepare it for training. We also
shuffle and batch the data for use later.
"""
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
"""
## Make Class Weights using Naive method
"""
"""
As we can see from the plot of the number of samples per class, the dataset is imbalanced.
Hence, we **calculate weights for each class** to make sure that the model is trained in
a fair manner, without preference to any specific class due to a greater number of samples.
We use a naive method to calculate these weights, finding an **inverse proportion** of
each class and using that as the weight.
"""
vals_dict = {}
for i in eeg["label"]:
if i in vals_dict.keys():
vals_dict[i] += 1
else:
vals_dict[i] = 1
total = sum(vals_dict.values())
# Formula used - Naive method where
# weight = 1 - (no. of samples present / total no. of samples)
# So the more samples a class has, the lower its weight
weight_dict = {k: (1 - (v / total)) for k, v in vals_dict.items()}
print(weight_dict)
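"""
As an optional sanity check (not used for training below, and purely a suggestion),
scikit-learn ships a ready-made alternative to the naive formula above: its `"balanced"`
heuristic weighs each class by `total / (num_classes * class_count)`. Comparing it with
`weight_dict` can help confirm that the naive weights point in the same direction.
"""

from sklearn.utils.class_weight import compute_class_weight

balanced_classes = np.unique(eeg["label"])
balanced_weights = compute_class_weight(
    class_weight="balanced", classes=balanced_classes, y=eeg["label"]
)
print(dict(zip(balanced_classes, balanced_weights)))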
"""
## Define simple function to plot all the metrics present in a `keras.callbacks.History`
object
"""
def plot_history_metrics(history: keras.callbacks.History):
total_plots = len(history.history)
cols = total_plots // 2
rows = total_plots // cols
if total_plots % cols != 0:
rows += 1
pos = range(1, total_plots + 1)
plt.figure(figsize=(15, 10))
for i, (key, value) in enumerate(history.history.items()):
plt.subplot(rows, cols, pos[i])
plt.plot(range(len(value)), value)
plt.title(str(key))
plt.show()
"""
## Define function to generate Convolutional model
"""
def create_model():
input_layer = keras.Input(shape=(512, 1))
x = layers.Conv1D(
filters=32, kernel_size=3, strides=2, activation="relu", padding="same"
)(input_layer)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=64, kernel_size=3, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=128, kernel_size=5, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=256, kernel_size=5, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=512, kernel_size=7, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=1024,
kernel_size=7,
strides=2,
activation="relu",
padding="same",
)(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.2)(x)
x = layers.Flatten()(x)
x = layers.Dense(4096, activation="relu")(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(
2048, activation="relu", kernel_regularizer=keras.regularizers.L2()
)(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(
1024, activation="relu", kernel_regularizer=keras.regularizers.L2()
)(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(
128, activation="relu", kernel_regularizer=keras.regularizers.L2()
)(x)
output_layer = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs=input_layer, outputs=output_layer)
"""
## Get Model summary
"""
conv_model = create_model()
conv_model.summary()
"""
## Define callbacks, optimizer, loss and metrics
"""
"""
We set the number of epochs to 30 after extensive experimentation; Early-Stopping
analysis also indicated that this was the optimal number.
We define a Model Checkpoint callback to make sure that we only keep the best model
weights.
We also define a ReduceLROnPlateau callback, as several runs during experimentation
showed the loss stagnating after a certain point, while a direct LRScheduler was found
to be too aggressive in its decay.
"""
epochs = 30
callbacks = [
keras.callbacks.ModelCheckpoint(
"best_model.keras", save_best_only=True, monitor="loss"
),
keras.callbacks.ReduceLROnPlateau(
monitor="val_top_k_categorical_accuracy",
factor=0.2,
patience=2,
min_lr=0.000001,
),
]
optimizer = keras.optimizers.Adam(amsgrad=True, learning_rate=0.001)
loss = keras.losses.CategoricalCrossentropy()
"""
## Compile model and call `model.fit()`
"""
"""
We use the `Adam` optimizer since it is a strong default choice for preliminary
training, and it also performed best in our experiments.
We use `CategoricalCrossentropy` as the loss since our labels are in a one-hot-encoded form.
We define the `TopKCategoricalAccuracy(k=3)`, `AUC`, `Precision` and `Recall` metrics to
further aid in understanding the model.
"""
conv_model.compile(
optimizer=optimizer,
loss=loss,
metrics=[
keras.metrics.TopKCategoricalAccuracy(k=3),
keras.metrics.AUC(),
keras.metrics.Precision(),
keras.metrics.Recall(),
],
)
conv_model_history = conv_model.fit(
train_dataset,
epochs=epochs,
callbacks=callbacks,
validation_data=test_dataset,
class_weight=weight_dict,
)
"""
## Visualize model metrics during training
"""
"""
We use the function defined above to see model metrics during training.
"""
plot_history_metrics(conv_model_history)
"""
## Evaluate model on test data
"""
loss, accuracy, auc, precision, recall = conv_model.evaluate(test_dataset)
print(f"Loss : {loss}")
print(f"Top 3 Categorical Accuracy : {accuracy}")
print(f"Area under the Curve (ROC) : {auc}")
print(f"Precision : {precision}")
print(f"Recall : {recall}")
def view_evaluated_eeg_plots(model):
start_index = random.randint(10, len(eeg))
end_index = start_index + 11
data = eeg.loc[start_index:end_index, "raw_values"]
data_array = [scaler.fit_transform(np.asarray(i).reshape(-1, 1)) for i in data]
data_array = [np.asarray(data_array).astype(np.float32).reshape(-1, 512, 1)]
original_labels = eeg.loc[start_index:end_index, "label"]
predicted_labels = np.argmax(model.predict(data_array, verbose=0), axis=1)
original_labels = [
le.inverse_transform(np.array(label).reshape(-1))[0]
for label in original_labels
]
predicted_labels = [
le.inverse_transform(np.array(label).reshape(-1))[0]
for label in predicted_labels
]
total_plots = 12
cols = total_plots // 3
rows = total_plots // cols
if total_plots % cols != 0:
rows += 1
pos = range(1, total_plots + 1)
fig = plt.figure(figsize=(20, 10))
for i, (plot_data, og_label, pred_label) in enumerate(
zip(data, original_labels, predicted_labels)
):
plt.subplot(rows, cols, pos[i])
plt.plot(plot_data)
plt.title(f"Actual Label : {og_label}\nPredicted Label : {pred_label}")
fig.subplots_adjust(hspace=0.5)
plt.show()
view_evaluated_eeg_plots(conv_model)
| keras-io/examples/timeseries/eeg_signal_classification.py/0 | {
"file_path": "keras-io/examples/timeseries/eeg_signal_classification.py",
"repo_id": "keras-io",
"token_count": 6517
} | 96 |
<jupyter_start><jupyter_text>Timeseries forecasting for weather prediction**Authors:** [Prabhanshu Attri](https://prabhanshu.com/github), [Yashika Sharma](https://github.com/yashika51), [Kristi Takach](https://github.com/ktakattack), [Falak Shah](https://github.com/falaktheoptimist)**Date created:** 2020/06/23**Last modified:** 2023/11/22**Description:** This notebook demonstrates how to do timeseries forecasting using a LSTM model. Setup<jupyter_code>import pandas as pd
import matplotlib.pyplot as plt
import keras<jupyter_output><empty_output><jupyter_text>Climate Data Time-SeriesWe will be using Jena Climate dataset recorded by the[Max Planck Institute for Biogeochemistry](https://www.bgc-jena.mpg.de/wetter/).The dataset consists of 14 features such as temperature, pressure, humidity etc, recorded once per10 minutes.**Location**: Weather Station, Max Planck Institute for Biogeochemistryin Jena, Germany**Time-frame Considered**: Jan 10, 2009 - December 31, 2016The table below shows the column names, their value formats, and their description.Index| Features |Format |Description-----|---------------|-------------------|-----------------------1 |Date Time |01.01.2009 00:10:00|Date-time reference2 |p (mbar) |996.52 |The pascal SI derived unit of pressure used to quantify internal pressure. Meteorological reports typically state atmospheric pressure in millibars.3 |T (degC) |-8.02 |Temperature in Celsius4 |Tpot (K) |265.4 |Temperature in Kelvin5 |Tdew (degC) |-8.9 |Temperature in Celsius relative to humidity. Dew Point is a measure of the absolute amount of water in the air, the DP is the temperature at which the air cannot hold all the moisture in it and water condenses.6 |rh (%) |93.3 |Relative Humidity is a measure of how saturated the air is with water vapor, the %RH determines the amount of water contained within collection objects.7 |VPmax (mbar) |3.33 |Saturation vapor pressure8 |VPact (mbar) |3.11 |Vapor pressure9 |VPdef (mbar) |0.22 |Vapor pressure deficit10 |sh (g/kg) |1.94 |Specific humidity11 |H2OC (mmol/mol)|3.12 |Water vapor concentration12 |rho (g/m ** 3) |1307.75 |Airtight13 |wv (m/s) |1.03 |Wind speed14 |max. wv (m/s) |1.75 |Maximum wind speed15 |wd (deg) |152.3 |Wind direction in degrees<jupyter_code>from zipfile import ZipFile
uri = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip"
zip_path = keras.utils.get_file(origin=uri, fname="jena_climate_2009_2016.csv.zip")
zip_file = ZipFile(zip_path)
zip_file.extractall()
csv_path = "jena_climate_2009_2016.csv"
df = pd.read_csv(csv_path)<jupyter_output><empty_output><jupyter_text>Raw Data VisualizationTo give us a sense of the data we are working with, each feature has been plotted below.This shows the distinct pattern of each feature over the time period from 2009 to 2016.It also shows where anomalies are present, which will be addressed during normalization.<jupyter_code>titles = [
"Pressure",
"Temperature",
"Temperature in Kelvin",
"Temperature (dew point)",
"Relative Humidity",
"Saturation vapor pressure",
"Vapor pressure",
"Vapor pressure deficit",
"Specific humidity",
"Water vapor concentration",
"Airtight",
"Wind speed",
"Maximum wind speed",
"Wind direction in degrees",
]
feature_keys = [
"p (mbar)",
"T (degC)",
"Tpot (K)",
"Tdew (degC)",
"rh (%)",
"VPmax (mbar)",
"VPact (mbar)",
"VPdef (mbar)",
"sh (g/kg)",
"H2OC (mmol/mol)",
"rho (g/m**3)",
"wv (m/s)",
"max. wv (m/s)",
"wd (deg)",
]
colors = [
"blue",
"orange",
"green",
"red",
"purple",
"brown",
"pink",
"gray",
"olive",
"cyan",
]
date_time_key = "Date Time"
def show_raw_visualization(data):
time_data = data[date_time_key]
fig, axes = plt.subplots(
nrows=7, ncols=2, figsize=(15, 20), dpi=80, facecolor="w", edgecolor="k"
)
for i in range(len(feature_keys)):
key = feature_keys[i]
c = colors[i % (len(colors))]
t_data = data[key]
t_data.index = time_data
t_data.head()
ax = t_data.plot(
ax=axes[i // 2, i % 2],
color=c,
title="{} - {}".format(titles[i], key),
rot=25,
)
ax.legend([titles[i]])
plt.tight_layout()
show_raw_visualization(df)<jupyter_output><empty_output><jupyter_text>Data PreprocessingHere we are picking ~300,000 data points for training. Observation is recorded every10 mins, that means 6 times per hour. We will resample one point per hour since nodrastic change is expected within 60 minutes. We do this via the `sampling_rate`argument in `timeseries_dataset_from_array` utility.We are tracking data from past 720 timestamps (720/6=120 hours). This data will beused to predict the temperature after 72 timestamps (72/6=12 hours).Since every feature has values withvarying ranges, we do normalization to confine feature values to a range of `[0, 1]` beforetraining a neural network.We do this by subtracting the mean and dividing by the standard deviation of each feature.71.5 % of the data will be used to train the model, i.e. 300,693 rows. `split_fraction` canbe changed to alter this percentage.The model is shown data for first 5 days i.e. 720 observations, that are sampled everyhour. The temperature after 72 (12 hours * 6 observation per hour) observation will beused as a label.<jupyter_code>split_fraction = 0.715
train_split = int(split_fraction * int(df.shape[0]))
step = 6
past = 720
future = 72
learning_rate = 0.001
batch_size = 256
epochs = 10
def normalize(data, train_split):
data_mean = data[:train_split].mean(axis=0)
data_std = data[:train_split].std(axis=0)
return (data - data_mean) / data_std<jupyter_output><empty_output><jupyter_text>We can see from the correlation heatmap, few parameters like Relative Humidity andSpecific Humidity are redundant. Hence we will be using select features, not all.<jupyter_code>print(
"The selected parameters are:",
", ".join([titles[i] for i in [0, 1, 5, 7, 8, 10, 11]]),
)
selected_features = [feature_keys[i] for i in [0, 1, 5, 7, 8, 10, 11]]
features = df[selected_features]
features.index = df[date_time_key]
features.head()
features = normalize(features.values, train_split)
features = pd.DataFrame(features)
features.head()
train_data = features.loc[0 : train_split - 1]
val_data = features.loc[train_split:]<jupyter_output><empty_output><jupyter_text>Training datasetThe training dataset labels start from the 792nd observation (720 + 72).<jupyter_code>start = past + future
end = start + train_split
x_train = train_data[[i for i in range(7)]].values
y_train = features.iloc[start:end][[1]]
sequence_length = int(past / step)<jupyter_output><empty_output><jupyter_text>The `timeseries_dataset_from_array` function takes in a sequence of data-points gathered atequal intervals, along with time series parameters such as length of thesequences/windows, spacing between two sequence/windows, etc., to produce batches ofsub-timeseries inputs and targets sampled from the main timeseries.<jupyter_code>dataset_train = keras.preprocessing.timeseries_dataset_from_array(
x_train,
y_train,
sequence_length=sequence_length,
sampling_rate=step,
batch_size=batch_size,
)<jupyter_output><empty_output><jupyter_text>Validation datasetThe validation dataset must not contain the last 792 rows as we won't have label data forthose records, hence 792 must be subtracted from the end of the data.The validation label dataset must start from 792 after train_split, hence we must addpast + future (792) to label_start.<jupyter_code>x_end = len(val_data) - past - future
label_start = train_split + past + future
x_val = val_data.iloc[:x_end][[i for i in range(7)]].values
y_val = features.iloc[label_start:][[1]]
dataset_val = keras.preprocessing.timeseries_dataset_from_array(
x_val,
y_val,
sequence_length=sequence_length,
sampling_rate=step,
batch_size=batch_size,
)
for batch in dataset_train.take(1):
inputs, targets = batch
print("Input shape:", inputs.numpy().shape)
print("Target shape:", targets.numpy().shape)<jupyter_output><empty_output><jupyter_text>Training<jupyter_code>inputs = keras.layers.Input(shape=(inputs.shape[1], inputs.shape[2]))
lstm_out = keras.layers.LSTM(32)(inputs)
outputs = keras.layers.Dense(1)(lstm_out)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss="mse")
model.summary()<jupyter_output><empty_output><jupyter_text>We'll use the `ModelCheckpoint` callback to regularly save checkpoints, andthe `EarlyStopping` callback to interrupt training when the validation lossis no longer improving.<jupyter_code>path_checkpoint = "model_checkpoint.weights.h5"
es_callback = keras.callbacks.EarlyStopping(monitor="val_loss", min_delta=0, patience=5)
modelckpt_callback = keras.callbacks.ModelCheckpoint(
monitor="val_loss",
filepath=path_checkpoint,
verbose=1,
save_weights_only=True,
save_best_only=True,
)
history = model.fit(
dataset_train,
epochs=epochs,
validation_data=dataset_val,
callbacks=[es_callback, modelckpt_callback],
)<jupyter_output><empty_output><jupyter_text>We can visualize the loss with the function below. After one point, the loss stopsdecreasing.<jupyter_code>def visualize_loss(history, title):
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, "b", label="Training loss")
plt.plot(epochs, val_loss, "r", label="Validation loss")
plt.title(title)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
visualize_loss(history, "Training and Validation Loss")<jupyter_output><empty_output><jupyter_text>PredictionThe trained model above is now able to make predictions for 5 sets of values fromvalidation set.<jupyter_code>def show_plot(plot_data, delta, title):
labels = ["History", "True Future", "Model Prediction"]
marker = [".-", "rx", "go"]
time_steps = list(range(-(plot_data[0].shape[0]), 0))
if delta:
future = delta
else:
future = 0
plt.title(title)
for i, val in enumerate(plot_data):
if i:
plt.plot(future, plot_data[i], marker[i], markersize=10, label=labels[i])
else:
plt.plot(time_steps, plot_data[i].flatten(), marker[i], label=labels[i])
plt.legend()
plt.xlim([time_steps[0], (future + 5) * 2])
plt.xlabel("Time-Step")
plt.show()
return
for x, y in dataset_val.take(5):
show_plot(
[x[0][:, 1].numpy(), y[0].numpy(), model.predict(x)[0]],
12,
"Single Step Prediction",
)<jupyter_output><empty_output> | keras-io/examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb/0 | {
"file_path": "keras-io/examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb",
"repo_id": "keras-io",
"token_count": 4202
} | 97 |
"""
Title: Convolutional autoencoder for image denoising
Author: [Santiago L. Valdarrama](https://twitter.com/svpino)
Date created: 2021/03/01
Last modified: 2021/03/01
Description: How to train a deep convolutional autoencoder for image denoising.
Accelerator: GPU
"""
"""
## Introduction
This example demonstrates how to implement a deep convolutional autoencoder
for image denoising, mapping noisy digits images from the MNIST dataset to
clean digits images. This implementation is based on an original blog post
titled [Building Autoencoders in Keras](https://blog.keras.io/building-autoencoders-in-keras.html)
by [Franรงois Chollet](https://twitter.com/fchollet).
"""
"""
## Setup
"""
import numpy as np
import matplotlib.pyplot as plt
from keras import layers
from keras.datasets import mnist
from keras.models import Model
def preprocess(array):
"""Normalizes the supplied array and reshapes it."""
array = array.astype("float32") / 255.0
array = np.reshape(array, (len(array), 28, 28, 1))
return array
def noise(array):
"""Adds random noise to each image in the supplied array."""
noise_factor = 0.4
noisy_array = array + noise_factor * np.random.normal(
loc=0.0, scale=1.0, size=array.shape
)
return np.clip(noisy_array, 0.0, 1.0)
def display(array1, array2):
"""Displays ten random images from each array."""
n = 10
indices = np.random.randint(len(array1), size=n)
images1 = array1[indices, :]
images2 = array2[indices, :]
plt.figure(figsize=(20, 4))
for i, (image1, image2) in enumerate(zip(images1, images2)):
ax = plt.subplot(2, n, i + 1)
plt.imshow(image1.reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(image2.reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
## Prepare the data
"""
# Since we only need images from the dataset to encode and decode, we
# won't use the labels.
(train_data, _), (test_data, _) = mnist.load_data()
# Normalize and reshape the data
train_data = preprocess(train_data)
test_data = preprocess(test_data)
# Create a copy of the data with added noise
noisy_train_data = noise(train_data)
noisy_test_data = noise(test_data)
# Display the train data and a version of it with added noise
display(train_data, noisy_train_data)
"""
## Build the autoencoder
We are going to use the Functional API to build our convolutional autoencoder.
"""
input = layers.Input(shape=(28, 28, 1))
# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)
# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()
"""
Now we can train our autoencoder using `train_data` as both our input data
and target. Notice we are setting up the validation data using the same
format.
"""
autoencoder.fit(
x=train_data,
y=train_data,
epochs=50,
batch_size=128,
shuffle=True,
validation_data=(test_data, test_data),
)
"""
Let's predict on our test dataset and display the original images together with
the predictions from our autoencoder.
Notice how the predictions are pretty close to the original images, although
not quite the same.
"""
predictions = autoencoder.predict(test_data)
display(test_data, predictions)
"""
Now that we know that our autoencoder works, let's retrain it using the noisy
data as our input and the clean data as our target. We want our autoencoder to
learn how to denoise the images.
"""
autoencoder.fit(
x=noisy_train_data,
y=train_data,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(noisy_test_data, test_data),
)
"""
Let's now predict on the noisy data and display the results of our autoencoder.
Notice how the autoencoder does an amazing job at removing the noise from the
input images.
"""
predictions = autoencoder.predict(noisy_test_data)
display(noisy_test_data, predictions)
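"""
As a minimal usage sketch (not part of the original example), you can also denoise a
single image by keeping a batch dimension when calling `predict`. The snippet below only
assumes the `autoencoder` and `noisy_test_data` objects defined above.
"""

sample = noisy_test_data[:1]  # keep a batch dimension: shape (1, 28, 28, 1)
denoised = autoencoder.predict(sample)

plt.figure(figsize=(4, 2))
plt.subplot(1, 2, 1)
plt.imshow(sample[0].reshape(28, 28), cmap="gray")
plt.axis("off")
plt.subplot(1, 2, 2)
plt.imshow(denoised[0].reshape(28, 28), cmap="gray")
plt.axis("off")
plt.show()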
| keras-io/examples/vision/autoencoder.py/0 | {
"file_path": "keras-io/examples/vision/autoencoder.py",
"repo_id": "keras-io",
"token_count": 1660
} | 98 |
"""
Title: FixRes: Fixing train-test resolution discrepancy
Author: [Sayak Paul](https://twitter.com/RisingSayak)
Date created: 2021/10/08
Last modified: 2021/10/10
Description: Mitigating resolution discrepancy between training and test sets.
Accelerator: GPU
"""
"""
## Introduction
It is a common practice to use the same input image resolution while training and testing
vision models. However, as investigated in
[Fixing the train-test resolution discrepancy](https://arxiv.org/abs/1906.06423)
(Touvron et al.), this practice leads to suboptimal performance. Data augmentation
is an indispensable part of the training process of deep neural networks. For vision models, we
typically use random resized crops during training and center crops during inference.
This introduces a discrepancy in the object sizes seen during training and inference.
As shown by Touvron et al., if we can fix this discrepancy, we can significantly
boost model performance.
In this example, we implement the **FixRes** techniques introduced by Touvron et al.
to fix this discrepancy.
"""
"""
## Imports
"""
import keras
from keras import layers
import tensorflow as tf # just for image processing and pipeline
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import matplotlib.pyplot as plt
"""
## Load the `tf_flowers` dataset
"""
train_dataset, val_dataset = tfds.load(
"tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)
num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")
"""
## Data preprocessing utilities
"""
"""
We create three datasets:
1. A dataset with a smaller resolution - 128x128.
2. Two datasets with a larger resolution - 224x224.
We will apply different augmentation transforms to the larger-resolution datasets.
The idea of FixRes is to first train a model on a smaller resolution dataset and then fine-tune
it on a larger resolution dataset. This simple yet effective recipe leads to non-trivial performance
improvements. Please refer to the [original paper](https://arxiv.org/abs/1906.06423) for
results.
"""
# Reference: https://github.com/facebookresearch/FixRes/blob/main/transforms_v2.py.
batch_size = 32
auto = tf.data.AUTOTUNE
smaller_size = 128
bigger_size = 224
size_for_resizing = int((bigger_size / smaller_size) * bigger_size)
central_crop_layer = layers.CenterCrop(bigger_size, bigger_size)
def preprocess_initial(train, image_size):
"""Initial preprocessing function for training on smaller resolution.
For training, do random_horizontal_flip -> random_crop.
For validation, just resize.
No color-jittering has been used.
"""
def _pp(image, label, train):
if train:
channels = image.shape[-1]
begin, size, _ = tf.image.sample_distorted_bounding_box(
tf.shape(image),
tf.zeros([0, 0, 4], tf.float32),
area_range=(0.05, 1.0),
min_object_covered=0,
use_image_if_no_bounding_boxes=True,
)
image = tf.slice(image, begin, size)
image.set_shape([None, None, channels])
image = tf.image.resize(image, [image_size, image_size])
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, [image_size, image_size])
return image, label
return _pp
def preprocess_finetune(image, label, train):
"""Preprocessing function for fine-tuning on a higher resolution.
For training, resize to a bigger resolution to maintain the ratio ->
random_horizontal_flip -> center_crop.
For validation, do the same without any horizontal flipping.
No color-jittering has been used.
"""
image = tf.image.resize(image, [size_for_resizing, size_for_resizing])
if train:
image = tf.image.random_flip_left_right(image)
image = central_crop_layer(image[None, ...])[0]
return image, label
def make_dataset(
dataset: tf.data.Dataset,
train: bool,
image_size: int = smaller_size,
fixres: bool = True,
num_parallel_calls=auto,
):
if image_size not in [smaller_size, bigger_size]:
raise ValueError(f"{image_size} resolution is not supported.")
# Determine which preprocessing function we are using.
if image_size == smaller_size:
preprocess_func = preprocess_initial(train, image_size)
elif not fixres and image_size == bigger_size:
preprocess_func = preprocess_initial(train, image_size)
else:
preprocess_func = preprocess_finetune
dataset = dataset.map(
lambda x, y: preprocess_func(x, y, train),
num_parallel_calls=num_parallel_calls,
)
dataset = dataset.batch(batch_size)
if train:
dataset = dataset.shuffle(batch_size * 10)
return dataset.prefetch(num_parallel_calls)
"""
Notice how the augmentation transforms vary for the kind of dataset we are preparing.
"""
"""
## Prepare datasets
"""
initial_train_dataset = make_dataset(train_dataset, train=True, image_size=smaller_size)
initial_val_dataset = make_dataset(val_dataset, train=False, image_size=smaller_size)
finetune_train_dataset = make_dataset(train_dataset, train=True, image_size=bigger_size)
finetune_val_dataset = make_dataset(val_dataset, train=False, image_size=bigger_size)
vanilla_train_dataset = make_dataset(
train_dataset, train=True, image_size=bigger_size, fixres=False
)
vanilla_val_dataset = make_dataset(
val_dataset, train=False, image_size=bigger_size, fixres=False
)
"""
## Visualize the datasets
"""
def visualize_dataset(batch_images):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(batch_images[n].numpy().astype("int"))
plt.axis("off")
plt.show()
print(f"Batch shape: {batch_images.shape}.")
# Smaller resolution.
initial_sample_images, _ = next(iter(initial_train_dataset))
visualize_dataset(initial_sample_images)
# Bigger resolution, only for fine-tuning.
finetune_sample_images, _ = next(iter(finetune_train_dataset))
visualize_dataset(finetune_sample_images)
# Bigger resolution, with the same augmentation transforms as
# the smaller resolution dataset.
vanilla_sample_images, _ = next(iter(vanilla_train_dataset))
visualize_dataset(vanilla_sample_images)
"""
## Model training utilities
We train multiple variants of ResNet50V2
([He et al.](https://arxiv.org/abs/1603.05027)):
1. On the smaller resolution dataset (128x128). It will be trained from scratch.
2. Then fine-tune the model from 1 on the larger resolution (224x224) dataset.
3. Train another ResNet50V2 from scratch on the larger resolution dataset.
As a reminder, the larger resolution datasets differ in terms of their augmentation
transforms.
"""
def get_training_model(num_classes=5):
inputs = layers.Input((None, None, 3))
resnet_base = keras.applications.ResNet50V2(
include_top=False, weights=None, pooling="avg"
)
resnet_base.trainable = True
x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs)
x = resnet_base(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs, outputs)
def train_and_evaluate(
model,
train_ds,
val_ds,
epochs,
learning_rate=1e-3,
use_early_stopping=False,
):
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
model.compile(
optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
if use_early_stopping:
es_callback = keras.callbacks.EarlyStopping(patience=5)
callbacks = [es_callback]
else:
callbacks = None
model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs,
callbacks=callbacks,
)
_, accuracy = model.evaluate(val_ds)
print(f"Top-1 accuracy on the validation set: {accuracy*100:.2f}%.")
return model
"""
## Experiment 1: Train on 128x128 and then fine-tune on 224x224
"""
epochs = 30
smaller_res_model = get_training_model()
smaller_res_model = train_and_evaluate(
smaller_res_model, initial_train_dataset, initial_val_dataset, epochs
)
"""
### Freeze all the layers except for the final Batch Normalization layer
For fine-tuning, we train only two layers:
* The final Batch Normalization ([Ioffe et al.](https://arxiv.org/abs/1502.03167)) layer.
* The classification layer.
We are unfreezing the final Batch Normalization layer to compensate for the change in
activation statistics before the global average pooling layer. As shown in
[the paper](https://arxiv.org/abs/1906.06423), unfreezing the final Batch
Normalization layer is enough.
For a comprehensive guide on fine-tuning models in Keras, refer to
[this tutorial](https://keras.io/guides/transfer_learning/).
"""
for layer in smaller_res_model.layers[2].layers:
layer.trainable = False
smaller_res_model.layers[2].get_layer("post_bn").trainable = True
epochs = 10
# Use a lower learning rate during fine-tuning.
bigger_res_model = train_and_evaluate(
smaller_res_model,
finetune_train_dataset,
finetune_val_dataset,
epochs,
learning_rate=1e-4,
)
"""
## Experiment 2: Train a model on 224x224 resolution from scratch
Now, we train another model from scratch on the larger resolution dataset. Recall that
the augmentation transforms used in this dataset are different from before.
"""
epochs = 30
vanilla_bigger_res_model = get_training_model()
vanilla_bigger_res_model = train_and_evaluate(
vanilla_bigger_res_model, vanilla_train_dataset, vanilla_val_dataset, epochs
)
"""
As we can see from the cells above, FixRes leads to better performance. Another
advantage of FixRes is the reduced total training time and GPU memory usage.
FixRes is model-agnostic: you can use it on any image classification model
to potentially boost performance.
You can find more results
[here](https://tensorboard.dev/experiment/BQOg28w0TlmvuJYeqsVntw)
that were gathered by running the same code with different random seeds.
"""
| keras-io/examples/vision/fixres.py/0 | {
"file_path": "keras-io/examples/vision/fixres.py",
"repo_id": "keras-io",
"token_count": 3636
} | 99 |
<jupyter_start><jupyter_text>Image classification with Vision Transformer**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)**Date created:** 2021/01/18**Last modified:** 2021/01/18**Description:** Implementing the Vision Transformer (ViT) model for image classification. IntroductionThis example implements the [Vision Transformer (ViT)](https://arxiv.org/abs/2010.11929)model by Alexey Dosovitskiy et al. for image classification,and demonstrates it on the CIFAR-100 dataset.The ViT model applies the Transformer architecture with self-attention to sequences ofimage patches, without using convolution layers. Setup<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "jax" # @param ["tensorflow", "jax", "torch"]
import keras
from keras import layers
from keras import ops
import numpy as np
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>Prepare the data<jupyter_code>num_classes = 100
input_shape = (32, 32, 3)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")<jupyter_output><empty_output><jupyter_text>Configure the hyperparameters<jupyter_code>learning_rate = 0.001
weight_decay = 0.0001
batch_size = 256
num_epochs = 10 # For real training, use num_epochs=100. 10 is a test value
image_size = 72 # We'll resize input images to this size
patch_size = 6 # Size of the patches to be extract from the input images
num_patches = (image_size // patch_size) ** 2
projection_dim = 64
num_heads = 4
transformer_units = [
projection_dim * 2,
projection_dim,
] # Size of the transformer layers
transformer_layers = 8
mlp_head_units = [
2048,
1024,
] # Size of the dense layers of the final classifier<jupyter_output><empty_output><jupyter_text>Use data augmentation<jupyter_code>data_augmentation = keras.Sequential(
[
layers.Normalization(),
layers.Resizing(image_size, image_size),
layers.RandomFlip("horizontal"),
layers.RandomRotation(factor=0.02),
layers.RandomZoom(height_factor=0.2, width_factor=0.2),
],
name="data_augmentation",
)
# Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train)<jupyter_output><empty_output><jupyter_text>Implement multilayer perceptron (MLP)<jupyter_code>def mlp(x, hidden_units, dropout_rate):
for units in hidden_units:
x = layers.Dense(units, activation=keras.activations.gelu)(x)
x = layers.Dropout(dropout_rate)(x)
return x<jupyter_output><empty_output><jupyter_text>Implement patch creation as a layer<jupyter_code>class Patches(layers.Layer):
def __init__(self, patch_size):
super().__init__()
self.patch_size = patch_size
def call(self, images):
input_shape = ops.shape(images)
batch_size = input_shape[0]
height = input_shape[1]
width = input_shape[2]
channels = input_shape[3]
num_patches_h = height // self.patch_size
num_patches_w = width // self.patch_size
patches = keras.ops.image.extract_patches(images, size=self.patch_size)
patches = ops.reshape(
patches,
(
batch_size,
num_patches_h * num_patches_w,
self.patch_size * self.patch_size * channels,
),
)
return patches
def get_config(self):
config = super().get_config()
config.update({"patch_size": self.patch_size})
return config<jupyter_output><empty_output><jupyter_text>Let's display patches for a sample image<jupyter_code>plt.figure(figsize=(4, 4))
image = x_train[np.random.choice(range(x_train.shape[0]))]
plt.imshow(image.astype("uint8"))
plt.axis("off")
resized_image = ops.image.resize(
ops.convert_to_tensor([image]), size=(image_size, image_size)
)
patches = Patches(patch_size)(resized_image)
print(f"Image size: {image_size} X {image_size}")
print(f"Patch size: {patch_size} X {patch_size}")
print(f"Patches per image: {patches.shape[1]}")
print(f"Elements per patch: {patches.shape[-1]}")
n = int(np.sqrt(patches.shape[1]))
plt.figure(figsize=(4, 4))
for i, patch in enumerate(patches[0]):
ax = plt.subplot(n, n, i + 1)
patch_img = ops.reshape(patch, (patch_size, patch_size, 3))
plt.imshow(ops.convert_to_numpy(patch_img).astype("uint8"))
plt.axis("off")<jupyter_output><empty_output><jupyter_text>Implement the patch encoding layerThe `PatchEncoder` layer will linearly transform a patch by projecting it into avector of size `projection_dim`. In addition, it adds a learnable positionembedding to the projected vector.<jupyter_code>class PatchEncoder(layers.Layer):
def __init__(self, num_patches, projection_dim):
super().__init__()
self.num_patches = num_patches
self.projection = layers.Dense(units=projection_dim)
self.position_embedding = layers.Embedding(
input_dim=num_patches, output_dim=projection_dim
)
def call(self, patch):
positions = ops.expand_dims(
ops.arange(start=0, stop=self.num_patches, step=1), axis=0
)
projected_patches = self.projection(patch)
encoded = projected_patches + self.position_embedding(positions)
return encoded
def get_config(self):
config = super().get_config()
config.update({"num_patches": self.num_patches})
return config<jupyter_output><empty_output><jupyter_text>Build the ViT modelThe ViT model consists of multiple Transformer blocks,which use the `layers.MultiHeadAttention` layer as a self-attention mechanismapplied to the sequence of patches. The Transformer blocks produce a`[batch_size, num_patches, projection_dim]` tensor, which is processed via anclassifier head with softmax to produce the final class probabilities output.Unlike the technique described in the [paper](https://arxiv.org/abs/2010.11929),which prepends a learnable embedding to the sequence of encoded patches to serveas the image representation, all the outputs of the final Transformer block arereshaped with `layers.Flatten()` and used as the imagerepresentation input to the classifier head.Note that the `layers.GlobalAveragePooling1D` layercould also be used instead to aggregate the outputs of the Transformer block,especially when the number of patches and the projection dimensions are large.<jupyter_code>def create_vit_classifier():
inputs = keras.Input(shape=input_shape)
# Augment data.
augmented = data_augmentation(inputs)
# Create patches.
patches = Patches(patch_size)(augmented)
# Encode patches.
encoded_patches = PatchEncoder(num_patches, projection_dim)(patches)
# Create multiple layers of the Transformer block.
for _ in range(transformer_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=projection_dim, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, encoded_patches])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# MLP.
x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1)
# Skip connection 2.
encoded_patches = layers.Add()([x3, x2])
# Create a [batch_size, projection_dim] tensor.
representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
representation = layers.Flatten()(representation)
representation = layers.Dropout(0.5)(representation)
# Add MLP.
features = mlp(representation, hidden_units=mlp_head_units, dropout_rate=0.5)
# Classify outputs.
logits = layers.Dense(num_classes)(features)
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=logits)
    return model<jupyter_output><empty_output><jupyter_text>Compile, train, and evaluate the model<jupyter_code>def run_experiment(model):
optimizer = keras.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay
)
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
checkpoint_filepath = "/tmp/checkpoint.weights.h5"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_accuracy",
save_best_only=True,
save_weights_only=True,
)
history = model.fit(
x=x_train,
y=y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=0.1,
callbacks=[checkpoint_callback],
)
model.load_weights(checkpoint_filepath)
_, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
return history
vit_classifier = create_vit_classifier()
history = run_experiment(vit_classifier)
def plot_history(item):
plt.plot(history.history[item], label=item)
plt.plot(history.history["val_" + item], label="val_" + item)
plt.xlabel("Epochs")
plt.ylabel(item)
plt.title("Train and Validation {} Over Epochs".format(item), fontsize=14)
plt.legend()
plt.grid()
plt.show()
plot_history("loss")
plot_history("top-5-accuracy")<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/image_classification_with_vision_transformer.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/image_classification_with_vision_transformer.ipynb",
"repo_id": "keras-io",
"token_count": 3819
} | 100 |
<jupyter_start><jupyter_text>Natural language image search with a Dual Encoder**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)**Date created:** 2021/01/30**Last modified:** 2021/01/30**Description:** Implementation of a dual encoder model for retrieving images that match natural language queries. IntroductionThe example demonstrates how to build a dual encoder (also known as two-tower) neural networkmodel to search for images using natural language. The model is inspired bythe [CLIP](https://openai.com/blog/clip/)approach, introduced by Alec Radford et al. The idea is to train a vision encoder and a textencoder jointly to project the representation of images and their captions into the same embeddingspace, such that the caption embeddings are located near the embeddings of the images they describe.This example requires TensorFlow 2.4 or higher.In addition, [TensorFlow Hub](https://www.tensorflow.org/hub)and [TensorFlow Text](https://www.tensorflow.org/tutorials/tensorflow_text/intro)are required for the BERT model, and [TensorFlow Addons](https://www.tensorflow.org/addons)is required for the AdamW optimizer. These libraries can be installed using thefollowing command:```pythonpip install -q -U tensorflow-hub tensorflow-text tensorflow-addons``` Setup<jupyter_code>import os
import collections
import json
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_hub as hub
import tensorflow_text as text
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tqdm import tqdm
# Suppressing tf.hub warnings
tf.get_logger().setLevel("ERROR")<jupyter_output><empty_output><jupyter_text>Prepare the dataWe will use the [MS-COCO](https://cocodataset.org/home) dataset to train ourdual encoder model. MS-COCO contains over 82,000 images, each of which has at least5 different caption annotations. The dataset is usually used for[image captioning](https://www.tensorflow.org/tutorials/text/image_captioning)tasks, but we can repurpose the image-caption pairs to train our dual encodermodel for image search.Download and extract the dataFirst, let's download the dataset, which consists of two compressed folders:one with images, and the other with associated image captions.Note that the compressed images folder is 13GB in size.<jupyter_code>root_dir = "datasets"
annotations_dir = os.path.join(root_dir, "annotations")
images_dir = os.path.join(root_dir, "train2014")
tfrecords_dir = os.path.join(root_dir, "tfrecords")
annotation_file = os.path.join(annotations_dir, "captions_train2014.json")
# Download caption annotation files
if not os.path.exists(annotations_dir):
annotation_zip = tf.keras.utils.get_file(
"captions.zip",
cache_dir=os.path.abspath("."),
origin="http://images.cocodataset.org/annotations/annotations_trainval2014.zip",
extract=True,
)
os.remove(annotation_zip)
# Download image files
if not os.path.exists(images_dir):
image_zip = tf.keras.utils.get_file(
"train2014.zip",
cache_dir=os.path.abspath("."),
origin="http://images.cocodataset.org/zips/train2014.zip",
extract=True,
)
os.remove(image_zip)
print("Dataset is downloaded and extracted successfully.")
with open(annotation_file, "r") as f:
annotations = json.load(f)["annotations"]
image_path_to_caption = collections.defaultdict(list)
for element in annotations:
caption = f"{element['caption'].lower().rstrip('.')}"
image_path = images_dir + "/COCO_train2014_" + "%012d.jpg" % (element["image_id"])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
print(f"Number of images: {len(image_paths)}")<jupyter_output><empty_output><jupyter_text>Process and save the data to TFRecord filesYou can change the `sample_size` parameter to control many image-caption pairswill be used for training the dual encoder model.In this example we set `train_size` to 30,000 images,which is about 35% of the dataset. We use 2 captions for eachimage, thus producing 60,000 image-caption pairs. The size of the training setaffects the quality of the produced encoders, but more examples would lead tolonger training time.<jupyter_code>train_size = 30000
valid_size = 5000
captions_per_image = 2
images_per_file = 2000
train_image_paths = image_paths[:train_size]
num_train_files = int(np.ceil(train_size / images_per_file))
train_files_prefix = os.path.join(tfrecords_dir, "train")
valid_image_paths = image_paths[-valid_size:]
num_valid_files = int(np.ceil(valid_size / images_per_file))
valid_files_prefix = os.path.join(tfrecords_dir, "valid")
tf.io.gfile.makedirs(tfrecords_dir)
def bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def create_example(image_path, caption):
feature = {
"caption": bytes_feature(caption.encode()),
"raw_image": bytes_feature(tf.io.read_file(image_path).numpy()),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
def write_tfrecords(file_name, image_paths):
caption_list = []
image_path_list = []
for image_path in image_paths:
captions = image_path_to_caption[image_path][:captions_per_image]
caption_list.extend(captions)
image_path_list.extend([image_path] * len(captions))
with tf.io.TFRecordWriter(file_name) as writer:
for example_idx in range(len(image_path_list)):
example = create_example(
image_path_list[example_idx], caption_list[example_idx]
)
writer.write(example.SerializeToString())
return example_idx + 1
def write_data(image_paths, num_files, files_prefix):
example_counter = 0
for file_idx in tqdm(range(num_files)):
file_name = files_prefix + "-%02d.tfrecord" % (file_idx)
start_idx = images_per_file * file_idx
end_idx = start_idx + images_per_file
example_counter += write_tfrecords(file_name, image_paths[start_idx:end_idx])
return example_counter
train_example_count = write_data(train_image_paths, num_train_files, train_files_prefix)
print(f"{train_example_count} training examples were written to tfrecord files.")
valid_example_count = write_data(valid_image_paths, num_valid_files, valid_files_prefix)
print(f"{valid_example_count} evaluation examples were written to tfrecord files.")<jupyter_output><empty_output><jupyter_text>Create `tf.data.Dataset` for training and evaluation<jupyter_code>feature_description = {
"caption": tf.io.FixedLenFeature([], tf.string),
"raw_image": tf.io.FixedLenFeature([], tf.string),
}
def read_example(example):
features = tf.io.parse_single_example(example, feature_description)
raw_image = features.pop("raw_image")
features["image"] = tf.image.resize(
tf.image.decode_jpeg(raw_image, channels=3), size=(299, 299)
)
return features
def get_dataset(file_pattern, batch_size):
return (
tf.data.TFRecordDataset(tf.data.Dataset.list_files(file_pattern))
.map(
read_example,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False,
)
.shuffle(batch_size * 10)
.prefetch(buffer_size=tf.data.AUTOTUNE)
.batch(batch_size)
)<jupyter_output><empty_output><jupyter_text>Implement the projection headThe projection head is used to transform the image and the text embeddings tothe same embedding space with the same dimensionality.<jupyter_code>def project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
):
projected_embeddings = layers.Dense(units=projection_dims)(embeddings)
for _ in range(num_projection_layers):
x = tf.nn.gelu(projected_embeddings)
x = layers.Dense(projection_dims)(x)
x = layers.Dropout(dropout_rate)(x)
x = layers.Add()([projected_embeddings, x])
projected_embeddings = layers.LayerNormalization()(x)
return projected_embeddings<jupyter_output><empty_output><jupyter_text>Implement the vision encoderIn this example, we use [Xception](https://keras.io/api/applications/xception/)from [Keras Applications](https://keras.io/api/applications/) as the base for thevision encoder.<jupyter_code>def create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained Xception model to be used as the base encoder.
xception = keras.applications.Xception(
include_top=False, weights="imagenet", pooling="avg"
)
# Set the trainability of the base encoder.
for layer in xception.layers:
layer.trainable = trainable
# Receive the images as inputs.
inputs = layers.Input(shape=(299, 299, 3), name="image_input")
# Preprocess the input image.
xception_input = tf.keras.applications.xception.preprocess_input(inputs)
# Generate the embeddings for the images using the xception model.
embeddings = xception(xception_input)
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the vision encoder model.
return keras.Model(inputs, outputs, name="vision_encoder")<jupyter_output><empty_output><jupyter_text>Implement the text encoderWe use [BERT](https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1)from [TensorFlow Hub](https://tfhub.dev) as the text encoder<jupyter_code>def create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the BERT preprocessing module.
preprocess = hub.KerasLayer(
"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/2",
name="text_preprocessing",
)
# Load the pre-trained BERT model to be used as the base encoder.
bert = hub.KerasLayer(
"https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1",
"bert",
)
# Set the trainability of the base encoder.
bert.trainable = trainable
# Receive the text as inputs.
inputs = layers.Input(shape=(), dtype=tf.string, name="text_input")
# Preprocess the text.
bert_inputs = preprocess(inputs)
# Generate embeddings for the preprocessed text using the BERT model.
embeddings = bert(bert_inputs)["pooled_output"]
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the text encoder model.
return keras.Model(inputs, outputs, name="text_encoder")<jupyter_output><empty_output><jupyter_text>Implement the dual encoderTo calculate the loss, we compute the pairwise dot-product similarity betweeneach `caption_i` and `images_j` in the batch as the predictions.The target similarity between `caption_i` and `image_j` is computed asthe average of the (dot-product similarity between `caption_i` and `caption_j`)and (the dot-product similarity between `image_i` and `image_j`).Then, we use crossentropy to compute the loss between the targets and the predictions.<jupyter_code>class DualEncoder(keras.Model):
def __init__(self, text_encoder, image_encoder, temperature=1.0, **kwargs):
super().__init__(**kwargs)
self.text_encoder = text_encoder
self.image_encoder = image_encoder
self.temperature = temperature
self.loss_tracker = keras.metrics.Mean(name="loss")
@property
def metrics(self):
return [self.loss_tracker]
def call(self, features, training=False):
# Place each encoder on a separate GPU (if available).
# TF will fallback on available devices if there are fewer than 2 GPUs.
with tf.device("/gpu:0"):
# Get the embeddings for the captions.
caption_embeddings = text_encoder(features["caption"], training=training)
with tf.device("/gpu:1"):
# Get the embeddings for the images.
image_embeddings = vision_encoder(features["image"], training=training)
return caption_embeddings, image_embeddings
def compute_loss(self, caption_embeddings, image_embeddings):
# logits[i][j] is the dot_similarity(caption_i, image_j).
logits = (
tf.matmul(caption_embeddings, image_embeddings, transpose_b=True)
/ self.temperature
)
# images_similarity[i][j] is the dot_similarity(image_i, image_j).
images_similarity = tf.matmul(
image_embeddings, image_embeddings, transpose_b=True
)
# captions_similarity[i][j] is the dot_similarity(caption_i, caption_j).
captions_similarity = tf.matmul(
caption_embeddings, caption_embeddings, transpose_b=True
)
        # targets[i][j] = average of dot_similarity(caption_i, caption_j) and dot_similarity(image_i, image_j).
targets = keras.activations.softmax(
(captions_similarity + images_similarity) / (2 * self.temperature)
)
# Compute the loss for the captions using crossentropy
captions_loss = keras.losses.categorical_crossentropy(
y_true=targets, y_pred=logits, from_logits=True
)
# Compute the loss for the images using crossentropy
images_loss = keras.losses.categorical_crossentropy(
y_true=tf.transpose(targets), y_pred=tf.transpose(logits), from_logits=True
)
# Return the mean of the loss over the batch.
return (captions_loss + images_loss) / 2
def train_step(self, features):
with tf.GradientTape() as tape:
# Forward pass
caption_embeddings, image_embeddings = self(features, training=True)
loss = self.compute_loss(caption_embeddings, image_embeddings)
# Backward pass
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
# Monitor loss
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, features):
caption_embeddings, image_embeddings = self(features, training=False)
loss = self.compute_loss(caption_embeddings, image_embeddings)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}<jupyter_output><empty_output><jupyter_text>Train the dual encoder modelIn this experiment, we freeze the base encoders for text and images, and make onlythe projection head trainable.<jupyter_code>num_epochs = 5 # In practice, train for at least 30 epochs
batch_size = 256
vision_encoder = create_vision_encoder(
num_projection_layers=1, projection_dims=256, dropout_rate=0.1
)
text_encoder = create_text_encoder(
num_projection_layers=1, projection_dims=256, dropout_rate=0.1
)
dual_encoder = DualEncoder(text_encoder, vision_encoder, temperature=0.05)
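# Illustrative aside (not part of the original example): a quick numeric sketch of the
# soft-target construction used by `DualEncoder.compute_loss` above, on a made-up
# 2-sample batch. The toy embedding values below are arbitrary and only meant to show
# how the caption-vs-image logits, the averaged similarity targets, and the
# crossentropy loss relate to each other.
toy_captions = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # pretend caption embeddings
toy_images = tf.constant([[0.9, 0.1], [0.2, 0.8]])  # pretend image embeddings
toy_logits = tf.matmul(toy_captions, toy_images, transpose_b=True)  # dot_similarity(caption_i, image_j)
toy_targets = keras.activations.softmax(
    (
        tf.matmul(toy_captions, toy_captions, transpose_b=True)
        + tf.matmul(toy_images, toy_images, transpose_b=True)
    )
    / 2
)
toy_loss = keras.losses.categorical_crossentropy(
    y_true=toy_targets, y_pred=toy_logits, from_logits=True
)
print("Toy targets:\n", toy_targets.numpy())
print("Toy per-caption loss:", toy_loss.numpy())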
dual_encoder.compile(
optimizer=tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.001)
)<jupyter_output><empty_output><jupyter_text>Note that training the model with 60,000 image-caption pairs, with a batch size of 256, takes around 12 minutes per epoch using a V100 GPU accelerator. If 2 GPUs are available, the epoch takes around 8 minutes.<jupyter_code>print(f"Number of GPUs: {len(tf.config.list_physical_devices('GPU'))}")
print(f"Number of examples (caption-image pairs): {train_example_count}")
print(f"Batch size: {batch_size}")
print(f"Steps per epoch: {int(np.ceil(train_example_count / batch_size))}")
train_dataset = get_dataset(os.path.join(tfrecords_dir, "train-*.tfrecord"), batch_size)
valid_dataset = get_dataset(os.path.join(tfrecords_dir, "valid-*.tfrecord"), batch_size)
# Create a learning rate scheduler callback.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=3
)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor="val_loss", patience=5, restore_best_weights=True
)
history = dual_encoder.fit(
train_dataset,
epochs=num_epochs,
validation_data=valid_dataset,
callbacks=[reduce_lr, early_stopping],
)
print("Training completed. Saving vision and text encoders...")
vision_encoder.save("vision_encoder")
text_encoder.save("text_encoder")
print("Models are saved.")<jupyter_output><empty_output><jupyter_text>Plotting the training loss:<jupyter_code>plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.ylabel("Loss")
plt.xlabel("Epoch")
plt.legend(["train", "valid"], loc="upper right")
plt.show()<jupyter_output><empty_output><jupyter_text>Search for images using natural language queries. We can then retrieve images corresponding to natural language queries via the following steps: 1. Generate embeddings for the images by feeding them into the `vision_encoder`. 2. Feed the natural language query to the `text_encoder` to generate a query embedding. 3. Compute the similarity between the query embedding and the image embeddings in the index to retrieve the indices of the top matches. 4. Look up the paths of the top matching images to display them. Note that, after training the `dual encoder`, only the fine-tuned `vision_encoder` and `text_encoder` models will be used, while the `dual_encoder` model will be discarded. Generate embeddings for the images. We load the images and feed them into the `vision_encoder` to generate their embeddings. In large scale systems, this step is performed using a parallel data processing framework, such as [Apache Spark](https://spark.apache.org) or [Apache Beam](https://beam.apache.org). Generating the image embeddings may take several minutes.<jupyter_code>print("Loading vision and text encoders...")
vision_encoder = keras.models.load_model("vision_encoder")
text_encoder = keras.models.load_model("text_encoder")
print("Models are loaded.")
def read_image(image_path):
image_array = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)
return tf.image.resize(image_array, (299, 299))
print(f"Generating embeddings for {len(image_paths)} images...")
image_embeddings = vision_encoder.predict(
tf.data.Dataset.from_tensor_slices(image_paths).map(read_image).batch(batch_size),
verbose=1,
)
print(f"Image embeddings shape: {image_embeddings.shape}.")<jupyter_output><empty_output><jupyter_text>Retrieve relevant imagesIn this example, we use exact matching by computing the dot product similaritybetween the input query embedding and the image embeddings, and retrieve the top kmatches. However, *approximate* similarity matching, using frameworks like[ScaNN](https://github.com/google-research/google-research/tree/master/scann),[Annoy](https://github.com/spotify/annoy), or [Faiss](https://github.com/facebookresearch/faiss)is preferred in real-time use cases to scale with a large number of images.<jupyter_code>def find_matches(image_embeddings, queries, k=9, normalize=True):
# Get the embedding for the query.
query_embedding = text_encoder(tf.convert_to_tensor(queries))
# Normalize the query and the image embeddings.
if normalize:
image_embeddings = tf.math.l2_normalize(image_embeddings, axis=1)
query_embedding = tf.math.l2_normalize(query_embedding, axis=1)
# Compute the dot product between the query and the image embeddings.
dot_similarity = tf.matmul(query_embedding, image_embeddings, transpose_b=True)
# Retrieve top k indices.
results = tf.math.top_k(dot_similarity, k).indices.numpy()
# Return matching image paths.
    return [[image_paths[idx] for idx in indices] for indices in results]<jupyter_output><empty_output><jupyter_text>Set the `query` variable to the type of images you want to search for. Try things like: 'a plate of healthy food', 'a woman wearing a hat is walking down a sidewalk', 'a bird sits near to the water', or 'wild animals are standing in a field'.<jupyter_code>query = "a family standing next to the ocean on a sandy beach with a surf board"
matches = find_matches(image_embeddings, [query], normalize=True)[0]
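# Optional aside (not part of the original example): approximate retrieval with Annoy.
# The exact dot-product search in `find_matches` works well at this scale, but for much larger
# image collections an approximate nearest-neighbour index is typically used instead. This
# sketch assumes the `annoy` package is installed (`pip install annoy`) and is skipped otherwise.
try:
    from annoy import AnnoyIndex

    annoy_index = AnnoyIndex(image_embeddings.shape[1], "dot")
    for item_id, embedding in enumerate(image_embeddings):
        annoy_index.add_item(item_id, embedding)
    annoy_index.build(10)  # 10 trees; more trees give better recall at a higher build cost
    query_embedding = text_encoder(tf.convert_to_tensor([query]))[0].numpy()
    approximate_indices = annoy_index.get_nns_by_vector(query_embedding, 9)
    print("Approximate matches:", [image_paths[idx] for idx in approximate_indices[:3]])
except ImportError:
    print("The annoy package is not installed; skipping the approximate retrieval sketch.")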
plt.figure(figsize=(20, 20))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(mpimg.imread(matches[i]))
plt.axis("off")<jupyter_output><empty_output><jupyter_text>Evaluate the retrieval qualityTo evaluate the dual encoder model, we use the captions as queries.We use the out-of-training-sample images and captions to evaluate the retrieval quality,using top k accuracy. A true prediction is counted if, for a given caption, its associated imageis retrieved within the top k matches.<jupyter_code>def compute_top_k_accuracy(image_paths, k=100):
hits = 0
num_batches = int(np.ceil(len(image_paths) / batch_size))
for idx in tqdm(range(num_batches)):
start_idx = idx * batch_size
end_idx = start_idx + batch_size
current_image_paths = image_paths[start_idx:end_idx]
queries = [
image_path_to_caption[image_path][0] for image_path in current_image_paths
]
result = find_matches(image_embeddings, queries, k)
hits += sum(
[
image_path in matches
for (image_path, matches) in list(zip(current_image_paths, result))
]
)
return hits / len(image_paths)
print("Scoring training data...")
train_accuracy = compute_top_k_accuracy(train_image_paths)
print(f"Train accuracy: {round(train_accuracy * 100, 3)}%")
print("Scoring evaluation data...")
eval_accuracy = compute_top_k_accuracy(image_paths[train_size:])
print(f"Eval accuracy: {round(eval_accuracy * 100, 3)}%")<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/nl_image_search.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/nl_image_search.ipynb",
"repo_id": "keras-io",
"token_count": 7940
} | 101 |
<jupyter_start><jupyter_text>Zero-DCE for low-light image enhancement. **Author:** [Soumik Rakshit](http://github.com/soumik12345) **Date created:** 2021/09/18 **Last modified:** 2023/07/15 **Description:** Implementing Zero-Reference Deep Curve Estimation for low-light image enhancement. Introduction. **Zero-Reference Deep Curve Estimation** or **Zero-DCE** formulates low-light image enhancement as the task of estimating an image-specific [*tonal curve*](https://en.wikipedia.org/wiki/Curve_(tonality)) with a deep neural network. In this example, we train a lightweight deep network, **DCE-Net**, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image. Zero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. The curve estimation process is done in such a way that it maintains the range of the enhanced image and preserves the contrast of neighboring pixels. This curve estimation is inspired by the curves adjustment used in photo editing software such as Adobe Photoshop, where users can adjust points throughout an image's tonal range. Zero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network. References: - [Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement](https://arxiv.org/abs/2001.06826) - [Curves adjustment in Adobe Photoshop](https://helpx.adobe.com/photoshop/using/curves-adjustment.html) Downloading LOLDataset. The **LoL Dataset** has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import random
import numpy as np
from glob import glob
from PIL import Image, ImageOps
import matplotlib.pyplot as plt
import keras
from keras import layers
import tensorflow as tf
!wget https://huggingface.co/datasets/geekyrakshit/LoL-Dataset/resolve/main/lol_dataset.zip
!unzip -q lol_dataset.zip && rm lol_dataset.zip<jupyter_output><empty_output><jupyter_text>Creating a TensorFlow Dataset. We use 400 low-light images from the LoL Dataset training set for training, and we use the remaining 85 low-light images for validation. We resize the images to size `256 x 256` to be used for both training and validation. Note that in order to train the DCE-Net, we will not require the corresponding enhanced images.<jupyter_code>IMAGE_SIZE = 256
BATCH_SIZE = 16
MAX_TRAIN_IMAGES = 400
def load_data(image_path):
image = tf.io.read_file(image_path)
image = tf.image.decode_png(image, channels=3)
image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
image = image / 255.0
return image
def data_generator(low_light_images):
dataset = tf.data.Dataset.from_tensor_slices((low_light_images))
dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
return dataset
train_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[:MAX_TRAIN_IMAGES]
val_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[MAX_TRAIN_IMAGES:]
test_low_light_images = sorted(glob("./lol_dataset/eval15/low/*"))
train_dataset = data_generator(train_low_light_images)
val_dataset = data_generator(val_low_light_images)
print("Train Dataset:", train_dataset)
print("Validation Dataset:", val_dataset)<jupyter_output><empty_output><jupyter_text>The Zero-DCE FrameworkThe goal of DCE-Net is to estimate a set of best-fitting light-enhancement curves(LE-curves) given an input image. The framework then maps all pixels of the inputโs RGBchannels by applying the curves iteratively to obtain the final enhanced image. Understanding light-enhancement curvesA ligh-enhancement curve is a kind of curve that can map a low-light imageto its enhanced version automatically,where the self-adaptive curve parameters are solely dependent on the input image.When designing such a curve, three objectives should be taken into account:- Each pixel value of the enhanced image should be in the normalized range `[0,1]`, in order toavoid information loss induced by overflow truncation.- It should be monotonous, to preserve the contrast between neighboring pixels.- The shape of this curve should be as simple as possible,and the curve should be differentiable to allow backpropagation.The light-enhancement curve is separately applied to three RGB channels instead of solely on theillumination channel. The three-channel adjustment can better preserve the inherent color and reducethe risk of over-saturation. DCE-NetThe DCE-Net is a lightweight deep neural network that learns the mapping between an inputimage and its best-fitting curve parameter maps. The input to the DCE-Net is a low-lightimage while the outputs are a set of pixel-wise curve parameter maps for correspondinghigher-order curves. It is a plain CNN of seven convolutional layers with symmetricalconcatenation. Each layer consists of 32 convolutional kernels of size 3ร3 and stride 1followed by the ReLU activation function. The last convolutional layer is followed by theTanh activation function, which produces 24 parameter maps for 8 iterations, where eachiteration requires three curve parameter maps for the three channels.<jupyter_code>def build_dce_net():
input_img = keras.Input(shape=[None, None, 3])
conv1 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(input_img)
conv2 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv1)
conv3 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv2)
conv4 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv3)
int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
conv5 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(int_con1)
int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
conv6 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(int_con2)
int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
x_r = layers.Conv2D(24, (3, 3), strides=(1, 1), activation="tanh", padding="same")(
int_con3
)
    return keras.Model(inputs=input_img, outputs=x_r)<jupyter_output><empty_output><jupyter_text>Loss functions. To enable zero-reference learning in DCE-Net, we use a set of differentiable zero-reference losses that allow us to evaluate the quality of enhanced images. Color constancy loss. The *color constancy loss* is used to correct the potential color deviations in the enhanced image.<jupyter_code>def color_constancy_loss(x):
mean_rgb = tf.reduce_mean(x, axis=(1, 2), keepdims=True)
mr, mg, mb = (
mean_rgb[:, :, :, 0],
mean_rgb[:, :, :, 1],
mean_rgb[:, :, :, 2],
)
d_rg = tf.square(mr - mg)
d_rb = tf.square(mr - mb)
d_gb = tf.square(mb - mg)
    return tf.sqrt(tf.square(d_rg) + tf.square(d_rb) + tf.square(d_gb))<jupyter_output><empty_output><jupyter_text>Exposure loss. To restrain under-/over-exposed regions, we use the *exposure control loss*. It measures the distance between the average intensity value of a local region and a preset well-exposedness level (set to `0.6`).<jupyter_code>def exposure_loss(x, mean_val=0.6):
x = tf.reduce_mean(x, axis=3, keepdims=True)
mean = tf.nn.avg_pool2d(x, ksize=16, strides=16, padding="VALID")
    return tf.reduce_mean(tf.square(mean - mean_val))<jupyter_output><empty_output><jupyter_text>Illumination smoothness loss. To preserve the monotonicity relations between neighboring pixels, the *illumination smoothness loss* is added to each curve parameter map.<jupyter_code>def illumination_smoothness_loss(x):
batch_size = tf.shape(x)[0]
h_x = tf.shape(x)[1]
w_x = tf.shape(x)[2]
count_h = (tf.shape(x)[2] - 1) * tf.shape(x)[3]
count_w = tf.shape(x)[2] * (tf.shape(x)[3] - 1)
h_tv = tf.reduce_sum(tf.square((x[:, 1:, :, :] - x[:, : h_x - 1, :, :])))
w_tv = tf.reduce_sum(tf.square((x[:, :, 1:, :] - x[:, :, : w_x - 1, :])))
batch_size = tf.cast(batch_size, dtype=tf.float32)
count_h = tf.cast(count_h, dtype=tf.float32)
count_w = tf.cast(count_w, dtype=tf.float32)
    return 2 * (h_tv / count_h + w_tv / count_w) / batch_size<jupyter_output><empty_output><jupyter_text>Spatial consistency loss. The *spatial consistency loss* encourages spatial coherence of the enhanced image by preserving the contrast between neighboring regions across the input image and its enhanced version.<jupyter_code>class SpatialConsistencyLoss(keras.losses.Loss):
def __init__(self, **kwargs):
super().__init__(reduction="none")
self.left_kernel = tf.constant(
[[[[0, 0, 0]], [[-1, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32
)
self.right_kernel = tf.constant(
[[[[0, 0, 0]], [[0, 1, -1]], [[0, 0, 0]]]], dtype=tf.float32
)
self.up_kernel = tf.constant(
[[[[0, -1, 0]], [[0, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32
)
self.down_kernel = tf.constant(
[[[[0, 0, 0]], [[0, 1, 0]], [[0, -1, 0]]]], dtype=tf.float32
)
def call(self, y_true, y_pred):
original_mean = tf.reduce_mean(y_true, 3, keepdims=True)
enhanced_mean = tf.reduce_mean(y_pred, 3, keepdims=True)
original_pool = tf.nn.avg_pool2d(
original_mean, ksize=4, strides=4, padding="VALID"
)
enhanced_pool = tf.nn.avg_pool2d(
enhanced_mean, ksize=4, strides=4, padding="VALID"
)
d_original_left = tf.nn.conv2d(
original_pool,
self.left_kernel,
strides=[1, 1, 1, 1],
padding="SAME",
)
d_original_right = tf.nn.conv2d(
original_pool,
self.right_kernel,
strides=[1, 1, 1, 1],
padding="SAME",
)
d_original_up = tf.nn.conv2d(
original_pool, self.up_kernel, strides=[1, 1, 1, 1], padding="SAME"
)
d_original_down = tf.nn.conv2d(
original_pool,
self.down_kernel,
strides=[1, 1, 1, 1],
padding="SAME",
)
d_enhanced_left = tf.nn.conv2d(
enhanced_pool,
self.left_kernel,
strides=[1, 1, 1, 1],
padding="SAME",
)
d_enhanced_right = tf.nn.conv2d(
enhanced_pool,
self.right_kernel,
strides=[1, 1, 1, 1],
padding="SAME",
)
d_enhanced_up = tf.nn.conv2d(
enhanced_pool, self.up_kernel, strides=[1, 1, 1, 1], padding="SAME"
)
d_enhanced_down = tf.nn.conv2d(
enhanced_pool,
self.down_kernel,
strides=[1, 1, 1, 1],
padding="SAME",
)
d_left = tf.square(d_original_left - d_enhanced_left)
d_right = tf.square(d_original_right - d_enhanced_right)
d_up = tf.square(d_original_up - d_enhanced_up)
d_down = tf.square(d_original_down - d_enhanced_down)
        return d_left + d_right + d_up + d_down<jupyter_output><empty_output><jupyter_text>Deep curve estimation model. We implement the Zero-DCE framework as a Keras subclassed model.<jupyter_code>class ZeroDCE(keras.Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.dce_model = build_dce_net()
def compile(self, learning_rate, **kwargs):
super().compile(**kwargs)
self.optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
self.spatial_constancy_loss = SpatialConsistencyLoss(reduction="none")
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.illumination_smoothness_loss_tracker = keras.metrics.Mean(
name="illumination_smoothness_loss"
)
self.spatial_constancy_loss_tracker = keras.metrics.Mean(
name="spatial_constancy_loss"
)
self.color_constancy_loss_tracker = keras.metrics.Mean(
name="color_constancy_loss"
)
self.exposure_loss_tracker = keras.metrics.Mean(name="exposure_loss")
@property
def metrics(self):
return [
self.total_loss_tracker,
self.illumination_smoothness_loss_tracker,
self.spatial_constancy_loss_tracker,
self.color_constancy_loss_tracker,
self.exposure_loss_tracker,
]
def get_enhanced_image(self, data, output):
r1 = output[:, :, :, :3]
r2 = output[:, :, :, 3:6]
r3 = output[:, :, :, 6:9]
r4 = output[:, :, :, 9:12]
r5 = output[:, :, :, 12:15]
r6 = output[:, :, :, 15:18]
r7 = output[:, :, :, 18:21]
r8 = output[:, :, :, 21:24]
x = data + r1 * (tf.square(data) - data)
x = x + r2 * (tf.square(x) - x)
x = x + r3 * (tf.square(x) - x)
enhanced_image = x + r4 * (tf.square(x) - x)
x = enhanced_image + r5 * (tf.square(enhanced_image) - enhanced_image)
x = x + r6 * (tf.square(x) - x)
x = x + r7 * (tf.square(x) - x)
enhanced_image = x + r8 * (tf.square(x) - x)
return enhanced_image
def call(self, data):
dce_net_output = self.dce_model(data)
return self.get_enhanced_image(data, dce_net_output)
def compute_losses(self, data, output):
enhanced_image = self.get_enhanced_image(data, output)
loss_illumination = 200 * illumination_smoothness_loss(output)
loss_spatial_constancy = tf.reduce_mean(
self.spatial_constancy_loss(enhanced_image, data)
)
loss_color_constancy = 5 * tf.reduce_mean(color_constancy_loss(enhanced_image))
loss_exposure = 10 * tf.reduce_mean(exposure_loss(enhanced_image))
total_loss = (
loss_illumination
+ loss_spatial_constancy
+ loss_color_constancy
+ loss_exposure
)
return {
"total_loss": total_loss,
"illumination_smoothness_loss": loss_illumination,
"spatial_constancy_loss": loss_spatial_constancy,
"color_constancy_loss": loss_color_constancy,
"exposure_loss": loss_exposure,
}
def train_step(self, data):
with tf.GradientTape() as tape:
output = self.dce_model(data)
losses = self.compute_losses(data, output)
gradients = tape.gradient(
losses["total_loss"], self.dce_model.trainable_weights
)
self.optimizer.apply_gradients(zip(gradients, self.dce_model.trainable_weights))
self.total_loss_tracker.update_state(losses["total_loss"])
self.illumination_smoothness_loss_tracker.update_state(
losses["illumination_smoothness_loss"]
)
self.spatial_constancy_loss_tracker.update_state(
losses["spatial_constancy_loss"]
)
self.color_constancy_loss_tracker.update_state(losses["color_constancy_loss"])
self.exposure_loss_tracker.update_state(losses["exposure_loss"])
return {metric.name: metric.result() for metric in self.metrics}
def test_step(self, data):
output = self.dce_model(data)
losses = self.compute_losses(data, output)
self.total_loss_tracker.update_state(losses["total_loss"])
self.illumination_smoothness_loss_tracker.update_state(
losses["illumination_smoothness_loss"]
)
self.spatial_constancy_loss_tracker.update_state(
losses["spatial_constancy_loss"]
)
self.color_constancy_loss_tracker.update_state(losses["color_constancy_loss"])
self.exposure_loss_tracker.update_state(losses["exposure_loss"])
return {metric.name: metric.result() for metric in self.metrics}
def save_weights(self, filepath, overwrite=True, save_format=None, options=None):
"""While saving the weights, we simply save the weights of the DCE-Net"""
self.dce_model.save_weights(
filepath,
overwrite=overwrite,
save_format=save_format,
options=options,
)
def load_weights(self, filepath, by_name=False, skip_mismatch=False, options=None):
"""While loading the weights, we simply load the weights of the DCE-Net"""
self.dce_model.load_weights(
filepath=filepath,
by_name=by_name,
skip_mismatch=skip_mismatch,
options=options,
)<jupyter_output><empty_output><jupyter_text>Training<jupyter_code>zero_dce_model = ZeroDCE()
zero_dce_model.compile(learning_rate=1e-4)
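# Illustrative aside (not part of the original notebook): one light-enhancement curve step.
# The framework section above describes per-pixel curves of the form LE(x) = x + alpha * (x^2 - x),
# applied iteratively for each RGB channel (this is what `get_enhanced_image` does with the 24
# predicted parameter maps). A quick check on toy pixel values shows that, for alpha in [-1, 1],
# the curve keeps values inside the normalized [0, 1] range:
toy_pixels = tf.constant([0.0, 0.1, 0.5, 0.9, 1.0])
for toy_alpha in (-1.0, 0.0, 1.0):
    toy_curved = toy_pixels + toy_alpha * (tf.square(toy_pixels) - toy_pixels)
    print(f"alpha={toy_alpha:+.1f} ->", np.round(toy_curved.numpy(), 3))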
history = zero_dce_model.fit(train_dataset, validation_data=val_dataset, epochs=100)
def plot_result(item):
plt.plot(history.history[item], label=item)
plt.plot(history.history["val_" + item], label="val_" + item)
plt.xlabel("Epochs")
plt.ylabel(item)
plt.title("Train and Validation {} Over Epochs".format(item), fontsize=14)
plt.legend()
plt.grid()
plt.show()
plot_result("total_loss")
plot_result("illumination_smoothness_loss")
plot_result("spatial_constancy_loss")
plot_result("color_constancy_loss")
plot_result("exposure_loss")<jupyter_output><empty_output><jupyter_text>Inference<jupyter_code>def plot_results(images, titles, figure_size=(12, 12)):
fig = plt.figure(figsize=figure_size)
for i in range(len(images)):
fig.add_subplot(1, len(images), i + 1).set_title(titles[i])
_ = plt.imshow(images[i])
plt.axis("off")
plt.show()
def infer(original_image):
image = keras.utils.img_to_array(original_image)
image = image.astype("float32") / 255.0
image = np.expand_dims(image, axis=0)
output_image = zero_dce_model(image)
output_image = tf.cast((output_image[0, :, :, :] * 255), dtype=np.uint8)
output_image = Image.fromarray(output_image.numpy())
    return output_image<jupyter_output><empty_output><jupyter_text>Inference on test images. We compare the test images from the LoL Dataset enhanced by Zero-DCE with images enhanced via the `PIL.ImageOps.autocontrast()` function. You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/low-light-image-enhancement) and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/low-light-image-enhancement).<jupyter_code>for val_image_file in test_low_light_images:
original_image = Image.open(val_image_file)
enhanced_image = infer(original_image)
plot_results(
[original_image, ImageOps.autocontrast(original_image), enhanced_image],
["Original", "PIL Autocontrast", "Enhanced"],
(20, 12),
)<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/zero_dce.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/zero_dce.ipynb",
"repo_id": "keras-io",
"token_count": 7844
} | 102 |
# Next-Frame Video Prediction with Convolutional LSTMs
**Author:** [Amogh Joshi](https://github.com/amogh7joshi)<br>
**Date created:** 2021/06/02<br>
**Last modified:** 2023/11/10<br>
**Description:** How to build and train a convolutional LSTM model for next-frame video prediction.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/conv_lstm.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/conv_lstm.py)
---
## Introduction
The
[Convolutional LSTM](https://papers.nips.cc/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf)
architectures bring together time series processing and computer vision by
introducing a convolutional recurrent cell in a LSTM layer. In this example, we will explore the
Convolutional LSTM model in an application to next-frame prediction, the process
of predicting what video frames come next given a series of past frames.
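Before diving in, the following standalone sketch (not part of the tutorial pipeline; the layer
sizes are arbitrary) shows the basic contract of the `ConvLSTM2D` layer this example is built
around: it consumes a 5D batch of frame sequences and, with `return_sequences=True`, returns one
feature map per input frame.
```python
import numpy as np
import keras
from keras import layers

# A toy batch: 2 sequences of 5 frames, each frame 16x16 with a single channel.
toy_frames = np.random.rand(2, 5, 16, 16, 1).astype("float32")

# One convolutional-recurrent layer; the filter count and kernel size are arbitrary here.
conv_lstm = layers.ConvLSTM2D(
    filters=8, kernel_size=(3, 3), padding="same", return_sequences=True
)

# (2, 5, 16, 16, 8): an 8-channel feature map for every frame in every sequence.
print(conv_lstm(toy_frames).shape)
```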
---
## Setup
```python
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras import layers
import io
import imageio
from IPython.display import Image, display
from ipywidgets import widgets, Layout, HBox
```
---
## Dataset Construction
For this example, we will be using the
[Moving MNIST](http://www.cs.toronto.edu/~nitish/unsupervised_video/)
dataset.
We will download the dataset and then construct and
preprocess training and validation sets.
For next-frame prediction, our model will be using a previous frame,
which we'll call `f_n`, to predict a new frame, called `f_(n + 1)`.
To allow the model to create these predictions, we'll need to process
the data such that we have "shifted" inputs and outputs, where the
input data is frame `x_n`, being used to predict frame `y_(n + 1)`.
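As a minimal illustration of this shifting (toy values only; the actual helper below operates on
the full 5D frame arrays):
```python
import numpy as np

toy_sequence = np.arange(5)  # stands in for frames f_0 ... f_4
x_toy, y_toy = toy_sequence[:-1], toy_sequence[1:]
print(x_toy)  # [0 1 2 3] -> input frames
print(y_toy)  # [1 2 3 4] -> target frames (each input frame is paired with the next frame)
```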
```python
# Download and load the dataset.
fpath = keras.utils.get_file(
"moving_mnist.npy",
"http://www.cs.toronto.edu/~nitish/unsupervised_video/mnist_test_seq.npy",
)
dataset = np.load(fpath)
# Swap the axes representing the number of frames and number of data samples.
dataset = np.swapaxes(dataset, 0, 1)
# We'll pick out 1000 of the 10000 total examples and use those.
dataset = dataset[:1000, ...]
# Add a channel dimension since the images are grayscale.
dataset = np.expand_dims(dataset, axis=-1)
# Split into train and validation sets using indexing to optimize memory.
indexes = np.arange(dataset.shape[0])
np.random.shuffle(indexes)
train_index = indexes[: int(0.9 * dataset.shape[0])]
val_index = indexes[int(0.9 * dataset.shape[0]) :]
train_dataset = dataset[train_index]
val_dataset = dataset[val_index]
# Normalize the data to the 0-1 range.
train_dataset = train_dataset / 255
val_dataset = val_dataset / 255
# We'll define a helper function to shift the frames, where
# `x` is frames 0 to n - 1, and `y` is frames 1 to n.
def create_shifted_frames(data):
x = data[:, 0 : data.shape[1] - 1, :, :]
y = data[:, 1 : data.shape[1], :, :]
return x, y
# Apply the processing function to the datasets.
x_train, y_train = create_shifted_frames(train_dataset)
x_val, y_val = create_shifted_frames(val_dataset)
# Inspect the dataset.
print("Training Dataset Shapes: " + str(x_train.shape) + ", " + str(y_train.shape))
print("Validation Dataset Shapes: " + str(x_val.shape) + ", " + str(y_val.shape))
```
<div class="k-default-codeblock">
```
Downloading data from http://www.cs.toronto.edu/~nitish/unsupervised_video/mnist_test_seq.npy
819200096/819200096 โโโโโโโโโโโโโโโโโโโโ 116s 0us/step
Training Dataset Shapes: (900, 19, 64, 64, 1), (900, 19, 64, 64, 1)
Validation Dataset Shapes: (100, 19, 64, 64, 1), (100, 19, 64, 64, 1)
```
</div>
---
## Data Visualization
Our data consists of sequences of frames, each of which
are used to predict the upcoming frame. Let's take a look
at some of these sequential frames.
```python
# Construct a figure on which we will visualize the images.
fig, axes = plt.subplots(4, 5, figsize=(10, 8))
# Plot each of the sequential images for one random data example.
data_choice = np.random.choice(range(len(train_dataset)), size=1)[0]
for idx, ax in enumerate(axes.flat):
ax.imshow(np.squeeze(train_dataset[data_choice][idx]), cmap="gray")
ax.set_title(f"Frame {idx + 1}")
ax.axis("off")
# Print information and display the figure.
print(f"Displaying frames for example {data_choice}.")
plt.show()
```
<div class="k-default-codeblock">
```
Displaying frames for example 95.
```
</div>

---
## Model Construction
To build a Convolutional LSTM model, we will use the
`ConvLSTM2D` layer, which will accept inputs of shape
`(batch_size, num_frames, width, height, channels)`, and return
a prediction movie of the same shape.
```python
# Construct the input layer with no definite frame size.
inp = layers.Input(shape=(None, *x_train.shape[2:]))
# We will construct 3 `ConvLSTM2D` layers with batch normalization,
# followed by a `Conv3D` layer for the spatiotemporal outputs.
x = layers.ConvLSTM2D(
filters=64,
kernel_size=(5, 5),
padding="same",
return_sequences=True,
activation="relu",
)(inp)
x = layers.BatchNormalization()(x)
x = layers.ConvLSTM2D(
filters=64,
kernel_size=(3, 3),
padding="same",
return_sequences=True,
activation="relu",
)(x)
x = layers.BatchNormalization()(x)
x = layers.ConvLSTM2D(
filters=64,
kernel_size=(1, 1),
padding="same",
return_sequences=True,
activation="relu",
)(x)
x = layers.Conv3D(
filters=1, kernel_size=(3, 3, 3), activation="sigmoid", padding="same"
)(x)
# Next, we will build the complete model and compile it.
model = keras.models.Model(inp, x)
model.compile(
loss=keras.losses.binary_crossentropy,
optimizer=keras.optimizers.Adam(),
)
```
---
## Model Training
With our model and data constructed, we can now train the model.
```python
# Define some callbacks to improve training.
early_stopping = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss", patience=5)
# Define modifiable training hyperparameters.
epochs = 20
batch_size = 5
# Fit the model to the training data.
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_val, y_val),
callbacks=[early_stopping, reduce_lr],
)
```
<div class="k-default-codeblock">
```
Epoch 1/20
180/180 โโโโโโโโโโโโโโโโโโโโ 50s 226ms/step - loss: 0.1510 - val_loss: 0.2966 - learning_rate: 0.0010
Epoch 2/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0287 - val_loss: 0.1766 - learning_rate: 0.0010
Epoch 3/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0269 - val_loss: 0.0661 - learning_rate: 0.0010
Epoch 4/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0264 - val_loss: 0.0279 - learning_rate: 0.0010
Epoch 5/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0258 - val_loss: 0.0254 - learning_rate: 0.0010
Epoch 6/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0256 - val_loss: 0.0253 - learning_rate: 0.0010
Epoch 7/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0251 - val_loss: 0.0248 - learning_rate: 0.0010
Epoch 8/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0251 - val_loss: 0.0251 - learning_rate: 0.0010
Epoch 9/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0247 - val_loss: 0.0243 - learning_rate: 0.0010
Epoch 10/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0246 - val_loss: 0.0246 - learning_rate: 0.0010
Epoch 11/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0245 - val_loss: 0.0247 - learning_rate: 0.0010
Epoch 12/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0241 - val_loss: 0.0243 - learning_rate: 0.0010
Epoch 13/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0244 - val_loss: 0.0245 - learning_rate: 0.0010
Epoch 14/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0241 - val_loss: 0.0241 - learning_rate: 0.0010
Epoch 15/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0243 - val_loss: 0.0241 - learning_rate: 0.0010
Epoch 16/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0242 - val_loss: 0.0242 - learning_rate: 0.0010
Epoch 17/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0240 - val_loss: 0.0240 - learning_rate: 0.0010
Epoch 18/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0240 - val_loss: 0.0243 - learning_rate: 0.0010
Epoch 19/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0240 - val_loss: 0.0244 - learning_rate: 0.0010
Epoch 20/20
180/180 โโโโโโโโโโโโโโโโโโโโ 40s 219ms/step - loss: 0.0237 - val_loss: 0.0238 - learning_rate: 1.0000e-04
<keras.src.callbacks.history.History at 0x7ff294f9c340>
```
</div>
---
## Frame Prediction Visualizations
With our model now constructed and trained, we can generate
some example frame predictions based on a new video.
We'll pick a random example from the validation set and
then choose the first ten frames from them. From there, we can
allow the model to predict 10 new frames, which we can compare
to the ground truth frame predictions.
```python
# Select a random example from the validation dataset.
example = val_dataset[np.random.choice(range(len(val_dataset)), size=1)[0]]
# Pick the first/last ten frames from the example.
frames = example[:10, ...]
original_frames = example[10:, ...]
# Predict a new set of 10 frames.
for _ in range(10):
# Extract the model's prediction and post-process it.
new_prediction = model.predict(np.expand_dims(frames, axis=0))
new_prediction = np.squeeze(new_prediction, axis=0)
predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0)
# Extend the set of prediction frames.
frames = np.concatenate((frames, predicted_frame), axis=0)
# Construct a figure for the original and new frames.
fig, axes = plt.subplots(2, 10, figsize=(20, 4))
# Plot the original frames.
for idx, ax in enumerate(axes[0]):
ax.imshow(np.squeeze(original_frames[idx]), cmap="gray")
ax.set_title(f"Frame {idx + 11}")
ax.axis("off")
# Plot the new frames.
new_frames = frames[10:, ...]
for idx, ax in enumerate(axes[1]):
ax.imshow(np.squeeze(new_frames[idx]), cmap="gray")
ax.set_title(f"Frame {idx + 11}")
ax.axis("off")
# Display the figure.
plt.show()
```
<div class="k-default-codeblock">
```
1/1 โโโโโโโโโโโโโโโโโโโโ 2s 2s/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 800ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 805ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 790ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 821ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 824ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 928ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 813ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 810ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 814ms/step
```
</div>

---
## Predicted Videos
Finally, we'll pick a few examples from the validation set
and construct some GIFs with them to see the model's
predicted videos.
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/conv-lstm)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/conv-lstm).
```python
# Select a few random examples from the dataset.
examples = val_dataset[np.random.choice(range(len(val_dataset)), size=5)]
# Iterate over the examples and predict the frames.
predicted_videos = []
for example in examples:
# Pick the first/last ten frames from the example.
frames = example[:10, ...]
original_frames = example[10:, ...]
new_predictions = np.zeros(shape=(10, *frames[0].shape))
# Predict a new set of 10 frames.
for i in range(10):
# Extract the model's prediction and post-process it.
frames = example[: 10 + i + 1, ...]
new_prediction = model.predict(np.expand_dims(frames, axis=0))
new_prediction = np.squeeze(new_prediction, axis=0)
predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0)
# Extend the set of prediction frames.
new_predictions[i] = predicted_frame
# Create and save GIFs for each of the ground truth/prediction images.
for frame_set in [original_frames, new_predictions]:
# Construct a GIF from the selected video frames.
current_frames = np.squeeze(frame_set)
current_frames = current_frames[..., np.newaxis] * np.ones(3)
current_frames = (current_frames * 255).astype(np.uint8)
current_frames = list(current_frames)
# Construct a GIF from the frames.
with io.BytesIO() as gif:
imageio.mimsave(gif, current_frames, "GIF", duration=200)
predicted_videos.append(gif.getvalue())
# Display the videos.
print(" Truth\tPrediction")
for i in range(0, len(predicted_videos), 2):
# Construct and display an `HBox` with the ground truth and prediction.
box = HBox(
[
widgets.Image(value=predicted_videos[i]),
widgets.Image(value=predicted_videos[i + 1]),
]
)
display(box)
```
<div class="k-default-codeblock">
```
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 1s 790ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 9ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 5ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 6ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 7ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 10ms/step
Truth Prediction
HBox(children=(Image(value=b'GIF89a@\x00@\x00\x87\x00\x00\xff\xff\xff\xfe\xfe\xfe\xfd\xfd\xfd\xfc\xfc\xfc\xf8\โฆ
HBox(children=(Image(value=b'GIF89a@\x00@\x00\x86\x00\x00\xff\xff\xff\xfd\xfd\xfd\xfc\xfc\xfc\xfb\xfb\xfb\xf4\โฆ
HBox(children=(Image(value=b'GIF89a@\x00@\x00\x86\x00\x00\xff\xff\xff\xfe\xfe\xfe\xfd\xfd\xfd\xfc\xfc\xfc\xfb\โฆ
HBox(children=(Image(value=b'GIF89a@\x00@\x00\x86\x00\x00\xff\xff\xff\xfe\xfe\xfe\xfd\xfd\xfd\xfc\xfc\xfc\xfb\โฆ
HBox(children=(Image(value=b'GIF89a@\x00@\x00\x86\x00\x00\xff\xff\xff\xfd\xfd\xfd\xfc\xfc\xfc\xf9\xf9\xf9\xf7\โฆ
```
</div> | keras-io/examples/vision/md/conv_lstm.md/0 | {
"file_path": "keras-io/examples/vision/md/conv_lstm.md",
"repo_id": "keras-io",
"token_count": 7353
} | 103 |
# MobileViT: A mobile-friendly Transformer-based model for image classification
**Author:** [Sayak Paul](https://twitter.com/RisingSayak)<br>
**Date created:** 2021/10/20<br>
**Last modified:** 2024/02/11<br>
**Description:** MobileViT for image classification with combined benefits of convolutions and Transformers.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/mobilevit.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/mobilevit.py)
---
## Introduction
In this example, we implement the MobileViT architecture
([Mehta et al.](https://arxiv.org/abs/2110.02178)),
which combines the benefits of Transformers
([Vaswani et al.](https://arxiv.org/abs/1706.03762))
and convolutions. With Transformers, we can capture long-range dependencies that result
in global representations. With convolutions, we can capture spatial relationships that
model locality.
Besides combining the properties of Transformers and convolutions, the authors introduce
MobileViT as a general-purpose mobile-friendly backbone for different image recognition
tasks. Their findings suggest that, performance-wise, MobileViT is better than other
models with the same or higher complexity ([MobileNetV3](https://arxiv.org/abs/1905.02244),
for example), while being efficient on mobile devices.
Note: This example should be run with Tensorflow 2.13 and higher.
---
## Imports
```python
import os
import tensorflow as tf
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import layers
from keras import backend
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
```
---
## Hyperparameters
```python
# Values are from table 4.
patch_size = 4 # 2x2, for the Transformer blocks.
image_size = 256
expansion_factor = 2 # expansion factor for the MobileNetV2 blocks.
```
---
## MobileViT utilities
The MobileViT architecture is comprised of the following blocks:
* Strided 3x3 convolutions that process the input image.
* [MobileNetV2](https://arxiv.org/abs/1801.04381)-style inverted residual blocks for
downsampling the resolution of the intermediate feature maps.
* MobileViT blocks that combine the benefits of Transformers and convolutions. It is
presented in the figure below (taken from the
[original paper](https://arxiv.org/abs/2110.02178)):

```python
def conv_block(x, filters=16, kernel_size=3, strides=2):
conv_layer = layers.Conv2D(
filters,
kernel_size,
strides=strides,
activation=keras.activations.swish,
padding="same",
)
return conv_layer(x)
# Reference: https://github.com/keras-team/keras/blob/e3858739d178fe16a0c77ce7fab88b0be6dbbdc7/keras/applications/imagenet_utils.py#L413C17-L435
def correct_pad(inputs, kernel_size):
img_dim = 2 if backend.image_data_format() == "channels_first" else 1
input_size = inputs.shape[img_dim : (img_dim + 2)]
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size)
if input_size[0] is None:
adjust = (1, 1)
else:
adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
correct = (kernel_size[0] // 2, kernel_size[1] // 2)
return (
(correct[0] - adjust[0], correct[0]),
(correct[1] - adjust[1], correct[1]),
)
# Reference: https://git.io/JKgtC
def inverted_residual_block(x, expanded_channels, output_channels, strides=1):
m = layers.Conv2D(expanded_channels, 1, padding="same", use_bias=False)(x)
m = layers.BatchNormalization()(m)
m = keras.activations.swish(m)
if strides == 2:
m = layers.ZeroPadding2D(padding=correct_pad(m, 3))(m)
m = layers.DepthwiseConv2D(
3, strides=strides, padding="same" if strides == 1 else "valid", use_bias=False
)(m)
m = layers.BatchNormalization()(m)
m = keras.activations.swish(m)
m = layers.Conv2D(output_channels, 1, padding="same", use_bias=False)(m)
m = layers.BatchNormalization()(m)
if keras.ops.equal(x.shape[-1], output_channels) and strides == 1:
return layers.Add()([m, x])
return m
# Reference:
# https://keras.io/examples/vision/image_classification_with_vision_transformer/
def mlp(x, hidden_units, dropout_rate):
for units in hidden_units:
x = layers.Dense(units, activation=keras.activations.swish)(x)
x = layers.Dropout(dropout_rate)(x)
return x
def transformer_block(x, transformer_layers, projection_dim, num_heads=2):
for _ in range(transformer_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(x)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=projection_dim, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, x])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# MLP.
x3 = mlp(
x3,
hidden_units=[x.shape[-1] * 2, x.shape[-1]],
dropout_rate=0.1,
)
# Skip connection 2.
x = layers.Add()([x3, x2])
return x
def mobilevit_block(x, num_blocks, projection_dim, strides=1):
# Local projection with convolutions.
local_features = conv_block(x, filters=projection_dim, strides=strides)
local_features = conv_block(
local_features, filters=projection_dim, kernel_size=1, strides=strides
)
# Unfold into patches and then pass through Transformers.
num_patches = int((local_features.shape[1] * local_features.shape[2]) / patch_size)
non_overlapping_patches = layers.Reshape((patch_size, num_patches, projection_dim))(
local_features
)
global_features = transformer_block(
non_overlapping_patches, num_blocks, projection_dim
)
# Fold into conv-like feature-maps.
folded_feature_map = layers.Reshape((*local_features.shape[1:-1], projection_dim))(
global_features
)
# Apply point-wise conv -> concatenate with the input features.
folded_feature_map = conv_block(
folded_feature_map, filters=x.shape[-1], kernel_size=1, strides=strides
)
local_global_features = layers.Concatenate(axis=-1)([x, folded_feature_map])
    # Fuse the local and global features using a convolution layer.
local_global_features = conv_block(
local_global_features, filters=projection_dim, strides=strides
)
return local_global_features
```
**More on the MobileViT block**:
* First, the feature representations (A) go through convolution blocks that capture local
relationships. The expected shape of a single entry here would be `(h, w, num_channels)`.
* Then they get unfolded into another vector with shape `(p, n, num_channels)`,
where `p` is the area of a small patch, and `n` is `(h * w) / p`. So, we end up with `n`
non-overlapping patches.
* This unfolded vector is then passed through a Transformer block that captures global
relationships between the patches.
* The output vector (B) is again folded into a vector of shape `(h, w, num_channels)`
resembling a feature map coming out of convolutions.
Vectors A and B are then passed through two more convolutional layers to fuse the local
and global representations. Notice how the spatial resolution of the final vector remains
unchanged at this point. The authors also present an explanation of how the MobileViT
block resembles a convolution block of a CNN. For more details, please refer to the
original paper.
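To make the unfold/fold bookkeeping concrete, here is a small standalone sketch (the sizes are
arbitrary; it simply mirrors the two `Reshape` calls in `mobilevit_block` above rather than adding
anything new):

```python
import numpy as np

h = w = 8           # spatial size of the local feature map
num_channels = 64   # projection_dim
p = 4               # patch area (2x2 patches), i.e. `patch_size` in this example
n = (h * w) // p    # number of non-overlapping patches

features = np.random.rand(1, h, w, num_channels)     # (batch, h, w, c): output of the local convolutions
unfolded = features.reshape(1, p, n, num_channels)   # (batch, p, n, c): what the Transformer blocks see
refolded = unfolded.reshape(1, h, w, num_channels)   # folded back into a conv-style feature map
print(unfolded.shape, refolded.shape)                # (1, 4, 16, 64) (1, 8, 8, 64)
```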
Next, we combine these blocks together and implement the MobileViT architecture (XXS
variant). The following figure (taken from the original paper) presents a schematic
representation of the architecture:

```python
def create_mobilevit(num_classes=5):
inputs = keras.Input((image_size, image_size, 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
# Initial conv-stem -> MV2 block.
x = conv_block(x, filters=16)
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=16
)
# Downsampling with MV2 block.
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=24, strides=2
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
# First MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=48, strides=2
)
x = mobilevit_block(x, num_blocks=2, projection_dim=64)
# Second MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=64 * expansion_factor, output_channels=64, strides=2
)
x = mobilevit_block(x, num_blocks=4, projection_dim=80)
# Third MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=80 * expansion_factor, output_channels=80, strides=2
)
x = mobilevit_block(x, num_blocks=3, projection_dim=96)
x = conv_block(x, filters=320, kernel_size=1, strides=1)
# Classification head.
x = layers.GlobalAvgPool2D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs, outputs)
mobilevit_xxs = create_mobilevit()
mobilevit_xxs.summary()
```
<div class="k-default-codeblock">
```
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 256, 256, 3) 0
__________________________________________________________________________________________________
rescaling (Rescaling) (None, 256, 256, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 128, 128, 16) 448 rescaling[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 32) 512 conv2d[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 128, 128, 32) 128 conv2d_1[0][0]
__________________________________________________________________________________________________
tf.nn.silu (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
depthwise_conv2d (DepthwiseConv (None, 128, 128, 32) 288 tf.nn.silu[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 128, 128, 32) 128 depthwise_conv2d[0][0]
__________________________________________________________________________________________________
tf.nn.silu_1 (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 128, 128, 16) 512 tf.nn.silu_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 128, 128, 16) 64 conv2d_2[0][0]
__________________________________________________________________________________________________
add (Add) (None, 128, 128, 16) 0 batch_normalization_2[0][0]
conv2d[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 128, 32) 512 add[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 128, 128, 32) 128 conv2d_3[0][0]
__________________________________________________________________________________________________
tf.nn.silu_2 (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
zero_padding2d (ZeroPadding2D) (None, 129, 129, 32) 0 tf.nn.silu_2[0][0]
__________________________________________________________________________________________________
depthwise_conv2d_1 (DepthwiseCo (None, 64, 64, 32) 288 zero_padding2d[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 64, 64, 32) 128 depthwise_conv2d_1[0][0]
__________________________________________________________________________________________________
tf.nn.silu_3 (TFOpLambda) (None, 64, 64, 32) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 64, 64, 24) 768 tf.nn.silu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 64, 64, 24) 96 conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 64, 64, 48) 1152 batch_normalization_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 64, 64, 48) 192 conv2d_5[0][0]
__________________________________________________________________________________________________
tf.nn.silu_4 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
depthwise_conv2d_2 (DepthwiseCo (None, 64, 64, 48) 432 tf.nn.silu_4[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 64, 64, 48) 192 depthwise_conv2d_2[0][0]
__________________________________________________________________________________________________
tf.nn.silu_5 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 64, 64, 24) 1152 tf.nn.silu_5[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 64, 64, 24) 96 conv2d_6[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 64, 64, 24) 0 batch_normalization_8[0][0]
batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 64, 64, 48) 1152 add_1[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 64, 64, 48) 192 conv2d_7[0][0]
__________________________________________________________________________________________________
tf.nn.silu_6 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
depthwise_conv2d_3 (DepthwiseCo (None, 64, 64, 48) 432 tf.nn.silu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 64, 64, 48) 192 depthwise_conv2d_3[0][0]
__________________________________________________________________________________________________
tf.nn.silu_7 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 64, 64, 24) 1152 tf.nn.silu_7[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 64, 64, 24) 96 conv2d_8[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 64, 64, 24) 0 batch_normalization_11[0][0]
add_1[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 64, 64, 48) 1152 add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 64, 64, 48) 192 conv2d_9[0][0]
__________________________________________________________________________________________________
tf.nn.silu_8 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, 65, 65, 48) 0 tf.nn.silu_8[0][0]
__________________________________________________________________________________________________
depthwise_conv2d_4 (DepthwiseCo (None, 32, 32, 48) 432 zero_padding2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 32, 32, 48) 192 depthwise_conv2d_4[0][0]
__________________________________________________________________________________________________
tf.nn.silu_9 (TFOpLambda) (None, 32, 32, 48) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 32, 32, 48) 2304 tf.nn.silu_9[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 32, 32, 48) 192 conv2d_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 32, 32, 64) 27712 batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 32, 64) 4160 conv2d_11[0][0]
__________________________________________________________________________________________________
reshape (Reshape) (None, 4, 256, 64) 0 conv2d_12[0][0]
__________________________________________________________________________________________________
layer_normalization (LayerNorma (None, 4, 256, 64) 128 reshape[0][0]
__________________________________________________________________________________________________
multi_head_attention (MultiHead (None, 4, 256, 64) 33216 layer_normalization[0][0]
layer_normalization[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 4, 256, 64) 0 multi_head_attention[0][0]
reshape[0][0]
__________________________________________________________________________________________________
layer_normalization_1 (LayerNor (None, 4, 256, 64) 128 add_3[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 4, 256, 128) 8320 layer_normalization_1[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 4, 256, 128) 0 dense[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 4, 256, 64) 8256 dropout[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 4, 256, 64) 0 dense_1[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 4, 256, 64) 0 dropout_1[0][0]
add_3[0][0]
__________________________________________________________________________________________________
layer_normalization_2 (LayerNor (None, 4, 256, 64) 128 add_4[0][0]
__________________________________________________________________________________________________
multi_head_attention_1 (MultiHe (None, 4, 256, 64) 33216 layer_normalization_2[0][0]
layer_normalization_2[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 4, 256, 64) 0 multi_head_attention_1[0][0]
add_4[0][0]
__________________________________________________________________________________________________
layer_normalization_3 (LayerNor (None, 4, 256, 64) 128 add_5[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 4, 256, 128) 8320 layer_normalization_3[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 4, 256, 128) 0 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 4, 256, 64) 8256 dropout_2[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 4, 256, 64) 0 dense_3[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 4, 256, 64) 0 dropout_3[0][0]
add_5[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 32, 32, 64) 0 add_6[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 32, 32, 48) 3120 reshape_1[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 32, 32, 96) 0 batch_normalization_14[0][0]
conv2d_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 32, 32, 64) 55360 concatenate[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 32, 32, 128) 8192 conv2d_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 32, 32, 128) 512 conv2d_15[0][0]
__________________________________________________________________________________________________
tf.nn.silu_10 (TFOpLambda) (None, 32, 32, 128) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
zero_padding2d_2 (ZeroPadding2D (None, 33, 33, 128) 0 tf.nn.silu_10[0][0]
__________________________________________________________________________________________________
depthwise_conv2d_5 (DepthwiseCo (None, 16, 16, 128) 1152 zero_padding2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 16, 16, 128) 512 depthwise_conv2d_5[0][0]
__________________________________________________________________________________________________
tf.nn.silu_11 (TFOpLambda) (None, 16, 16, 128) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 16, 16, 64) 8192 tf.nn.silu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 16, 16, 64) 256 conv2d_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 16, 16, 80) 46160 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 16, 16, 80) 6480 conv2d_17[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 4, 64, 80) 0 conv2d_18[0][0]
__________________________________________________________________________________________________
layer_normalization_4 (LayerNor (None, 4, 64, 80) 160 reshape_2[0][0]
__________________________________________________________________________________________________
multi_head_attention_2 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_4[0][0]
layer_normalization_4[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 4, 64, 80) 0 multi_head_attention_2[0][0]
reshape_2[0][0]
__________________________________________________________________________________________________
layer_normalization_5 (LayerNor (None, 4, 64, 80) 160 add_7[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 4, 64, 160) 12960 layer_normalization_5[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 4, 64, 160) 0 dense_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 4, 64, 80) 12880 dropout_4[0][0]
__________________________________________________________________________________________________
dropout_5 (Dropout) (None, 4, 64, 80) 0 dense_5[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, 4, 64, 80) 0 dropout_5[0][0]
add_7[0][0]
__________________________________________________________________________________________________
layer_normalization_6 (LayerNor (None, 4, 64, 80) 160 add_8[0][0]
__________________________________________________________________________________________________
multi_head_attention_3 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_6[0][0]
layer_normalization_6[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, 4, 64, 80) 0 multi_head_attention_3[0][0]
add_8[0][0]
__________________________________________________________________________________________________
layer_normalization_7 (LayerNor (None, 4, 64, 80) 160 add_9[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 4, 64, 160) 12960 layer_normalization_7[0][0]
__________________________________________________________________________________________________
dropout_6 (Dropout) (None, 4, 64, 160) 0 dense_6[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 4, 64, 80) 12880 dropout_6[0][0]
__________________________________________________________________________________________________
dropout_7 (Dropout) (None, 4, 64, 80) 0 dense_7[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, 4, 64, 80) 0 dropout_7[0][0]
add_9[0][0]
__________________________________________________________________________________________________
layer_normalization_8 (LayerNor (None, 4, 64, 80) 160 add_10[0][0]
__________________________________________________________________________________________________
multi_head_attention_4 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_8[0][0]
layer_normalization_8[0][0]
__________________________________________________________________________________________________
add_11 (Add) (None, 4, 64, 80) 0 multi_head_attention_4[0][0]
add_10[0][0]
__________________________________________________________________________________________________
layer_normalization_9 (LayerNor (None, 4, 64, 80) 160 add_11[0][0]
__________________________________________________________________________________________________
dense_8 (Dense) (None, 4, 64, 160) 12960 layer_normalization_9[0][0]
__________________________________________________________________________________________________
dropout_8 (Dropout) (None, 4, 64, 160) 0 dense_8[0][0]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 4, 64, 80) 12880 dropout_8[0][0]
__________________________________________________________________________________________________
dropout_9 (Dropout) (None, 4, 64, 80) 0 dense_9[0][0]
__________________________________________________________________________________________________
add_12 (Add) (None, 4, 64, 80) 0 dropout_9[0][0]
add_11[0][0]
__________________________________________________________________________________________________
layer_normalization_10 (LayerNo (None, 4, 64, 80) 160 add_12[0][0]
__________________________________________________________________________________________________
multi_head_attention_5 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_10[0][0]
layer_normalization_10[0][0]
__________________________________________________________________________________________________
add_13 (Add) (None, 4, 64, 80) 0 multi_head_attention_5[0][0]
add_12[0][0]
__________________________________________________________________________________________________
layer_normalization_11 (LayerNo (None, 4, 64, 80) 160 add_13[0][0]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 4, 64, 160) 12960 layer_normalization_11[0][0]
__________________________________________________________________________________________________
dropout_10 (Dropout) (None, 4, 64, 160) 0 dense_10[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 4, 64, 80) 12880 dropout_10[0][0]
__________________________________________________________________________________________________
dropout_11 (Dropout) (None, 4, 64, 80) 0 dense_11[0][0]
__________________________________________________________________________________________________
add_14 (Add) (None, 4, 64, 80) 0 dropout_11[0][0]
add_13[0][0]
__________________________________________________________________________________________________
reshape_3 (Reshape) (None, 16, 16, 80) 0 add_14[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 16, 16, 64) 5184 reshape_3[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 16, 16, 128) 0 batch_normalization_17[0][0]
conv2d_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 16, 16, 80) 92240 concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 16, 16, 160) 12800 conv2d_20[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 16, 16, 160) 640 conv2d_21[0][0]
__________________________________________________________________________________________________
tf.nn.silu_12 (TFOpLambda) (None, 16, 16, 160) 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
zero_padding2d_3 (ZeroPadding2D (None, 17, 17, 160) 0 tf.nn.silu_12[0][0]
__________________________________________________________________________________________________
depthwise_conv2d_6 (DepthwiseCo (None, 8, 8, 160) 1440 zero_padding2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 8, 8, 160) 640 depthwise_conv2d_6[0][0]
__________________________________________________________________________________________________
tf.nn.silu_13 (TFOpLambda) (None, 8, 8, 160) 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 8, 8, 80) 12800 tf.nn.silu_13[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 8, 8, 80) 320 conv2d_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 8, 8, 96) 69216 batch_normalization_20[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 8, 8, 96) 9312 conv2d_23[0][0]
__________________________________________________________________________________________________
reshape_4 (Reshape) (None, 4, 16, 96) 0 conv2d_24[0][0]
__________________________________________________________________________________________________
layer_normalization_12 (LayerNo (None, 4, 16, 96) 192 reshape_4[0][0]
__________________________________________________________________________________________________
multi_head_attention_6 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_12[0][0]
layer_normalization_12[0][0]
__________________________________________________________________________________________________
add_15 (Add) (None, 4, 16, 96) 0 multi_head_attention_6[0][0]
reshape_4[0][0]
__________________________________________________________________________________________________
layer_normalization_13 (LayerNo (None, 4, 16, 96) 192 add_15[0][0]
__________________________________________________________________________________________________
dense_12 (Dense) (None, 4, 16, 192) 18624 layer_normalization_13[0][0]
__________________________________________________________________________________________________
dropout_12 (Dropout) (None, 4, 16, 192) 0 dense_12[0][0]
__________________________________________________________________________________________________
dense_13 (Dense) (None, 4, 16, 96) 18528 dropout_12[0][0]
__________________________________________________________________________________________________
dropout_13 (Dropout) (None, 4, 16, 96) 0 dense_13[0][0]
__________________________________________________________________________________________________
add_16 (Add) (None, 4, 16, 96) 0 dropout_13[0][0]
add_15[0][0]
__________________________________________________________________________________________________
layer_normalization_14 (LayerNo (None, 4, 16, 96) 192 add_16[0][0]
__________________________________________________________________________________________________
multi_head_attention_7 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_14[0][0]
layer_normalization_14[0][0]
__________________________________________________________________________________________________
add_17 (Add) (None, 4, 16, 96) 0 multi_head_attention_7[0][0]
add_16[0][0]
__________________________________________________________________________________________________
layer_normalization_15 (LayerNo (None, 4, 16, 96) 192 add_17[0][0]
__________________________________________________________________________________________________
dense_14 (Dense) (None, 4, 16, 192) 18624 layer_normalization_15[0][0]
__________________________________________________________________________________________________
dropout_14 (Dropout) (None, 4, 16, 192) 0 dense_14[0][0]
__________________________________________________________________________________________________
dense_15 (Dense) (None, 4, 16, 96) 18528 dropout_14[0][0]
__________________________________________________________________________________________________
dropout_15 (Dropout) (None, 4, 16, 96) 0 dense_15[0][0]
__________________________________________________________________________________________________
add_18 (Add) (None, 4, 16, 96) 0 dropout_15[0][0]
add_17[0][0]
__________________________________________________________________________________________________
layer_normalization_16 (LayerNo (None, 4, 16, 96) 192 add_18[0][0]
__________________________________________________________________________________________________
multi_head_attention_8 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_16[0][0]
layer_normalization_16[0][0]
__________________________________________________________________________________________________
add_19 (Add) (None, 4, 16, 96) 0 multi_head_attention_8[0][0]
add_18[0][0]
__________________________________________________________________________________________________
layer_normalization_17 (LayerNo (None, 4, 16, 96) 192 add_19[0][0]
__________________________________________________________________________________________________
dense_16 (Dense) (None, 4, 16, 192) 18624 layer_normalization_17[0][0]
__________________________________________________________________________________________________
dropout_16 (Dropout) (None, 4, 16, 192) 0 dense_16[0][0]
__________________________________________________________________________________________________
dense_17 (Dense) (None, 4, 16, 96) 18528 dropout_16[0][0]
__________________________________________________________________________________________________
dropout_17 (Dropout) (None, 4, 16, 96) 0 dense_17[0][0]
__________________________________________________________________________________________________
add_20 (Add) (None, 4, 16, 96) 0 dropout_17[0][0]
add_19[0][0]
__________________________________________________________________________________________________
reshape_5 (Reshape) (None, 8, 8, 96) 0 add_20[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, 8, 8, 80) 7760 reshape_5[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 8, 8, 160) 0 batch_normalization_20[0][0]
conv2d_25[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, 8, 8, 96) 138336 concatenate_2[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, 8, 8, 320) 31040 conv2d_26[0][0]
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 320) 0 conv2d_27[0][0]
__________________________________________________________________________________________________
dense_18 (Dense) (None, 5) 1605 global_average_pooling2d[0][0]
==================================================================================================
Total params: 1,307,621
Trainable params: 1,305,077
Non-trainable params: 2,544
__________________________________________________________________________________________________
---
## Dataset preparation
We will be using the
[`tf_flowers`](https://www.tensorflow.org/datasets/catalog/tf_flowers)
dataset to demonstrate the model. Unlike other Transformer-based architectures,
MobileViT uses a simple augmentation pipeline primarily because it has the properties
of a CNN.
```python
batch_size = 64
auto = tf.data.AUTOTUNE
resize_bigger = 280
num_classes = 5
def preprocess_dataset(is_training=True):
def _pp(image, label):
if is_training:
# Resize to a bigger spatial resolution and take the random
# crops.
image = tf.image.resize(image, (resize_bigger, resize_bigger))
image = tf.image.random_crop(image, (image_size, image_size, 3))
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, (image_size, image_size))
label = tf.one_hot(label, depth=num_classes)
return image, label
return _pp
def prepare_dataset(dataset, is_training=True):
if is_training:
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=auto)
return dataset.batch(batch_size).prefetch(auto)
```
The authors use a multi-scale data sampler to help the model learn representations of
varied scales. In this example, we discard this part.
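If you are curious what that could look like, here is a rough, hypothetical sketch of a simplified version. The names `candidate_sizes` and `random_resize` are made up for this snippet, and the model defined above expects a fixed 256x256 input, so it would need further changes before it could consume variable resolutions.
```python
# Hypothetical sketch of a simplified multi-scale resizing step (not used in
# this example). Each batch is resized to a resolution drawn at random from
# `candidate_sizes`.
candidate_sizes = tf.constant([192, 224, 256])
def random_resize(images, labels):
    idx = tf.random.uniform((), maxval=tf.shape(candidate_sizes)[0], dtype=tf.int32)
    size = candidate_sizes[idx]
    images = tf.image.resize(images, (size, size))
    return images, labels
# Applied per batch, so every image within a batch shares one resolution:
# train_dataset = train_dataset.map(random_resize, num_parallel_calls=auto)
```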
---
## Load and prepare the dataset
```python
train_dataset, val_dataset = tfds.load(
"tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)
num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")
train_dataset = prepare_dataset(train_dataset, is_training=True)
val_dataset = prepare_dataset(val_dataset, is_training=False)
```
<div class="k-default-codeblock">
```
Number of training examples: 3303
Number of validation examples: 367
```
</div>
---
## Train a MobileViT (XXS) model
```python
learning_rate = 0.002
label_smoothing_factor = 0.1
epochs = 30
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
loss_fn = keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing_factor)
def run_experiment(epochs=epochs):
mobilevit_xxs = create_mobilevit(num_classes=num_classes)
mobilevit_xxs.compile(optimizer=optimizer, loss=loss_fn, metrics=["accuracy"])
# When using `save_weights_only=True` in `ModelCheckpoint`, the filepath provided must end in `.weights.h5`
checkpoint_filepath = "/tmp/checkpoint.weights.h5"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_accuracy",
save_best_only=True,
save_weights_only=True,
)
mobilevit_xxs.fit(
train_dataset,
validation_data=val_dataset,
epochs=epochs,
callbacks=[checkpoint_callback],
)
mobilevit_xxs.load_weights(checkpoint_filepath)
_, accuracy = mobilevit_xxs.evaluate(val_dataset)
print(f"Validation accuracy: {round(accuracy * 100, 2)}%")
return mobilevit_xxs
mobilevit_xxs = run_experiment()
```
<div class="k-default-codeblock">
```
Epoch 1/30
52/52 [==============================] - 47s 459ms/step - loss: 1.3397 - accuracy: 0.4832 - val_loss: 1.7250 - val_accuracy: 0.1662
Epoch 2/30
52/52 [==============================] - 21s 404ms/step - loss: 1.1167 - accuracy: 0.6210 - val_loss: 1.9844 - val_accuracy: 0.1907
Epoch 3/30
52/52 [==============================] - 21s 403ms/step - loss: 1.0217 - accuracy: 0.6709 - val_loss: 1.8187 - val_accuracy: 0.1907
Epoch 4/30
52/52 [==============================] - 21s 409ms/step - loss: 0.9682 - accuracy: 0.7048 - val_loss: 2.0329 - val_accuracy: 0.1907
Epoch 5/30
52/52 [==============================] - 21s 408ms/step - loss: 0.9552 - accuracy: 0.7196 - val_loss: 2.1150 - val_accuracy: 0.1907
Epoch 6/30
52/52 [==============================] - 21s 407ms/step - loss: 0.9186 - accuracy: 0.7318 - val_loss: 2.9713 - val_accuracy: 0.1907
Epoch 7/30
52/52 [==============================] - 21s 407ms/step - loss: 0.8986 - accuracy: 0.7457 - val_loss: 3.2062 - val_accuracy: 0.1907
Epoch 8/30
52/52 [==============================] - 21s 408ms/step - loss: 0.8831 - accuracy: 0.7542 - val_loss: 3.8631 - val_accuracy: 0.1907
Epoch 9/30
52/52 [==============================] - 21s 408ms/step - loss: 0.8433 - accuracy: 0.7714 - val_loss: 1.8029 - val_accuracy: 0.3542
Epoch 10/30
52/52 [==============================] - 21s 408ms/step - loss: 0.8489 - accuracy: 0.7763 - val_loss: 1.7920 - val_accuracy: 0.4796
Epoch 11/30
52/52 [==============================] - 21s 409ms/step - loss: 0.8256 - accuracy: 0.7884 - val_loss: 1.4992 - val_accuracy: 0.5477
Epoch 12/30
52/52 [==============================] - 21s 407ms/step - loss: 0.7859 - accuracy: 0.8123 - val_loss: 0.9236 - val_accuracy: 0.7330
Epoch 13/30
52/52 [==============================] - 21s 409ms/step - loss: 0.7702 - accuracy: 0.8159 - val_loss: 0.8059 - val_accuracy: 0.8011
Epoch 14/30
52/52 [==============================] - 21s 403ms/step - loss: 0.7670 - accuracy: 0.8153 - val_loss: 1.1535 - val_accuracy: 0.7084
Epoch 15/30
52/52 [==============================] - 21s 408ms/step - loss: 0.7332 - accuracy: 0.8344 - val_loss: 0.7746 - val_accuracy: 0.8147
Epoch 16/30
52/52 [==============================] - 21s 404ms/step - loss: 0.7284 - accuracy: 0.8335 - val_loss: 1.0342 - val_accuracy: 0.7330
Epoch 17/30
52/52 [==============================] - 21s 409ms/step - loss: 0.7484 - accuracy: 0.8262 - val_loss: 1.0523 - val_accuracy: 0.7112
Epoch 18/30
52/52 [==============================] - 21s 408ms/step - loss: 0.7209 - accuracy: 0.8450 - val_loss: 0.8146 - val_accuracy: 0.8174
Epoch 19/30
52/52 [==============================] - 21s 409ms/step - loss: 0.7141 - accuracy: 0.8435 - val_loss: 0.8016 - val_accuracy: 0.7875
Epoch 20/30
52/52 [==============================] - 21s 410ms/step - loss: 0.7075 - accuracy: 0.8435 - val_loss: 0.9352 - val_accuracy: 0.7439
Epoch 21/30
52/52 [==============================] - 21s 406ms/step - loss: 0.7066 - accuracy: 0.8504 - val_loss: 1.0171 - val_accuracy: 0.7139
Epoch 22/30
52/52 [==============================] - 21s 405ms/step - loss: 0.6913 - accuracy: 0.8532 - val_loss: 0.7059 - val_accuracy: 0.8610
Epoch 23/30
52/52 [==============================] - 21s 408ms/step - loss: 0.6681 - accuracy: 0.8671 - val_loss: 0.8007 - val_accuracy: 0.8147
Epoch 24/30
52/52 [==============================] - 21s 409ms/step - loss: 0.6636 - accuracy: 0.8747 - val_loss: 0.9490 - val_accuracy: 0.7302
Epoch 25/30
52/52 [==============================] - 21s 408ms/step - loss: 0.6637 - accuracy: 0.8722 - val_loss: 0.6913 - val_accuracy: 0.8556
Epoch 26/30
52/52 [==============================] - 21s 406ms/step - loss: 0.6443 - accuracy: 0.8837 - val_loss: 1.0483 - val_accuracy: 0.7139
Epoch 27/30
52/52 [==============================] - 21s 407ms/step - loss: 0.6555 - accuracy: 0.8695 - val_loss: 0.9448 - val_accuracy: 0.7602
Epoch 28/30
52/52 [==============================] - 21s 409ms/step - loss: 0.6409 - accuracy: 0.8807 - val_loss: 0.9337 - val_accuracy: 0.7302
Epoch 29/30
52/52 [==============================] - 21s 408ms/step - loss: 0.6300 - accuracy: 0.8910 - val_loss: 0.7461 - val_accuracy: 0.8256
Epoch 30/30
52/52 [==============================] - 21s 408ms/step - loss: 0.6093 - accuracy: 0.8968 - val_loss: 0.8651 - val_accuracy: 0.7766
6/6 [==============================] - 0s 65ms/step - loss: 0.7059 - accuracy: 0.8610
Validation accuracy: 86.1%
```
</div>
---
## Results and TFLite conversion
With about one million parameters, getting to ~85% top-1 accuracy on 256x256 resolution is
a strong result. This MobileViT model is fully compatible with TensorFlow Lite (TFLite)
and can be converted with the following code:
```python
# Serialize the model as a SavedModel.
tf.saved_model.save(mobilevit_xxs, "mobilevit_xxs")
# Convert to TFLite. This form of quantization is called
# post-training dynamic-range quantization in TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model("mobilevit_xxs")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # Enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS, # Enable TensorFlow ops.
]
tflite_model = converter.convert()
open("mobilevit_xxs.tflite", "wb").write(tflite_model)
```
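As a quick smoke test of the exported file, a rough sketch of running it with the TFLite Python interpreter is shown below. The random tensor is only a stand-in for a real preprocessed image.
```python
# Rough sketch: run the exported .tflite file with the TFLite interpreter.
# Because we enabled SELECT_TF_OPS above, the runtime needs Flex op support;
# the interpreter bundled with the full TensorFlow package should provide it.
interpreter = tf.lite.Interpreter(model_path="mobilevit_xxs.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy_image = tf.random.uniform((1, image_size, image_size, 3)).numpy()
interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)  # (1, num_classes)
```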
To learn more about different quantization recipes available in TFLite and running
inference with TFLite models, check out
[this official resource](https://www.tensorflow.org/lite/performance/post_training_quantization).
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/mobile-vit-xxs)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/Flowers-Classification-MobileViT).
| keras-io/examples/vision/md/mobilevit.md/0 | {
"file_path": "keras-io/examples/vision/md/mobilevit.md",
"repo_id": "keras-io",
"token_count": 21755
} | 104 |
# Semantic segmentation with SegFormer and Hugging Face Transformers
**Author:** [Sayak Paul](https://twitter.com/RisingSayak)<br>
**Date created:** 2023/01/25<br>
**Last modified:** 2023/01/29<br>
**Description:** Fine-tuning a SegFormer model variant for semantic segmentation.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/segformer.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/segformer.py)
---
## Introduction
In this example, we show how to fine-tune a SegFormer model variant to do
semantic segmentation on a custom dataset. Semantic segmentation is the task of
assigning a category to each and every pixel of an image. SegFormer was proposed in
[SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203).
SegFormer uses a hierarchical Transformer architecture (called "Mix Transformer") as
its encoder and a lightweight decoder for segmentation. As a result, it yields
state-of-the-art performance on semantic segmentation while being more efficient than
existing models. For more details, check out the original paper.

We leverage
[Hugging Face Transformers](https://github.com/huggingface/transformers)
to load a pretrained SegFormer checkpoint and fine-tune it on a custom dataset.
**Note:** this example reuses code from the following sources:
* [Official tutorial on segmentation from the TensorFlow team](https://www.tensorflow.org/tutorials/images/segmentation)
* [Hugging Face Task guide on segmentation](https://huggingface.co/docs/transformers/main/en/tasks/semantic_segmentation)
To run this example, we need to install the `transformers` library:
```python
!!pip install transformers -q
```
<div class="k-default-codeblock">
```
[]
```
</div>
---
## Load the data
We use the [Oxford-IIIT Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/) dataset for
this example. We leverage `tensorflow_datasets` to load the dataset.
```python
import tensorflow_datasets as tfds
dataset, info = tfds.load("oxford_iiit_pet:3.*.*", with_info=True)
```
<div class="k-default-codeblock">
```
/opt/conda/lib/python3.7/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/opt/conda/lib/python3.7/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/opt/conda/lib/python3.7/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl5mutexC1Ev']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/opt/conda/lib/python3.7/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/opt/conda/lib/python3.7/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/opt/conda/lib/python3.7/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZNK10tensorflow4data11DatasetBase8FinalizeEPNS_15OpKernelContextESt8functionIFN3tsl8StatusOrISt10unique_ptrIS1_NS5_4core15RefCountDeleterEEEEvEE']
warnings.warn(f"file system plugins are not loaded: {e}")
```
</div>
---
## Prepare the datasets
For preparing the datasets for training and evaluation, we:
* Normalize the images with the mean and standard deviation used during pre-training
SegFormer.
* Subtract 1 from the segmentation masks so that the pixel values start from 0.
* Resize the images.
* Transpose the images such that they are in `"channels_first"` format. This is to make
them compatible with the SegFormer model from Hugging Face Transformers.
```python
import tensorflow as tf
from tensorflow.keras import backend
image_size = 512
mean = tf.constant([0.485, 0.456, 0.406])
std = tf.constant([0.229, 0.224, 0.225])
def normalize(input_image, input_mask):
input_image = tf.image.convert_image_dtype(input_image, tf.float32)
input_image = (input_image - mean) / tf.maximum(std, backend.epsilon())
input_mask -= 1
return input_image, input_mask
def load_image(datapoint):
input_image = tf.image.resize(datapoint["image"], (image_size, image_size))
input_mask = tf.image.resize(
datapoint["segmentation_mask"],
(image_size, image_size),
method="bilinear",
)
input_image, input_mask = normalize(input_image, input_mask)
input_image = tf.transpose(input_image, (2, 0, 1))
return {"pixel_values": input_image, "labels": tf.squeeze(input_mask)}
```
We now use the above utilities to prepare `tf.data.Dataset` objects including
`prefetch()` for performance. Change the `batch_size` to match the size of the GPU memory
on the GPU that you're using for training.
```python
auto = tf.data.AUTOTUNE
batch_size = 4
train_ds = (
dataset["train"]
.cache()
.shuffle(batch_size * 10)
.map(load_image, num_parallel_calls=auto)
.batch(batch_size)
.prefetch(auto)
)
test_ds = (
dataset["test"]
.map(load_image, num_parallel_calls=auto)
.batch(batch_size)
.prefetch(auto)
)
```
We can check the shapes of the input images and their segmentation maps:
```python
print(train_ds.element_spec)
```
<div class="k-default-codeblock">
```
{'pixel_values': TensorSpec(shape=(None, 3, 512, 512), dtype=tf.float32, name=None), 'labels': TensorSpec(shape=(None, 512, 512), dtype=tf.float32, name=None)}
```
</div>
---
## Visualize dataset
```python
import matplotlib.pyplot as plt
def display(display_list):
plt.figure(figsize=(15, 15))
title = ["Input Image", "True Mask", "Predicted Mask"]
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i + 1)
plt.title(title[i])
plt.imshow(tf.keras.utils.array_to_img(display_list[i]))
plt.axis("off")
plt.show()
for samples in train_ds.take(2):
sample_image, sample_mask = samples["pixel_values"][0], samples["labels"][0]
sample_image = tf.transpose(sample_image, (1, 2, 0))
sample_mask = tf.expand_dims(sample_mask, -1)
display([sample_image, sample_mask])
```


---
## Load a pretrained SegFormer checkpoint
We now load a pretrained SegFormer model variant from Hugging Face Transformers. The
SegFormer model comes in different variants dubbed as **MiT-B0** to **MiT-B5**. You can
find these checkpoints
[here](https://huggingface.co/models?pipeline_tag=image-segmentation&sort=downloads&search=segformer).
We load the smallest variant, MiT-B0, which offers a good trade-off
between inference efficiency and predictive performance.
```python
from transformers import TFSegformerForSemanticSegmentation
model_checkpoint = "nvidia/mit-b0"
id2label = {0: "outer", 1: "inner", 2: "border"}
label2id = {label: id for id, label in id2label.items()}
num_labels = len(id2label)
model = TFSegformerForSemanticSegmentation.from_pretrained(
model_checkpoint,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
)
```
<div class="k-default-codeblock">
```
WARNING:tensorflow:5 out of the last 5 calls to <function Conv._jit_compiled_convolution_op at 0x7fa8cc1139e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:5 out of the last 5 calls to <function Conv._jit_compiled_convolution_op at 0x7fa8cc1139e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:6 out of the last 6 calls to <function Conv._jit_compiled_convolution_op at 0x7fa8bde37440> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:6 out of the last 6 calls to <function Conv._jit_compiled_convolution_op at 0x7fa8bde37440> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Some layers from the model checkpoint at nvidia/mit-b0 were not used when initializing TFSegformerForSemanticSegmentation: ['classifier']
- This IS expected if you are initializing TFSegformerForSemanticSegmentation from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFSegformerForSemanticSegmentation from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFSegformerForSemanticSegmentation were not initialized from the model checkpoint at nvidia/mit-b0 and are newly initialized: ['decode_head']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
</div>
The warning is telling us that we're throwing away some weights and newly initializing
some others. Don't panic! This is absolutely normal. Since we're using a custom dataset
which has a different set of semantic class labels than the pre-training dataset,
[`TFSegformerForSemanticSegmentation`](https://huggingface.co/docs/transformers/model_doc/segformer#transformers.TFSegformerForSemanticSegmentation)
is initializing a new decoder head.
We can now initialize an optimizer and compile the model with it.
---
## Compile the model
```python
lr = 0.00006
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=optimizer)
```
<div class="k-default-codeblock">
```
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.
```
</div>
Notice that we are not using any loss function for compiling the model. This is because
the forward pass of the model
[implements](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/models/segformer/modeling_tf_segformer.py#L873)
the loss computation part when we provide labels alongside the input images. After
computing the loss, the model returns a structured `dataclass` object which is
then used to guide the training process.
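To see this concretely, an illustrative check is to call the model directly on one batch with labels attached and inspect the returned object:
```python
# Illustrative sketch: when `labels` are passed, the forward pass computes the
# loss itself and returns it alongside the (downsampled) logits.
batch = next(iter(train_ds))
outputs = model(pixel_values=batch["pixel_values"], labels=batch["labels"])
print(outputs.loss, outputs.logits.shape)
```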
With the compiled model, we can proceed and call `fit()` on it to begin the fine-tuning
process!
---
## Prediction callback to monitor training progress
It helps us to visualize some sample predictions when the model is being fine-tuned,
thereby helping us to monitor the progress of the model. This callback is inspired from
[this tutorial](https://www.tensorflow.org/tutorials/images/segmentation).
```python
from IPython.display import clear_output
def create_mask(pred_mask):
pred_mask = tf.math.argmax(pred_mask, axis=1)
pred_mask = tf.expand_dims(pred_mask, -1)
return pred_mask[0]
def show_predictions(dataset=None, num=1):
if dataset:
for sample in dataset.take(num):
images, masks = sample["pixel_values"], sample["labels"]
masks = tf.expand_dims(masks, -1)
pred_masks = model.predict(images).logits
images = tf.transpose(images, (0, 2, 3, 1))
display([images[0], masks[0], create_mask(pred_masks)])
else:
display(
[
sample_image,
sample_mask,
create_mask(model.predict(tf.expand_dims(sample_image, 0))),
]
)
class DisplayCallback(tf.keras.callbacks.Callback):
def __init__(self, dataset, **kwargs):
super().__init__(**kwargs)
self.dataset = dataset
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions(self.dataset)
print("\nSample Prediction after epoch {}\n".format(epoch + 1))
```
---
## Train model
```python
# Increase the number of epochs if the results are not of expected quality.
epochs = 5
history = model.fit(
train_ds,
validation_data=test_ds,
callbacks=[DisplayCallback(test_ds)],
epochs=epochs,
)
```
<div class="k-default-codeblock">
```
1/1 [==============================] - 0s 54ms/step
```
</div>

<div class="k-default-codeblock">
```
Sample Prediction after epoch 5
```
</div>
<div class="k-default-codeblock">
```
920/920 [==============================] - 89s 97ms/step - loss: 0.1742 - val_loss: 0.1927
```
</div>
---
## Inference
We perform inference on a few samples from the test set.
```python
show_predictions(test_ds, 5)
```
<div class="k-default-codeblock">
```
1/1 [==============================] - 0s 54ms/step
```
</div>

<div class="k-default-codeblock">
```
1/1 [==============================] - 0s 54ms/step
```
</div>

<div class="k-default-codeblock">
```
1/1 [==============================] - 0s 53ms/step
```
</div>

<div class="k-default-codeblock">
```
1/1 [==============================] - 0s 53ms/step
```
</div>

<div class="k-default-codeblock">
```
1/1 [==============================] - 0s 53ms/step
```
</div>

---
## Conclusion
In this example, we learned how to fine-tune a SegFormer model variant on a custom
dataset for semantic segmentation. In the interest of brevity, the example
was kept short. However, there are a couple of things you can further try out:
* Incorporate data augmentation to potentially improve the results.
* Use a larger SegFormer model checkpoint to see how the results are affected.
* Push the fine-tuned model to the Hugging Face for sharing with the community easily.
You can do so just by doing `model.push_to_hub("your-username/your-awesome-model")`.
And then you can load the model by doing
`TFSegformerForSemanticSegmentation.from_pretrained("your-username/your-awesome-model")`.
[Here](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)
is an end-to-end example if you're looking for a reference.
* If you'd rather push the model checkpoints to the Hub as the model is being
fine-tuned you can instead use the `PushToHubCallback` Keras callback.
[Here](https://gist.github.com/sayakpaul/f474ffb01f0cdcc8ba239357965c3bca) is an example.
[Here](https://huggingface.co/sayakpaul/mit-b0-finetuned-pets) is an example of a model
repository that was created using this callback.
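For reference, wiring that callback into `fit()` could look roughly like the sketch below; the local output directory and the `hub_model_id` are placeholders.
```python
# Hypothetical sketch of pushing checkpoints to the Hub during fine-tuning.
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_callback = PushToHubCallback(
    output_dir="./segformer-b0-finetuned-pets",   # local checkpoint directory
    hub_model_id="your-username/segformer-b0-finetuned-pets",
)
# history = model.fit(
#     train_ds,
#     validation_data=test_ds,
#     callbacks=[push_to_hub_callback],
#     epochs=epochs,
# )
```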
| keras-io/examples/vision/md/segformer.md/0 | {
"file_path": "keras-io/examples/vision/md/segformer.md",
"repo_id": "keras-io",
"token_count": 5803
} | 105 |
# Video Vision Transformer
**Author:** [Aritra Roy Gosthipaty](https://twitter.com/ariG23498), [Ayush Thakur](https://twitter.com/ayushthakur0) (equal contribution)<br>
**Date created:** 2022/01/12<br>
**Last modified:** 2024/01/15<br>
**Description:** A Transformer-based architecture for video classification.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/vivit.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/vivit.py)
---
## Introduction
Videos are sequences of images. Let's assume you have an image
representation model (CNN, ViT, etc.) and a sequence model
(RNN, LSTM, etc.) at hand. We ask you to tweak the model for video
classification. The simplest approach would be to apply the image
model to individual frames, use the sequence model to learn
sequences of image features, then apply a classification head on
the learned sequence representation.
The Keras example
[Video Classification with a CNN-RNN Architecture](https://keras.io/examples/vision/video_classification/)
explains this approach in detail. Alternatively, you can also
build a hybrid Transformer-based model for video classification as shown in the Keras example
[Video Classification with Transformers](https://keras.io/examples/vision/video_transformers/).
In this example, we minimally implement
[ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691)
by Arnab et al., a **pure Transformer-based** model
for video classification. The authors propose a novel embedding scheme
and a number of Transformer variants to model video clips. We implement
the embedding scheme and one of the variants of the Transformer
architecture, for simplicity.
This example requires `medmnist` package, which can be installed
by running the code cell below.
```python
!pip install -qq medmnist
```
---
## Imports
```python
import os
import io
import imageio
import medmnist
import ipywidgets
import numpy as np
import tensorflow as tf # for data preprocessing only
import keras
from keras import layers, ops
# Setting seed for reproducibility
SEED = 42
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"
keras.utils.set_random_seed(SEED)
```
---
## Hyperparameters
The hyperparameters are chosen via hyperparameter
search. You can learn more about the process in the "conclusion" section.
```python
# DATA
DATASET_NAME = "organmnist3d"
BATCH_SIZE = 32
AUTO = tf.data.AUTOTUNE
INPUT_SHAPE = (28, 28, 28, 1)
NUM_CLASSES = 11
# OPTIMIZER
LEARNING_RATE = 1e-4
WEIGHT_DECAY = 1e-5
# TRAINING
EPOCHS = 60
# TUBELET EMBEDDING
PATCH_SIZE = (8, 8, 8)
NUM_PATCHES = (INPUT_SHAPE[0] // PATCH_SIZE[0]) ** 2
# ViViT ARCHITECTURE
LAYER_NORM_EPS = 1e-6
PROJECTION_DIM = 128
NUM_HEADS = 8
NUM_LAYERS = 8
```
---
## Dataset
For our example we use the
[MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification](https://medmnist.com/)
dataset. The videos are lightweight and easy to train on.
```python
def download_and_prepare_dataset(data_info: dict):
"""Utility function to download the dataset.
Arguments:
data_info (dict): Dataset metadata.
"""
data_path = keras.utils.get_file(origin=data_info["url"], md5_hash=data_info["MD5"])
with np.load(data_path) as data:
# Get videos
train_videos = data["train_images"]
valid_videos = data["val_images"]
test_videos = data["test_images"]
# Get labels
train_labels = data["train_labels"].flatten()
valid_labels = data["val_labels"].flatten()
test_labels = data["test_labels"].flatten()
return (
(train_videos, train_labels),
(valid_videos, valid_labels),
(test_videos, test_labels),
)
# Get the metadata of the dataset
info = medmnist.INFO[DATASET_NAME]
# Get the dataset
prepared_dataset = download_and_prepare_dataset(info)
(train_videos, train_labels) = prepared_dataset[0]
(valid_videos, valid_labels) = prepared_dataset[1]
(test_videos, test_labels) = prepared_dataset[2]
```
### `tf.data` pipeline
```python
def preprocess(frames: tf.Tensor, label: tf.Tensor):
"""Preprocess the frames tensors and parse the labels."""
# Preprocess images
frames = tf.image.convert_image_dtype(
frames[
..., tf.newaxis
], # The new axis is to help for further processing with Conv3D layers
tf.float32,
)
# Parse label
label = tf.cast(label, tf.float32)
return frames, label
def prepare_dataloader(
videos: np.ndarray,
labels: np.ndarray,
loader_type: str = "train",
batch_size: int = BATCH_SIZE,
):
"""Utility function to prepare the dataloader."""
dataset = tf.data.Dataset.from_tensor_slices((videos, labels))
if loader_type == "train":
dataset = dataset.shuffle(BATCH_SIZE * 2)
dataloader = (
dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.AUTOTUNE)
)
return dataloader
trainloader = prepare_dataloader(train_videos, train_labels, "train")
validloader = prepare_dataloader(valid_videos, valid_labels, "valid")
testloader = prepare_dataloader(test_videos, test_labels, "test")
```
---
## Tubelet Embedding
In ViTs, an image is divided into patches, which are then spatially
flattened, a process known as tokenization. For a video, one can
repeat this process for individual frames. **Uniform frame sampling**
as suggested by the authors is a tokenization scheme in which we
sample frames from the video clip and perform simple ViT tokenization.
|  |
| :--: |
| Uniform Frame Sampling [Source](https://arxiv.org/abs/2103.15691) |
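For illustration, a minimal tokenizer following this scheme could look like the sketch below. It is not used in the rest of this example; the layer name is ours, and selecting which frames to keep would happen in the data pipeline.
```python
# Sketch of uniform frame sampling tokenization (illustrative only, not used
# later). Each sampled frame is patch-embedded independently with a 2D
# convolution, exactly as a standard ViT would tokenize a single image.
class UniformFrameTokenizer(layers.Layer):
    def __init__(self, embed_dim, patch_size, **kwargs):
        super().__init__(**kwargs)
        # `patch_size` is 2D here (e.g. (8, 8)), unlike the 3D tubelet size below.
        self.projection = layers.TimeDistributed(
            layers.Conv2D(
                filters=embed_dim,
                kernel_size=patch_size,
                strides=patch_size,
                padding="VALID",
            )
        )
        self.flatten = layers.Reshape(target_shape=(-1, embed_dim))
    def call(self, sampled_frames):
        # sampled_frames: (batch, num_sampled_frames, height, width, channels)
        projected_patches = self.projection(sampled_frames)
        return self.flatten(projected_patches)
```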
**Tubelet Embedding** is different in terms of capturing temporal
information from the video.
First, we extract volumes from the video -- these volumes contain
patches of the frame and the temporal information as well. The volumes
are then flattened to build video tokens.
|  |
| :--: |
| Tubelet Embedding [Source](https://arxiv.org/abs/2103.15691) |
```python
class TubeletEmbedding(layers.Layer):
def __init__(self, embed_dim, patch_size, **kwargs):
super().__init__(**kwargs)
self.projection = layers.Conv3D(
filters=embed_dim,
kernel_size=patch_size,
strides=patch_size,
padding="VALID",
)
self.flatten = layers.Reshape(target_shape=(-1, embed_dim))
def call(self, videos):
projected_patches = self.projection(videos)
flattened_patches = self.flatten(projected_patches)
return flattened_patches
```
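As a quick sanity check (illustrative), a single `(28, 28, 28, 1)` video split into `(8, 8, 8)` tubelets should yield `3 * 3 * 3 = 27` tokens of dimension 128:
```python
# Quick shape check of the tubelet tokenizer (illustrative only).
sample_video = ops.ones((1, *INPUT_SHAPE))  # (1, 28, 28, 28, 1)
tokens = TubeletEmbedding(embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE)(sample_video)
print(tokens.shape)  # (1, 27, 128): 3 * 3 * 3 tubelets, each projected to 128-d
```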
---
## Positional Embedding
This layer adds positional information to the encoded video tokens.
```python
class PositionalEncoder(layers.Layer):
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
def build(self, input_shape):
_, num_tokens, _ = input_shape
self.position_embedding = layers.Embedding(
input_dim=num_tokens, output_dim=self.embed_dim
)
self.positions = ops.arange(0, num_tokens, 1)
def call(self, encoded_tokens):
# Encode the positions and add it to the encoded tokens
encoded_positions = self.position_embedding(self.positions)
encoded_tokens = encoded_tokens + encoded_positions
return encoded_tokens
```
---
## Video Vision Transformer
The authors suggest 4 variants of Vision Transformer:
- Spatio-temporal attention
- Factorized encoder
- Factorized self-attention
- Factorized dot-product attention
In this example, we will implement the **Spatio-temporal attention**
model for simplicity. The following code snippet is heavily inspired by
[Image classification with Vision Transformer](https://keras.io/examples/vision/image_classification_with_vision_transformer/).
One can also refer to the
[official repository of ViViT](https://github.com/google-research/scenic/tree/main/scenic/projects/vivit)
which contains all the variants, implemented in JAX.
```python
def create_vivit_classifier(
tubelet_embedder,
positional_encoder,
input_shape=INPUT_SHAPE,
transformer_layers=NUM_LAYERS,
num_heads=NUM_HEADS,
embed_dim=PROJECTION_DIM,
layer_norm_eps=LAYER_NORM_EPS,
num_classes=NUM_CLASSES,
):
# Get the input layer
inputs = layers.Input(shape=input_shape)
# Create patches.
patches = tubelet_embedder(inputs)
# Encode patches.
encoded_patches = positional_encoder(patches)
# Create multiple layers of the Transformer block.
for _ in range(transformer_layers):
# Layer normalization and MHSA
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim // num_heads, dropout=0.1
)(x1, x1)
# Skip connection
x2 = layers.Add()([attention_output, encoded_patches])
# Layer Normalization and MLP
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
x3 = keras.Sequential(
[
layers.Dense(units=embed_dim * 4, activation=ops.gelu),
layers.Dense(units=embed_dim, activation=ops.gelu),
]
)(x3)
# Skip connection
encoded_patches = layers.Add()([x3, x2])
# Layer normalization and Global average pooling.
representation = layers.LayerNormalization(epsilon=layer_norm_eps)(encoded_patches)
representation = layers.GlobalAvgPool1D()(representation)
# Classify outputs.
outputs = layers.Dense(units=num_classes, activation="softmax")(representation)
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=outputs)
return model
```
---
## Train
```python
def run_experiment():
# Initialize model
model = create_vivit_classifier(
tubelet_embedder=TubeletEmbedding(
embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE
),
positional_encoder=PositionalEncoder(embed_dim=PROJECTION_DIM),
)
# Compile the model with the optimizer, loss function
# and the metrics.
optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
# Train the model.
_ = model.fit(trainloader, epochs=EPOCHS, validation_data=validloader)
_, accuracy, top_5_accuracy = model.evaluate(testloader)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
return model
model = run_experiment()
```
<div class="k-default-codeblock">
```
Test accuracy: 76.72%
Test top 5 accuracy: 97.54%
```
</div>
---
## Inference
```python
NUM_SAMPLES_VIZ = 25
testsamples, labels = next(iter(testloader))
testsamples, labels = testsamples[:NUM_SAMPLES_VIZ], labels[:NUM_SAMPLES_VIZ]
ground_truths = []
preds = []
videos = []
for i, (testsample, label) in enumerate(zip(testsamples, labels)):
# Generate gif
testsample = np.reshape(testsample.numpy(), (-1, 28, 28))
with io.BytesIO() as gif:
imageio.mimsave(gif, (testsample * 255).astype("uint8"), "GIF", fps=5)
videos.append(gif.getvalue())
# Get model prediction
output = model.predict(ops.expand_dims(testsample, axis=0))[0]
pred = np.argmax(output, axis=0)
ground_truths.append(label.numpy().astype("int"))
preds.append(pred)
def make_box_for_grid(image_widget, fit):
"""Make a VBox to hold caption/image for demonstrating option_fit values.
Source: https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html
"""
# Make the caption
if fit is not None:
fit_str = "'{}'".format(fit)
else:
fit_str = str(fit)
h = ipywidgets.HTML(value="" + str(fit_str) + "")
# Make the green box with the image widget inside it
boxb = ipywidgets.widgets.Box()
boxb.children = [image_widget]
# Compose into a vertical box
vb = ipywidgets.widgets.VBox()
vb.layout.align_items = "center"
vb.children = [h, boxb]
return vb
boxes = []
for i in range(NUM_SAMPLES_VIZ):
ib = ipywidgets.widgets.Image(value=videos[i], width=100, height=100)
true_class = info["label"][str(ground_truths[i])]
pred_class = info["label"][str(preds[i])]
caption = f"T: {true_class} | P: {pred_class}"
boxes.append(make_box_for_grid(ib, caption))
ipywidgets.widgets.GridBox(
boxes, layout=ipywidgets.widgets.Layout(grid_template_columns="repeat(5, 200px)")
)
```
---
## Final thoughts
With a vanilla implementation, we achieve ~79-80% Top-1 accuracy on the
test dataset.
The hyperparameters used in this tutorial were finalized by running a
hyperparameter search using
[W&B Sweeps](https://docs.wandb.ai/guides/sweeps).
You can find our sweep results
[here](https://wandb.ai/minimal-implementations/vivit/sweeps/66fp0lhz)
and our quick analysis of the results
[here](https://wandb.ai/minimal-implementations/vivit/reports/Hyperparameter-Tuning-Analysis--VmlldzoxNDEwNzcx).
For further improvement, you could look into the following:
- Using data augmentation for videos (a minimal sketch is shown after this list).
- Using a better regularization scheme during training.
- Applying different variants of the Transformer model, as in the paper.
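A minimal, illustrative sketch of the first idea (it is not used anywhere above): flip all frames of a clip together, so that temporal consistency within the clip is preserved.
```python
def random_flip_video(frames):
    # frames: (time, height, width, channels). The flip decision is made once
    # per clip so every frame receives the same transformation.
    if np.random.uniform() < 0.5:
        frames = ops.flip(frames, axis=2)  # flip along the width axis
    return frames
```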
We would like to thank [Anurag Arnab](https://anuragarnab.github.io/)
(first author of ViViT) for helpful discussions. We are also grateful to the
[Weights and Biases](https://wandb.ai/site) program for helping with
GPU credits.
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/video-vision-transformer)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/video-vision-transformer-CT).
| keras-io/examples/vision/md/vivit.md/0 | {
"file_path": "keras-io/examples/vision/md/vivit.md",
"repo_id": "keras-io",
"token_count": 5346
} | 106 |
"""
Title: Self-supervised contrastive learning with SimSiam
Author: [Sayak Paul](https://twitter.com/RisingSayak)
Date created: 2021/03/19
Last modified: 2023/12/29
Description: Implementation of a self-supervised learning method for computer vision.
Accelerator: GPU
"""
"""
Self-supervised learning (SSL) is an interesting branch of study in the field of
representation learning. SSL systems try to formulate a supervised signal from a corpus
of unlabeled data points. For example, we can train a deep neural network to predict the
next word from a given set of words. In the literature, these tasks are known as *pretext
tasks* or *auxiliary tasks*. If we [train such a network](https://arxiv.org/abs/1801.06146) on a huge dataset (such as
the [Wikipedia text corpus](https://www.corpusdata.org/wikipedia.asp)), it learns very effective
representations that transfer well to downstream tasks. Language models like
[BERT](https://arxiv.org/abs/1810.04805), [GPT-3](https://arxiv.org/abs/2005.14165),
[ELMo](https://allennlp.org/elmo) all benefit from this.
Much like the language models, we can train computer vision models using similar
approaches. To make things work in computer vision, we need to formulate the learning
tasks such that the underlying model (a deep neural network) is able to make sense of the
semantic information present in vision data. One such task is to train a model to _contrast_
between two different versions of the same image. The hope is that in this way the model
will learn representations where similar images are grouped as close together as possible
while dissimilar images are pushed further away.
In this example, we will be implementing one such system called **SimSiam** proposed in
[Exploring Simple Siamese Representation Learning](https://arxiv.org/abs/2011.10566). It
is implemented as the following:
1. We create two different versions of the same dataset with a stochastic data
augmentation pipeline. Note that the random initialization seed needs to be the same
when creating these versions.
2. We take a ResNet without any classification head (**backbone**) and we add a shallow
fully-connected network (**projection head**) on top of it. Collectively, this is known
as the **encoder**.
3. We pass the output of the encoder through a **predictor** which is again a shallow
fully-connected network having an
[AutoEncoder](https://en.wikipedia.org/wiki/Autoencoder) like structure.
4. We then train our encoder to maximize the cosine similarity between the two different
versions of our dataset.
"""
"""
## Setup
"""
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
import keras_cv
from keras import ops
from keras import layers
from keras import regularizers
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
"""
## Define hyperparameters
"""
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 128
EPOCHS = 5
CROP_TO = 32
SEED = 26
PROJECT_DIM = 2048
LATENT_DIM = 512
WEIGHT_DECAY = 0.0005
"""
## Load the CIFAR-10 dataset
"""
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
print(f"Total training examples: {len(x_train)}")
print(f"Total test examples: {len(x_test)}")
"""
## Defining our data augmentation pipeline
As studied in [SimCLR](https://arxiv.org/abs/2002.05709), having the right data
augmentation pipeline is critical for SSL systems to work effectively in computer vision.
Two particular augmentation transforms that seem to matter the most are: 1.) Random
resized crops and 2.) Color distortions. Most of the other SSL systems for computer
vision (such as [BYOL](https://arxiv.org/abs/2006.07733),
[MoCoV2](https://arxiv.org/abs/2003.04297), [SwAV](https://arxiv.org/abs/2006.09882),
etc.) include these in their training pipelines.
"""
strength = [0.4, 0.4, 0.3, 0.1]
random_flip = layers.RandomFlip(mode="horizontal_and_vertical")
random_crop = layers.RandomCrop(CROP_TO, CROP_TO)
random_brightness = layers.RandomBrightness(0.8 * strength[0])
random_contrast = layers.RandomContrast((1 - 0.8 * strength[1], 1 + 0.8 * strength[1]))
random_saturation = keras_cv.layers.RandomSaturation(
(0.5 - 0.8 * strength[2], 0.5 + 0.8 * strength[2])
)
random_hue = keras_cv.layers.RandomHue(0.2 * strength[3], [0, 255])
grayscale = keras_cv.layers.Grayscale()
def flip_random_crop(image):
# With random crops we also apply horizontal flipping.
image = random_flip(image)
image = random_crop(image)
return image
def color_jitter(x):
x = random_brightness(x)
x = random_contrast(x)
x = random_saturation(x)
x = random_hue(x)
# Affine transformations can disturb the natural range of
# RGB images, hence this is needed.
x = ops.clip(x, 0, 255)
return x
def color_drop(x):
x = grayscale(x)
x = ops.tile(x, [1, 1, 3])
return x
def random_apply(func, x, p):
if keras.random.uniform([], minval=0, maxval=1) < p:
return func(x)
else:
return x
def custom_augment(image):
# As discussed in the SimCLR paper, the series of augmentation
# transformations (except for random crops) need to be applied
# randomly to impose translational invariance.
image = flip_random_crop(image)
image = random_apply(color_jitter, image, p=0.8)
image = random_apply(color_drop, image, p=0.2)
return image
"""
It should be noted that an augmentation pipeline is generally dependent on various
properties of the dataset we are dealing with. For example, if images in the dataset are
heavily object-centric then taking random crops with a very high probability may hurt the
training performance.
Let's now apply our augmentation pipeline to our dataset and visualize a few outputs.
"""
"""
## Convert the data into TensorFlow `Dataset` objects
Here we create two different versions of our dataset *without* any ground-truth labels.
"""
ssl_ds_one = tf.data.Dataset.from_tensor_slices(x_train)
ssl_ds_one = (
ssl_ds_one.shuffle(1024, seed=SEED)
.map(custom_augment, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
ssl_ds_two = tf.data.Dataset.from_tensor_slices(x_train)
ssl_ds_two = (
ssl_ds_two.shuffle(1024, seed=SEED)
.map(custom_augment, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
# We then zip both of these datasets.
ssl_ds = tf.data.Dataset.zip((ssl_ds_one, ssl_ds_two))
# Visualize a few augmented images.
sample_images_one = next(iter(ssl_ds_one))
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(sample_images_one[n].numpy().astype("int"))
plt.axis("off")
plt.show()
# Ensure that the different versions of the dataset actually contain
# identical images.
sample_images_two = next(iter(ssl_ds_two))
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(sample_images_two[n].numpy().astype("int"))
plt.axis("off")
plt.show()
"""
Notice that the images in `sample_images_one` and `sample_images_two` are essentially
the same but are augmented differently.
"""
"""
## Defining the encoder and the predictor
We use an implementation of ResNet20 that is specifically configured for the CIFAR10
dataset. The code is taken from the
[keras-idiomatic-programmer](https://github.com/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/zoo/resnet/resnet_cifar10_v2.py) repository. The hyperparameters of
these architectures are taken from Section 3 and Appendix A of [the original
paper](https://arxiv.org/abs/2011.10566).
"""
"""shell
wget -q https://git.io/JYx2x -O resnet_cifar10_v2.py
"""
import resnet_cifar10_v2
N = 2
DEPTH = N * 9 + 2
NUM_BLOCKS = ((DEPTH - 2) // 9) - 1
def get_encoder():
# Input and backbone.
inputs = layers.Input((CROP_TO, CROP_TO, 3))
x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs)
x = resnet_cifar10_v2.stem(x)
x = resnet_cifar10_v2.learner(x, NUM_BLOCKS)
x = layers.GlobalAveragePooling2D(name="backbone_pool")(x)
# Projection head.
x = layers.Dense(
PROJECT_DIM, use_bias=False, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
)(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.Dense(
PROJECT_DIM, use_bias=False, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
)(x)
outputs = layers.BatchNormalization()(x)
return keras.Model(inputs, outputs, name="encoder")
def get_predictor():
model = keras.Sequential(
[
# Note the AutoEncoder-like structure.
layers.Input((PROJECT_DIM,)),
layers.Dense(
LATENT_DIM,
use_bias=False,
kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
),
layers.ReLU(),
layers.BatchNormalization(),
layers.Dense(PROJECT_DIM),
],
name="predictor",
)
return model
"""
## Defining the (pre-)training loop
One of the main reasons behind training networks with these kinds of approaches is to
utilize the learned representations for downstream tasks like classification. This is why
this particular training phase is also referred to as _pre-training_.
We start by defining the loss function.
"""
def compute_loss(p, z):
# The authors of SimSiam emphasize the impact of
# the `stop_gradient` operator in the paper as it
# has an important role in the overall optimization.
z = ops.stop_gradient(z)
p = keras.utils.normalize(p, axis=1, order=2)
z = keras.utils.normalize(z, axis=1, order=2)
# Negative cosine similarity (minimizing this is
# equivalent to maximizing the similarity).
return -ops.mean(ops.sum((p * z), axis=1))
"""
We then define our training loop by overriding the `train_step()` function of the
`keras.Model` class.
"""
class SimSiam(keras.Model):
def __init__(self, encoder, predictor):
super().__init__()
self.encoder = encoder
self.predictor = predictor
self.loss_tracker = keras.metrics.Mean(name="loss")
@property
def metrics(self):
return [self.loss_tracker]
def train_step(self, data):
# Unpack the data.
ds_one, ds_two = data
# Forward pass through the encoder and predictor.
with tf.GradientTape() as tape:
z1, z2 = self.encoder(ds_one), self.encoder(ds_two)
p1, p2 = self.predictor(z1), self.predictor(z2)
# Note that here we are enforcing the network to match
# the representations of two differently augmented batches
# of data.
loss = compute_loss(p1, z2) / 2 + compute_loss(p2, z1) / 2
# Compute gradients and update the parameters.
learnable_params = (
self.encoder.trainable_variables + self.predictor.trainable_variables
)
gradients = tape.gradient(loss, learnable_params)
self.optimizer.apply_gradients(zip(gradients, learnable_params))
# Monitor loss.
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
"""
## Pre-training our networks
In the interest of keeping this example short, we will train the model for only 5 epochs. In reality,
this should at least be 100 epochs.
"""
# Create a cosine decay learning scheduler.
num_training_samples = len(x_train)
steps = EPOCHS * (num_training_samples // BATCH_SIZE)
lr_decayed_fn = keras.optimizers.schedules.CosineDecay(
initial_learning_rate=0.03, decay_steps=steps
)
# Create an early stopping callback.
early_stopping = keras.callbacks.EarlyStopping(
monitor="loss", patience=5, restore_best_weights=True
)
# Compile model and start training.
simsiam = SimSiam(get_encoder(), get_predictor())
simsiam.compile(optimizer=keras.optimizers.SGD(lr_decayed_fn, momentum=0.6))
history = simsiam.fit(ssl_ds, epochs=EPOCHS, callbacks=[early_stopping])
# Visualize the training progress of the model.
plt.plot(history.history["loss"])
plt.grid()
plt.title("Negative Cosine Similarity")
plt.show()
"""
If your solution gets very close to -1 (the minimum value of our loss) very quickly with a
different dataset and a different backbone architecture, that is likely because of
*representation collapse*. It is a phenomenon where the encoder yields similar outputs for
all the images. In that case, additional hyperparameter tuning is required, especially in
the following areas:
* Strength of the color distortions and their probabilities.
* Learning rate and its schedule.
* Architecture of both the backbone and their projection head.
"""
"""
## Evaluating our SSL method
The most commonly used method to evaluate an SSL method in computer vision (or any other
pre-training method as such) is to learn a linear classifier on the frozen features of
the trained backbone model (in this case it is ResNet20) and evaluate the classifier on
unseen images. Other methods include
[fine-tuning](https://keras.io/guides/transfer_learning/) on the source dataset or even a
target dataset with 5% or 10% labels present. Practically, we can use the backbone model
for any downstream task such as semantic segmentation, object detection, and so on where
the backbone models are usually pre-trained with *pure supervised learning*.
"""
# We first create labeled `Dataset` objects.
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
# Then we shuffle, batch, and prefetch this dataset for performance. We
# also apply random resized crops as an augmentation but only to the
# training set.
train_ds = (
train_ds.shuffle(1024)
.map(lambda x, y: (flip_random_crop(x), y), num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO)
# Extract the backbone ResNet20.
backbone = keras.Model(
simsiam.encoder.input, simsiam.encoder.get_layer("backbone_pool").output
)
# We then create our linear classifier and train it.
backbone.trainable = False
inputs = layers.Input((CROP_TO, CROP_TO, 3))
x = backbone(inputs, training=False)
outputs = layers.Dense(10, activation="softmax")(x)
linear_model = keras.Model(inputs, outputs, name="linear_model")
# Compile model and start training.
linear_model.compile(
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
optimizer=keras.optimizers.SGD(lr_decayed_fn, momentum=0.9),
)
history = linear_model.fit(
train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[early_stopping]
)
_, test_acc = linear_model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))
"""
## Notes
* More data and a longer pre-training schedule benefit SSL in general.
* SSL is particularly helpful when you only have access to very limited *labeled*
training data but can manage to build a large corpus of unlabeled data. Recently,
using an SSL method called [SwAV](https://arxiv.org/abs/2006.09882), a group of
researchers at Facebook trained a [RegNet](https://arxiv.org/abs/2006.09882) on 2 Billion
images. They were able to achieve downstream performance very close to those achieved by
pure supervised pre-training. For some downstream tasks, their method even outperformed
the supervised counterparts. You can check out [their
paper](https://arxiv.org/pdf/2103.01988.pdf) to know the details.
* If you are interested in understanding why contrastive SSL helps networks learn meaningful
representations, you can check out the following resources:
* [Self-supervised learning: The dark matter of
intelligence](https://ai.facebook.com/blog/self-supervised-learning-the-dark-matter-of-intelligence/)
* [Understanding self-supervised learning using controlled datasets with known
structure](https://sslneuips20.github.io/files/CameraReadys%203-77/64/CameraReady/Understanding_self_supervised_learning.pdf)
"""
| keras-io/examples/vision/simsiam.py/0 | {
"file_path": "keras-io/examples/vision/simsiam.py",
"repo_id": "keras-io",
"token_count": 5443
} | 107 |
"""
Title: Working with preprocessing layers
Authors: Francois Chollet, Mark Omernick
Date created: 2020/07/25
Last modified: 2021/04/23
Description: Overview of how to leverage preprocessing layers to create end-to-end models.
Accelerator: GPU
"""
"""
## Keras preprocessing
The Keras preprocessing layers API allows developers to build Keras-native input
processing pipelines. These input processing pipelines can be used as independent
preprocessing code in non-Keras workflows, combined directly with Keras models, and
exported as part of a Keras SavedModel.
With Keras preprocessing layers, you can build and export models that are truly
end-to-end: models that accept raw images or raw structured data as input; models that
handle feature normalization or feature value indexing on their own.
"""
"""
## Available preprocessing
### Text preprocessing
- `tf.keras.layers.TextVectorization`: turns raw strings into an encoded
representation that can be read by an `Embedding` layer or `Dense` layer.
### Numerical features preprocessing
- `tf.keras.layers.Normalization`: performs feature-wise normalization of
input features.
- `tf.keras.layers.Discretization`: turns continuous numerical features
into integer categorical features.
### Categorical features preprocessing
- `tf.keras.layers.CategoryEncoding`: turns integer categorical features
into one-hot, multi-hot, or count dense representations.
- `tf.keras.layers.Hashing`: performs categorical feature hashing, also known as
the "hashing trick".
- `tf.keras.layers.StringLookup`: turns string categorical values into an encoded
representation that can be read by an `Embedding` layer or `Dense` layer.
- `tf.keras.layers.IntegerLookup`: turns integer categorical values into an
encoded representation that can be read by an `Embedding` layer or `Dense`
layer.
### Image preprocessing
These layers are for standardizing the inputs of an image model.
- `tf.keras.layers.Resizing`: resizes a batch of images to a target size.
- `tf.keras.layers.Rescaling`: rescales and offsets the values of a batch of
images (e.g. go from inputs in the `[0, 255]` range to inputs in the `[0, 1]`
range).
- `tf.keras.layers.CenterCrop`: returns a center crop of a batch of images.
### Image data augmentation
These layers apply random augmentation transforms to a batch of images. They
are only active during training.
- `tf.keras.layers.RandomCrop`
- `tf.keras.layers.RandomFlip`
- `tf.keras.layers.RandomTranslation`
- `tf.keras.layers.RandomRotation`
- `tf.keras.layers.RandomZoom`
- `tf.keras.layers.RandomContrast`
"""
"""
## The `adapt()` method
Some preprocessing layers have an internal state that can be computed based on
a sample of the training data. The list of stateful preprocessing layers is:
- `TextVectorization`: holds a mapping between string tokens and integer indices
- `StringLookup` and `IntegerLookup`: hold a mapping between input values and integer
indices.
- `Normalization`: holds the mean and standard deviation of the features.
- `Discretization`: holds information about value bucket boundaries.
Crucially, these layers are **non-trainable**. Their state is not set during training; it
must be set **before training**, either by initializing them from a precomputed constant,
or by "adapting" them on data.
You set the state of a preprocessing layer by exposing it to training data, via the
`adapt()` method:
"""
import numpy as np
import tensorflow as tf
import keras
from keras import layers
data = np.array(
[
[0.1, 0.2, 0.3],
[0.8, 0.9, 1.0],
[1.5, 1.6, 1.7],
]
)
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)
print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f" % (normalized_data.numpy().std()))
"""
The `adapt()` method takes either a Numpy array or a `tf.data.Dataset` object. In the
case of `StringLookup` and `TextVectorization`, you can also pass a list of strings:
"""
data = [
    "ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
    "γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
    "δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
    "αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
    "τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
    "οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:",
    "οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,",
    "οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.",
]
layer = layers.TextVectorization()
layer.adapt(data)
vectorized_text = layer(data)
print(vectorized_text)
"""
In addition, adaptable layers always expose an option to directly set state via
constructor arguments or weight assignment. If the intended state values are known at
layer construction time, or are calculated outside of the `adapt()` call, they can be set
without relying on the layer's internal computation. For instance, if external vocabulary
files for the `TextVectorization`, `StringLookup`, or `IntegerLookup` layers already
exist, those can be loaded directly into the lookup tables by passing a path to the
vocabulary file in the layer's constructor arguments.
Here's an example where you instantiate a `StringLookup` layer with precomputed vocabulary:
"""
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)
"""
## Preprocessing data before the model or inside the model
There are two ways you could be using preprocessing layers:
**Option 1:** Make them part of the model, like this:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = rest_of_the_model(x)
model = keras.Model(inputs, outputs)
```
With this option, preprocessing will happen on device, synchronously with the rest of the
model execution, meaning that it will benefit from GPU acceleration.
If you're training on a GPU, this is the best option for the `Normalization` layer, and for
all image preprocessing and data augmentation layers.
**Option 2:** apply it to your `tf.data.Dataset`, so as to obtain a dataset that yields
batches of preprocessed data, like this:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
```
With this option, your preprocessing will happen on a CPU, asynchronously, and will be
buffered before going into the model.
In addition, if you call `dataset.prefetch(tf.data.AUTOTUNE)` on your dataset,
the preprocessing will happen efficiently in parallel with training:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
dataset = dataset.prefetch(tf.data.AUTOTUNE)
model.fit(dataset, ...)
```
This is the best option for `TextVectorization`, and all structured data preprocessing
layers. It can also be a good option if you're training on a CPU and you use image preprocessing
layers.
Note that the `TextVectorization` layer can only be executed on a CPU, as it is mostly a
dictionary lookup operation. Therefore, if you are training your model on a GPU or a TPU,
you should put the `TextVectorization` layer in the `tf.data` pipeline to get the best performance.
**When running on a TPU, you should always place preprocessing layers in the `tf.data` pipeline**
(with the exception of `Normalization` and `Rescaling`, which run fine on a TPU and are commonly
used as the first layer in an image model).
"""
"""
## Benefits of doing preprocessing inside the model at inference time
Even if you go with option 2, you may later want to export an inference-only end-to-end
model that will include the preprocessing layers. The key benefit to doing this is that
**it makes your model portable** and it **helps reduce the
[training/serving skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew)**.
When all data preprocessing is part of the model, other people can load and use your
model without having to be aware of how each feature is expected to be encoded &
normalized. Your inference model will be able to process raw images or raw structured
data, and will not require users of the model to be aware of the details of e.g. the
tokenization scheme used for text, the indexing scheme used for categorical features,
whether image pixel values are normalized to `[-1, +1]` or to `[0, 1]`, etc. This is
especially powerful if you're exporting
your model to another runtime, such as TensorFlow.js: you won't have to
reimplement your preprocessing pipeline in JavaScript.
If you initially put your preprocessing layers in your `tf.data` pipeline,
you can export an inference model that packages the preprocessing.
Simply instantiate a new model that chains
your preprocessing layers and your training model:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = training_model(x)
inference_model = keras.Model(inputs, outputs)
```
"""
"""
## Preprocessing during multi-worker training
Preprocessing layers are compatible with the
[tf.distribute](https://www.tensorflow.org/api_docs/python/tf/distribute) API
for running training across multiple machines.
In general, preprocessing layers should be placed inside a `tf.distribute.Strategy.scope()`
and called either inside or before the model as discussed above.
```python
with strategy.scope():
inputs = keras.Input(shape=input_shape)
preprocessing_layer = tf.keras.layers.Hashing(10)
dense_layer = tf.keras.layers.Dense(16)
```
For more details, refer to the _Data preprocessing_ section
of the [Distributed input](https://www.tensorflow.org/tutorials/distribute/input)
tutorial.
"""
"""
## Quick recipes
### Image data augmentation
Note that image data augmentation layers are only active during training (similarly to
the `Dropout` layer).
"""
from tensorflow import keras
from tensorflow.keras import layers
# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
input_shape = x_train.shape[1:]
classes = 10
# Create a tf.data pipeline of augmented images (and their labels)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(16).map(lambda x, y: (data_augmentation(x), y))
# Create a model and train it on the augmented image data
inputs = keras.Input(shape=input_shape)
x = layers.Rescaling(1.0 / 255)(inputs) # Rescale inputs
outputs = keras.applications.ResNet50( # Add the rest of the model
weights=None, input_shape=input_shape, classes=classes
)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.fit(train_dataset, steps_per_epoch=5)
"""
You can see a similar setup in action in the example
[image classification from scratch](https://keras.io/examples/vision/image_classification_from_scratch/).
"""
"""
### Normalizing numerical features
"""
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10
# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)
# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
"""
### Encoding string categorical features via one-hot encoding
"""
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])
# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
"""
Note that, here, index 0 is reserved for out-of-vocabulary values
(values that were not seen during `adapt()`).
You can see the `StringLookup` in action in the
[Structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/)
example.
"""
"""
### Encoding integer categorical features via one-hot encoding
"""
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])
# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
"""
Note that index 0 is reserved for missing values (which you should specify as the value
0), and index 1 is reserved for out-of-vocabulary values (values that were not seen
during `adapt()`). You can configure this by using the `mask_token` and `oov_token`
constructor arguments of `IntegerLookup`.
You can see the `IntegerLookup` in action in the example
[structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/).
"""
"""
### Applying the hashing trick to an integer categorical feature
If you have a categorical feature that can take many different values (on the order of
10e3 or higher), where each value only appears a few times in the data,
it becomes impractical and ineffective to index and one-hot encode the feature values.
Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector
of fixed size. This keeps the size of the feature space manageable, and removes the need
for explicit indexing.
"""
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))
# Use the Hashing layer to hash the values into one of 64 bins (range [0, 64))
hasher = layers.Hashing(num_bins=64, salt=1337)
# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder = layers.CategoryEncoding(num_tokens=64, output_mode="multi_hot")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
"""
### Encoding text as a sequence of token indices
This is how you should preprocess text to be passed to an `Embedding` layer.
"""
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Create a TextVectorization layer
text_vectorizer = layers.TextVectorization(output_mode="int")
# Index the vocabulary via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n",
text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=text_vectorizer.vocabulary_size(), output_dim=16)(inputs)
x = layers.GRU(8)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
"""
You can see the `TextVectorization` layer in action, combined with an `Embedding` layer,
in the example
[text classification from scratch](https://keras.io/examples/nlp/text_classification_from_scratch/).
Note that when training such a model, for best performance, you should always
use the `TextVectorization` layer as part of the input pipeline.
"""
"""
### Encoding text as a dense matrix of N-grams with multi-hot encoding
This is how you should preprocess text to be passed to a `Dense` layer.
"""
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "multi_hot" output_mode
# and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="multi_hot", ngrams=2)
# Index the bigrams via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n",
text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into multi-hot encoded N-gram vectors
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
"""
### Encoding text as a dense matrix of N-grams with TF-IDF weighting
This is an alternative way of preprocessing text before passing it to a `Dense` layer.
"""
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n",
text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into TF-IDF weighted N-gram vectors
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
"""
## Important gotchas
### Working with lookup layers with very large vocabularies
You may find yourself working with a very large vocabulary in a `TextVectorization`, a `StringLookup` layer,
or an `IntegerLookup` layer. Typically, a vocabulary larger than 500MB would be considered "very large".
In such a case, for best performance, you should avoid using `adapt()`.
Instead, pre-compute your vocabulary in advance
(you could use Apache Beam or TF Transform for this)
and store it in a file. Then load the vocabulary into the layer at construction
time by passing the file path as the `vocabulary` argument.
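For instance (a minimal sketch; `vocabulary.txt` is a hypothetical precomputed file
with one token per line):
```python
lookup = tf.keras.layers.StringLookup(vocabulary="vocabulary.txt")
```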
### Using lookup layers on a TPU pod or with `ParameterServerStrategy`.
There is an outstanding issue that causes performance to degrade when using
a `TextVectorization`, `StringLookup`, or `IntegerLookup` layer while
training on a TPU pod or on multiple machines via `ParameterServerStrategy`.
This is slated to be fixed in TensorFlow 2.7.
"""
| keras-io/guides/_preprocessing_layers.py/0 | {
"file_path": "keras-io/guides/_preprocessing_layers.py",
"repo_id": "keras-io",
"token_count": 6969
} | 108 |
<jupyter_start><jupyter_text>Customizing Saving and Serialization**Author:** Neel Kovelamudi**Date created:** 2023/03/15**Last modified:** 2023/03/15**Description:** A more advanced guide on customizing saving for your layers and models. IntroductionThis guide covers advanced methods that can be customized in Keras saving. For mostusers, the methods outlined in the primary[Serialize, save, and export guide](https://keras.io/guides/serialization_and_saving)are sufficient. APIsWe will cover the following APIs:- `save_assets()` and `load_assets()`- `save_own_variables()` and `load_own_variables()`- `get_build_config()` and `build_from_config()`- `get_compile_config()` and `compile_from_config()`When restoring a model, these get executed in the following order:- `build_from_config()`- `compile_from_config()`- `load_own_variables()`- `load_assets()` Setup<jupyter_code>import os
import numpy as np
import keras<jupyter_output><empty_output><jupyter_text>State saving customizationThese methods determine how the state of your model's layers is saved when calling`model.save()`. You can override them to take full control of the state saving process. `save_own_variables()` and `load_own_variables()`These methods save and load the state variables of the layer when `model.save()` and`keras.models.load_model()` are called, respectively. By default, the state variablessaved and loaded are the weights of the layer (both trainable and non-trainable). Here isthe default implementation of `save_own_variables()`:```pythondef save_own_variables(self, store): all_vars = self._trainable_weights + self._non_trainable_weights for i, v in enumerate(all_vars): store[f"{i}"] = v.numpy()```The store used by these methods is a dictionary that can be populated with the layervariables. Let's take a look at an example customizing this.**Example:**<jupyter_code>@keras.utils.register_keras_serializable(package="my_custom_package")
class LayerWithCustomVariable(keras.layers.Dense):
def __init__(self, units, **kwargs):
super().__init__(units, **kwargs)
self.my_variable = keras.Variable(
np.random.random((units,)), name="my_variable", dtype="float32"
)
def save_own_variables(self, store):
super().save_own_variables(store)
# Stores the value of the variable upon saving
store["variables"] = self.my_variable.numpy()
def load_own_variables(self, store):
# Assigns the value of the variable upon loading
self.my_variable.assign(store["variables"])
# Load the remaining weights
for i, v in enumerate(self.weights):
v.assign(store[f"{i}"])
# Note: You must specify how all variables (including layer weights)
# are loaded in `load_own_variables.`
def call(self, inputs):
dense_out = super().call(inputs)
return dense_out + self.my_variable
model = keras.Sequential([LayerWithCustomVariable(1)])
ref_input = np.random.random((8, 10))
ref_output = np.random.random((8, 10))
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(ref_input, ref_output)
model.save("custom_vars_model.keras")
restored_model = keras.models.load_model("custom_vars_model.keras")
np.testing.assert_allclose(
model.layers[0].my_variable.numpy(),
restored_model.layers[0].my_variable.numpy(),
)<jupyter_output><empty_output><jupyter_text>`save_assets()` and `load_assets()`These methods can be added to your model class definition to store and load anyadditional information that your model needs.For example, NLP domain layers such as TextVectorization layers and IndexLookup layersmay need to store their associated vocabulary (or lookup table) in a text file uponsaving.Let's take at the basics of this workflow with a simple file `assets.txt`.**Example:**<jupyter_code>@keras.saving.register_keras_serializable(package="my_custom_package")
class LayerWithCustomAssets(keras.layers.Dense):
def __init__(self, vocab=None, *args, **kwargs):
super().__init__(*args, **kwargs)
self.vocab = vocab
def save_assets(self, inner_path):
# Writes the vocab (sentence) to text file at save time.
with open(os.path.join(inner_path, "vocabulary.txt"), "w") as f:
f.write(self.vocab)
def load_assets(self, inner_path):
# Reads the vocab (sentence) from text file at load time.
with open(os.path.join(inner_path, "vocabulary.txt"), "r") as f:
text = f.read()
self.vocab = text.replace("<unk>", "little")
model = keras.Sequential(
[LayerWithCustomAssets(vocab="Mary had a <unk> lamb.", units=5)]
)
x = np.random.random((10, 10))
y = model(x)
model.save("custom_assets_model.keras")
restored_model = keras.models.load_model("custom_assets_model.keras")
np.testing.assert_string_equal(
restored_model.layers[0].vocab, "Mary had a little lamb."
)<jupyter_output><empty_output><jupyter_text>`build` and `compile` saving customization `get_build_config()` and `build_from_config()`These methods work together to save the layer's built states and restore them uponloading.By default, this only includes a build config dictionary with the layer's input shape,but overriding these methods can be used to include further Variables and Lookup Tablesthat can be useful to restore for your built model.**Example:**<jupyter_code>@keras.saving.register_keras_serializable(package="my_custom_package")
class LayerWithCustomBuild(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super().__init__(**kwargs)
self.units = units
def call(self, inputs):
return keras.ops.matmul(inputs, self.w) + self.b
def get_config(self):
return dict(units=self.units, **super().get_config())
def build(self, input_shape, layer_init):
# Note the overriding of `build()` to add an extra argument.
# Therefore, we will need to manually call build with `layer_init` argument
# before the first execution of `call()`.
super().build(input_shape)
self._input_shape = input_shape
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer=layer_init,
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,),
initializer=layer_init,
trainable=True,
)
self.layer_init = layer_init
def get_build_config(self):
build_config = {
"layer_init": self.layer_init,
"input_shape": self._input_shape,
} # Stores our initializer for `build()`
return build_config
def build_from_config(self, config):
# Calls `build()` with the parameters at loading time
self.build(config["input_shape"], config["layer_init"])
custom_layer = LayerWithCustomBuild(units=16)
custom_layer.build(input_shape=(8,), layer_init="random_normal")
model = keras.Sequential(
[
custom_layer,
keras.layers.Dense(1, activation="sigmoid"),
]
)
x = np.random.random((16, 8))
y = model(x)
model.save("custom_build_model.keras")
restored_model = keras.models.load_model("custom_build_model.keras")
np.testing.assert_equal(restored_model.layers[0].layer_init, "random_normal")
np.testing.assert_equal(restored_model.built, True)<jupyter_output><empty_output><jupyter_text>`get_compile_config()` and `compile_from_config()`These methods work together to save the information with which the model was compiled(optimizers, losses, etc.) and restore and re-compile the model with this information.Overriding these methods can be useful for compiling the restored model with customoptimizers, custom losses, etc., as these will need to be deserialized prior to calling`model.compile` in `compile_from_config()`.Let's take a look at an example of this.**Example:**<jupyter_code>@keras.saving.register_keras_serializable(package="my_custom_package")
def small_square_sum_loss(y_true, y_pred):
loss = keras.ops.square(y_pred - y_true)
loss = loss / 10.0
loss = keras.ops.sum(loss, axis=1)
return loss
@keras.saving.register_keras_serializable(package="my_custom_package")
def mean_pred(y_true, y_pred):
return keras.ops.mean(y_pred)
@keras.saving.register_keras_serializable(package="my_custom_package")
class ModelWithCustomCompile(keras.Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.dense1 = keras.layers.Dense(8, activation="relu")
self.dense2 = keras.layers.Dense(4, activation="softmax")
def call(self, inputs):
x = self.dense1(inputs)
return self.dense2(x)
def compile(self, optimizer, loss_fn, metrics):
super().compile(optimizer=optimizer, loss=loss_fn, metrics=metrics)
self.model_optimizer = optimizer
self.loss_fn = loss_fn
self.loss_metrics = metrics
def get_compile_config(self):
# These parameters will be serialized at saving time.
return {
"model_optimizer": self.model_optimizer,
"loss_fn": self.loss_fn,
"metric": self.loss_metrics,
}
def compile_from_config(self, config):
# Deserializes the compile parameters (important, since many are custom)
optimizer = keras.utils.deserialize_keras_object(config["model_optimizer"])
loss_fn = keras.utils.deserialize_keras_object(config["loss_fn"])
metrics = keras.utils.deserialize_keras_object(config["metric"])
# Calls compile with the deserialized parameters
self.compile(optimizer=optimizer, loss_fn=loss_fn, metrics=metrics)
model = ModelWithCustomCompile()
model.compile(
optimizer="SGD", loss_fn=small_square_sum_loss, metrics=["accuracy", mean_pred]
)
x = np.random.random((4, 8))
y = np.random.random((4,))
model.fit(x, y)
model.save("custom_compile_model.keras")
restored_model = keras.models.load_model("custom_compile_model.keras")
np.testing.assert_equal(model.model_optimizer, restored_model.model_optimizer)
np.testing.assert_equal(model.loss_fn, restored_model.loss_fn)
np.testing.assert_equal(model.loss_metrics, restored_model.loss_metrics)<jupyter_output><empty_output> | keras-io/guides/ipynb/customizing_saving_and_serialization.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/customizing_saving_and_serialization.ipynb",
"repo_id": "keras-io",
"token_count": 3757
} | 109 |
<jupyter_start><jupyter_text>The Functional API**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2023/06/25**Description:** Complete guide to the functional API. Setup<jupyter_code>import numpy as np
import keras
from keras import layers
from keras import ops<jupyter_output><empty_output><jupyter_text>IntroductionThe Keras *functional API* is a way to create models that are more flexiblethan the `keras.Sequential` API. The functional API can handle modelswith non-linear topology, shared layers, and even multiple inputs or outputs.The main idea is that a deep learning model is usuallya directed acyclic graph (DAG) of layers.So the functional API is a way to build *graphs of layers*.Consider the following model:```(input: 784-dimensional vectors) โง[Dense (64 units, relu activation)] โง[Dense (64 units, relu activation)] โง[Dense (10 units, softmax activation)] โง(output: logits of a probability distribution over 10 classes)```This is a basic graph with three layers.To build this model using the functional API, start by creating an input node:<jupyter_code>inputs = keras.Input(shape=(784,))<jupyter_output><empty_output><jupyter_text>The shape of the data is set as a 784-dimensional vector.The batch size is always omitted since only the shape of each sample is specified.If, for example, you have an image input with a shape of `(32, 32, 3)`,you would use:<jupyter_code># Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))<jupyter_output><empty_output><jupyter_text>The `inputs` that is returned contains information about the shape and `dtype`of the input data that you feed to your model.Here's the shape:<jupyter_code>inputs.shape<jupyter_output><empty_output><jupyter_text>Here's the dtype:<jupyter_code>inputs.dtype<jupyter_output><empty_output><jupyter_text>You create a new node in the graph of layers by calling a layer on this `inputs`object:<jupyter_code>dense = layers.Dense(64, activation="relu")
x = dense(inputs)<jupyter_output><empty_output><jupyter_text>The "layer call" action is like drawing an arrow from "inputs" to this layeryou created.You're "passing" the inputs to the `dense` layer, and you get `x` as the output.Let's add a few more layers to the graph of layers:<jupyter_code>x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)<jupyter_output><empty_output><jupyter_text>At this point, you can create a `Model` by specifying its inputs and outputsin the graph of layers:<jupyter_code>model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")<jupyter_output><empty_output><jupyter_text>Let's check out what the model summary looks like:<jupyter_code>model.summary()<jupyter_output><empty_output><jupyter_text>You can also plot the model as a graph:<jupyter_code>keras.utils.plot_model(model, "my_first_model.png")<jupyter_output><empty_output><jupyter_text>And, optionally, display the input and output shapes of each layerin the plotted graph:<jupyter_code>keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True)<jupyter_output><empty_output><jupyter_text>This figure and the code are almost identical. In the code version,the connection arrows are replaced by the call operation.A "graph of layers" is an intuitive mental image for a deep learning model,and the functional API is a way to create models that closely mirrors this. Training, evaluation, and inferenceTraining, evaluation, and inference work exactly in the same way for modelsbuilt using the functional API as for `Sequential` models.The `Model` class offers a built-in training loop (the `fit()` method)and a built-in evaluation loop (the `evaluate()` method). Notethat you can easily customize these loops to implement your own training routines.See also the guides on customizing what happens in `fit()`:- [Writing a custom train step with TensorFlow](/guides/custom_train_step_in_tensorflow/)- [Writing a custom train step with JAX](/guides/custom_train_step_in_jax/)- [Writing a custom train step with PyTorch](/guides/custom_train_step_in_torch/)Here, load the MNIST image data, reshape it into vectors,fit the model on the data (while monitoring performance on a validation split),then evaluate the model on the test data:<jupyter_code>(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.RMSprop(),
metrics=["accuracy"],
)
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])<jupyter_output><empty_output><jupyter_text>For further reading, see the[training and evaluation](/guides/training_with_built_in_methods/) guide. Save and serializeSaving the model and serialization work the same way for models built usingthe functional API as they do for `Sequential` models. The standard wayto save a functional model is to call `model.save()`to save the entire model as a single file. You can later recreate the same modelfrom this file, even if the code that built the model is no longer available.This saved file includes the:- model architecture- model weight values (that were learned during training)- model training config, if any (as passed to `compile()`)- optimizer and its state, if any (to restart training where you left off)<jupyter_code>model.save("my_model.keras")
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model("my_model.keras")<jupyter_output><empty_output><jupyter_text>For details, read the model [serialization & saving]( /guides/serialization_and_saving/) guide. Use the same graph of layers to define multiple modelsIn the functional API, models are created by specifying their inputsand outputs in a graph of layers. That means that a singlegraph of layers can be used to generate multiple models.In the example below, you use the same stack of layers to instantiate two models:an `encoder` model that turns image inputs into 16-dimensional vectors,and an end-to-end `autoencoder` model for training.<jupyter_code>encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()<jupyter_output><empty_output><jupyter_text>Here, the decoding architecture is strictly symmetricalto the encoding architecture, so the output shape is the same asthe input shape `(28, 28, 1)`.The reverse of a `Conv2D` layer is a `Conv2DTranspose` layer,and the reverse of a `MaxPooling2D` layer is an `UpSampling2D` layer. All models are callable, just like layersYou can treat any model as if it were a layer by invoking it on an `Input` oron the output of another layer. By calling a model you aren't just reusingthe architecture of the model, you're also reusing its weights.To see this in action, here's a different take on the autoencoder example thatcreates an encoder model, a decoder model, and chains them in two callsto obtain the autoencoder model:<jupyter_code>encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
autoencoder.summary()<jupyter_output><empty_output><jupyter_text>As you can see, the model can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is *ensembling*. For example, here's how to ensemble a set of models into a single model that averages their predictions:<jupyter_code>def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1)(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)<jupyter_output><empty_output><jupyter_text>Manipulate complex graph topologies Models with multiple inputs and outputsThe functional API makes it easy to manipulate multiple inputs and outputs.This cannot be handled with the `Sequential` API.For example, if you're building a system for ranking customer issue tickets bypriority and routing them to the correct department,then the model will have three inputs:- the title of the ticket (text input),- the text body of the ticket (text input), and- any tags added by the user (categorical input)This model will have two outputs:- the priority score between 0 and 1 (scalar sigmoid output), and- the department that should handle the ticket (softmax outputover the set of departments).You can build this model in a few lines with the functional API:<jupyter_code>num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs={"priority": priority_pred, "department": department_pred},
)<jupyter_output><empty_output><jupyter_text>Now plot the model:<jupyter_code>keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)<jupyter_output><empty_output><jupyter_text>When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss -- to modulate their contribution to the total training loss.<jupyter_code>model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
keras.losses.CategoricalCrossentropy(from_logits=True),
],
loss_weights=[1.0, 0.2],
)<jupyter_output><empty_output><jupyter_text>Since the output layers have different names, you could also specify the losses and loss weights with the corresponding layer names:<jupyter_code>model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"priority": keras.losses.BinaryCrossentropy(from_logits=True),
"department": keras.losses.CategoricalCrossentropy(from_logits=True),
},
loss_weights={"priority": 1.0, "department": 0.2},
)<jupyter_output><empty_output><jupyter_text>Train the model by passing lists of NumPy arrays of inputs and targets:<jupyter_code># Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit(
{"title": title_data, "body": body_data, "tags": tags_data},
{"priority": priority_targets, "department": dept_targets},
epochs=2,
batch_size=32,
)<jupyter_output><empty_output><jupyter_text>When calling fit with a `Dataset` object, it should yield either atuple of lists like `([title_data, body_data, tags_data], [priority_targets, dept_targets])`or a tuple of dictionaries like`({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})`.For more detailed explanation, refer to the[training and evaluation](/guides/training_with_built_in_methods/) guide. A toy ResNet modelIn addition to models with multiple inputs and outputs,the functional API makes it easy to manipulate non-linear connectivitytopologies -- these are models with layers that are not connected sequentially,which the `Sequential` API cannot handle.A common use case for this is residual connections.Let's build a toy ResNet model for CIFAR10 to demonstrate this:<jupyter_code>inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation="relu")(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()<jupyter_output><empty_output><jupyter_text>Plot the model:<jupyter_code>keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)<jupyter_output><empty_output><jupyter_text>Now train the model:<jupyter_code>(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["acc"],
)
# We restrict the data to the first 1000 samples so as to limit execution time
# on Colab. Try to train on the entire dataset until convergence!
model.fit(
x_train[:1000],
y_train[:1000],
batch_size=64,
epochs=1,
validation_split=0.2,
)<jupyter_output><empty_output><jupyter_text>Shared layersAnother good use for the functional API are models that use *shared layers*.Shared layers are layer instances that are reused multiple times in the same model --they learn features that correspond to multiple paths in the graph-of-layers.Shared layers are often used to encode inputs from similar spaces(say, two different pieces of text that feature similar vocabulary).They enable sharing of information across these different inputs,and they make it possible to train such a model on less data.If a given word is seen in one of the inputs,that will benefit the processing of all inputs that pass through the shared layer.To share a layer in the functional API, call the same layer instance multiple times.For instance, here's an `Embedding` layer shared across two different text inputs:<jupyter_code># Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype="int32")
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype="int32")
# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)<jupyter_output><empty_output><jupyter_text>Extract and reuse nodes in the graph of layersBecause the graph of layers you are manipulating is a static data structure,it can be accessed and inspected. And this is how you are able to plotfunctional models as images.This also means that you can access the activations of intermediate layers("nodes" in the graph) and reuse them elsewhere --which is very useful for something like feature extraction.Let's look at an example. This is a VGG19 model with weights pretrained on ImageNet:<jupyter_code>vgg19 = keras.applications.VGG19()<jupyter_output><empty_output><jupyter_text>And these are the intermediate activations of the model,obtained by querying the graph data structure:<jupyter_code>features_list = [layer.output for layer in vgg19.layers]<jupyter_output><empty_output><jupyter_text>Use these features to create a new feature-extraction model that returnsthe values of the intermediate layer activations:<jupyter_code>feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype("float32")
extracted_features = feat_extraction_model(img)<jupyter_output><empty_output><jupyter_text>This comes in handy for tasks like[neural style transfer](https://keras.io/examples/generative/neural_style_transfer/),among other things. Extend the API using custom layers`keras` includes a wide range of built-in layers, for example:- Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`- Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`- RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`- `BatchNormalization`, `Dropout`, `Embedding`, etc.But if you don't find what you need, it's easy to extend the API by creatingyour own layers. All layers subclass the `Layer` class and implement:- `call` method, that specifies the computation done by the layer.- `build` method, that creates the weights of the layer (this is just a styleconvention since you can create weights in `__init__`, as well).To learn more about creating layers from scratch, read[custom layers and models](/guides/making_new_layers_and_models_via_subclassing) guide.The following is a basic implementation of `keras.layers.Dense`:<jupyter_code>class CustomDense(layers.Layer):
def __init__(self, units=32):
super().__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)<jupyter_output><empty_output><jupyter_text>For serialization support in your custom layer, define a `get_config()` method that returns the constructor arguments of the layer instance:<jupyter_code>class CustomDense(layers.Layer):
def __init__(self, units=32):
super().__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config, custom_objects={"CustomDense": CustomDense})<jupyter_output><empty_output><jupyter_text>Optionally, implement the class method `from_config(cls, config)` which is usedwhen recreating a layer instance given its config dictionary.The default implementation of `from_config` is:```pythondef from_config(cls, config): return cls(**config)``` When to use the functional APIShould you use the Keras functional API to create a new model,or just subclass the `Model` class directly? In general, the functional APIis higher-level, easier and safer, and has a number offeatures that subclassed models do not support.However, model subclassing provides greater flexibility when building modelsthat are not easily expressible as directed acyclic graphs of layers.For example, you could not implement a Tree-RNN with the functional APIand would have to subclass `Model` directly.For an in-depth look at the differences between the functional API andmodel subclassing, read[What are Symbolic and Imperative APIs in TensorFlow 2.0?](https://blog.tensorflow.org/2019/01/what-are-symbolic-and-imperative-apis.html). Functional API strengths:The following properties are also true for Sequential models(which are also data structures), but are not true for subclassed models(which are Python bytecode, not data structures). Less verboseThere is no `super().__init__(...)`, no `def call(self, ...):`, etc.Compare:```pythoninputs = keras.Input(shape=(32,))x = layers.Dense(64, activation='relu')(inputs)outputs = layers.Dense(10)(x)mlp = keras.Model(inputs, outputs)```With the subclassed version:```pythonclass MLP(keras.Model): def __init__(self, **kwargs): super().__init__(**kwargs) self.dense_1 = layers.Dense(64, activation='relu') self.dense_2 = layers.Dense(10) def call(self, inputs): x = self.dense_1(inputs) return self.dense_2(x) Instantiate the model.mlp = MLP() Necessary to create the model's state. The model doesn't have a state until it's called at least once._ = mlp(ops.zeros((1, 32)))``` Model validation while defining its connectivity graphIn the functional API, the input specification (shape and dtype) is createdin advance (using `Input`). Every time you call a layer,the layer checks that the specification passed to it matches its assumptions,and it will raise a helpful error message if not.This guarantees that any model you can build with the functional API will run.All debugging -- other than convergence-related debugging --happens statically during the model construction and not at execution time.This is similar to type checking in a compiler. A functional model is plottable and inspectableYou can plot the model as a graph, and you can easily access intermediate nodesin this graph. For example, to extract and reuse the activations of intermediatelayers (as seen in a previous example):```pythonfeatures_list = [layer.output for layer in vgg19.layers]feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)``` A functional model can be serialized or clonedBecause a functional model is a data structure rather than a piece of code,it is safely serializable and can be saved as a single filethat allows you to recreate the exact same modelwithout having access to any of the original code.See the [serialization & saving guide](/guides/serialization_and_saving/).To serialize a subclassed model, it is necessary for the implementerto specify a `get_config()`and `from_config()` method at the model level. 
Functional API weakness: It does not support dynamic architecturesThe functional API treats models as DAGs of layers.This is true for most deep learning architectures, but not all -- for example,recursive networks or Tree RNNs do not follow this assumption and cannotbe implemented in the functional API. Mix-and-match API stylesChoosing between the functional API or Model subclassing isn't abinary decision that restricts you into one category of models.All models in the `keras` API can interact with each other, whether they're`Sequential` models, functional models, or subclassed models that are writtenfrom scratch.You can always use a functional model or `Sequential` modelas part of a subclassed model or layer:<jupyter_code>units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super().__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = ops.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = ops.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(ops.zeros((1, timesteps, input_dim)))<jupyter_output><empty_output><jupyter_text>You can use any subclassed layer or model in the functional APIas long as it implements a `call` method that follows one of the following patterns:- `call(self, inputs, **kwargs)` --Where `inputs` is a tensor or a nested structure of tensors (e.g. a list of tensors),and where `**kwargs` are non-tensor arguments (non-inputs).- `call(self, inputs, training=None, **kwargs)` --Where `training` is a boolean indicating whether the layer should behavein training mode and inference mode.- `call(self, inputs, mask=None, **kwargs)` --Where `mask` is a boolean mask tensor (useful for RNNs, for instance).- `call(self, inputs, training=None, mask=None, **kwargs)` --Of course, you can have both masking and training-specific behavior at the same time.Additionally, if you implement the `get_config` method on your custom Layer or model,the functional models you create will still be serializable and cloneable.Here's a quick example of a custom RNN, written from scratch,being used in a functional model:<jupyter_code>units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super().__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
self.classifier = layers.Dense(1)
def call(self, inputs):
outputs = []
state = ops.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = ops.stack(outputs, axis=1)
return self.classifier(features)
# Note that you specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when you create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(ops.zeros((1, 10, 5)))<jupyter_output><empty_output> | keras-io/guides/ipynb/keras_core/functional_api.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/keras_core/functional_api.ipynb",
"repo_id": "keras-io",
"token_count": 9516
} | 110 |
<jupyter_start><jupyter_text>High-performance image generation using Stable Diffusion in KerasCV **Authors:** [fchollet](https://twitter.com/fchollet), [lukewood](https://twitter.com/luke_wood_ml), [divamgupta](https://github.com/divamgupta) **Date created:** 2022/09/25 **Last modified:** 2022/09/25 **Description:** Generate new images using KerasCV's Stable Diffusion model. Overview In this guide, we will show how to generate novel images based on a text prompt using the KerasCV implementation of [stability.ai](https://stability.ai/)'s text-to-image model, [Stable Diffusion](https://github.com/CompVis/stable-diffusion). Stable Diffusion is a powerful, open-source text-to-image generation model. While there exist multiple open-source implementations that allow you to easily create images from textual prompts, KerasCV's implementation offers a few distinct advantages. These include [XLA compilation](https://www.tensorflow.org/xla) and [mixed precision](https://www.tensorflow.org/guide/mixed_precision) support, which together achieve state-of-the-art generation speed. In this guide, we will explore KerasCV's Stable Diffusion implementation, show how to use these powerful performance boosts, and explore the performance benefits that they offer. **Note:** To run this guide on the `torch` backend, please set `jit_compile=False` everywhere. XLA compilation for Stable Diffusion does not currently work with torch. To get started, let's install a few dependencies and sort out some imports:<jupyter_code>!pip install -q --upgrade keras-cv
!pip install -q --upgrade keras # Upgrade to Keras 3.
import time
import keras_cv
import keras
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>Introduction Unlike most tutorials, where we first explain a topic then show how to implement it, with text-to-image generation it is easier to show instead of tell. Check out the power of `keras_cv.models.StableDiffusion()`. First, we construct a model:<jupyter_code>model = keras_cv.models.StableDiffusion(
img_width=512, img_height=512, jit_compile=False
)<jupyter_output><empty_output><jupyter_text>Next, we give it a prompt:<jupyter_code>images = model.text_to_image("photograph of an astronaut riding a horse", batch_size=3)
def plot_images(images):
plt.figure(figsize=(20, 20))
for i in range(len(images)):
ax = plt.subplot(1, len(images), i + 1)
plt.imshow(images[i])
plt.axis("off")
plot_images(images)<jupyter_output><empty_output><jupyter_text>Pretty incredible! But that's not all this model can do. Let's try a more complex prompt:<jupyter_code>images = model.text_to_image(
"cute magical flying dog, fantasy art, "
"golden color, high quality, highly detailed, elegant, sharp focus, "
"concept art, character concepts, digital painting, mystery, adventure",
batch_size=3,
)
plot_images(images)<jupyter_output><empty_output><jupyter_text>The possibilities are literally endless (or at least extend to the boundaries ofStable Diffusion's latent manifold). Wait, how does this even work?Unlike what you might expect at this point, Stable Diffusion doesn't actually run on magic.It's a kind of "latent diffusion model". Let's dig into what that means.You may be familiar with the idea of _super-resolution_:it's possible to train a deep learning model to _denoise_ an input image -- and thereby turn it into a higher-resolutionversion. The deep learning model doesn't do this by magically recovering the information that's missing from the noisy, low-resolutioninput -- rather, the model uses its training data distribution to hallucinate the visual details that would be most likelygiven the input. To learn more about super-resolution, you can check out the following Keras.io tutorials:- [Image Super-Resolution using an Efficient Sub-Pixel CNN](https://keras.io/examples/vision/super_resolution_sub_pixel/)- [Enhanced Deep Residual Networks for single-image super-resolution](https://keras.io/examples/vision/edsr/)When you push this idea to the limit, you may start asking -- what if we just run such a model on pure noise?The model would then "denoise the noise" and start hallucinating a brand new image. By repeating the process multipletimes, you can get turn a small patch of noise into an increasingly clear and high-resolution artificial picture.This is the key idea of latent diffusion, proposed in[High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) in 2020.To understand diffusion in depth, you can check the Keras.io tutorial[Denoising Diffusion Implicit Models](https://keras.io/examples/generative/ddim/).Now, to go from latent diffusion to a text-to-image system,you still need to add one key feature: the ability to control the generated visual contents via prompt keywords.This is done via "conditioning", a classic deep learning technique which consists of concatenating to thenoise patch a vector that represents a bit of text, then training the model on a dataset of {image: caption} pairs.This gives rise to the Stable Diffusion architecture. Stable Diffusion consists of three parts:- A text encoder, which turns your prompt into a latent vector.- A diffusion model, which repeatedly "denoises" a 64x64 latent image patch.- A decoder, which turns the final 64x64 latent patch into a higher-resolution 512x512 image.First, your text prompt gets projected into a latent vector space by the text encoder,which is simply a pretrained, frozen language model. 
Then that prompt vector is concatenatedto a randomly generated noise patch, which is repeatedly "denoised" by the diffusion model over a seriesof "steps" (the more steps you run the clearer and nicer your image will be -- the default value is 50 steps).Finally, the 64x64 latent image is sent through the decoder to properly render it in high resolution.All-in-all, it's a pretty simple system -- the Keras implementationfits in four files that represent less than 500 lines of code in total:- [text_encoder.py](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion/text_encoder.py): 87 LOC- [diffusion_model.py](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion/diffusion_model.py): 181 LOC- [decoder.py](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion/decoder.py): 86 LOC- [stable_diffusion.py](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion/stable_diffusion.py): 106 LOCBut this relatively simple system starts looking like magic once you train on billions of pictures and their captions.As Feynman said about the universe: _"It's not complicated, it's just a lot of it!"_ Perks of KerasCVWith several implementations of Stable Diffusion publicly available why should you use`keras_cv.models.StableDiffusion`?Aside from the easy-to-use API, KerasCV's Stable Diffusion model comeswith some powerful advantages, including:- Graph mode execution- XLA compilation through `jit_compile=True`- Support for mixed precision computationWhen these are combined, the KerasCV Stable Diffusion model runs orders of magnitudefaster than naive implementations. This section shows how to enable all of thesefeatures, and the resulting performance gain yielded from using them.For the purposes of comparison, we ran benchmarks comparing the runtime of the[HuggingFace diffusers](https://github.com/huggingface/diffusers) implementation ofStable Diffusion against the KerasCV implementation.Both implementations were tasked to generate 3 images with a step count of 50 for eachimage. In this benchmark, we used a Tesla T4 GPU.[All of our benchmarks are open source on GitHub, and may be re-run on Colab toreproduce the results.](https://github.com/LukeWood/stable-diffusion-performance-benchmarks)The results from the benchmark are displayed in the table below:| GPU | Model | Runtime ||------------|------------------------|-----------|| Tesla T4 | KerasCV (Warm Start) | **28.97s**|| Tesla T4 | diffusers (Warm Start) | 41.33s || Tesla V100 | KerasCV (Warm Start) | **12.45** || Tesla V100 | diffusers (Warm Start) | 12.72 |30% improvement in execution time on the Tesla T4!. While the improvement is much loweron the V100, we generally expect the results of the benchmark to consistently favor the KerasCVacross all NVIDIA GPUs.For the sake of completeness, both cold-start and warm-start generation times arereported. Cold-start execution time includes the one-time cost of model creation and compilation,and is therefore negligible in a production environment (where you would reuse the same model instancemany times). 
Regardless, here are the cold-start numbers:| GPU | Model | Runtime ||------------|------------------------|---------|| Tesla T4 | KerasCV (Cold Start) | 83.47s || Tesla T4 | diffusers (Cold Start) | 46.27s || Tesla V100 | KerasCV (Cold Start) | 76.43 || Tesla V100 | diffusers (Cold Start) | 13.90 |While the runtime results from running this guide may vary, in our testing the KerasCVimplementation of Stable Diffusion is significantly faster than its PyTorch counterpart.This may be largely attributed to XLA compilation.**Note: The performance benefits of each optimization can varysignificantly between hardware setups.**To get started, let's first benchmark our unoptimized model:<jupyter_code>benchmark_result = []
start = time.time()
images = model.text_to_image(
"A cute otter in a rainbow whirlpool holding shells, watercolor",
batch_size=3,
)
end = time.time()
benchmark_result.append(["Standard", end - start])
plot_images(images)
print(f"Standard model: {(end - start):.2f} seconds")
keras.backend.clear_session() # Clear session to preserve memory.<jupyter_output><empty_output><jupyter_text>Mixed precision "Mixed precision" consists of performing computation using `float16` precision, while storing weights in the `float32` format. This is done to take advantage of the fact that `float16` operations are backed by significantly faster kernels than their `float32` counterparts on modern NVIDIA GPUs. Enabling mixed precision computation in Keras (and therefore for `keras_cv.models.StableDiffusion`) is as simple as calling:<jupyter_code>keras.mixed_precision.set_global_policy("mixed_float16")<jupyter_output><empty_output><jupyter_text>That's all. Out of the box - it just works.<jupyter_code>model = keras_cv.models.StableDiffusion(jit_compile=False)
print("Compute dtype:", model.diffusion_model.compute_dtype)
print(
"Variable dtype:",
model.diffusion_model.variable_dtype,
)<jupyter_output><empty_output><jupyter_text>As you can see, the model constructed above now uses mixed precision computation, leveraging the speed of `float16` operations for computation, while storing variables in `float32` precision.<jupyter_code># Warm up model to run graph tracing before benchmarking.
model.text_to_image("warming up the model", batch_size=3)
start = time.time()
images = model.text_to_image(
"a cute magical flying dog, fantasy art, "
"golden color, high quality, highly detailed, elegant, sharp focus, "
"concept art, character concepts, digital painting, mystery, adventure",
batch_size=3,
)
end = time.time()
benchmark_result.append(["Mixed Precision", end - start])
plot_images(images)
print(f"Mixed precision model: {(end - start):.2f} seconds")
keras.backend.clear_session()<jupyter_output><empty_output><jupyter_text>XLA Compilation TensorFlow and JAX come with the [XLA: Accelerated Linear Algebra](https://www.tensorflow.org/xla) compiler built-in. `keras_cv.models.StableDiffusion` supports a `jit_compile` argument out of the box. Setting this argument to `True` enables XLA compilation, resulting in a significant speed-up. Let's use this below:<jupyter_code># Set back to the default for benchmarking purposes.
keras.mixed_precision.set_global_policy("float32")
model = keras_cv.models.StableDiffusion(jit_compile=True)
# Before we benchmark the model, we run inference once to make sure the TensorFlow
# graph has already been traced.
images = model.text_to_image("An avocado armchair", batch_size=3)
plot_images(images)<jupyter_output><empty_output><jupyter_text>Let's benchmark our XLA model:<jupyter_code>start = time.time()
images = model.text_to_image(
"A cute otter in a rainbow whirlpool holding shells, watercolor",
batch_size=3,
)
end = time.time()
benchmark_result.append(["XLA", end - start])
plot_images(images)
print(f"With XLA: {(end - start):.2f} seconds")
keras.backend.clear_session()<jupyter_output><empty_output><jupyter_text>On an A100 GPU, we get about a 2x speedup. Fantastic! Putting it all together So, how do you assemble the world's most performant stable diffusion inference pipeline (as of September 2022)? With these two lines of code:<jupyter_code>keras.mixed_precision.set_global_policy("mixed_float16")
model = keras_cv.models.StableDiffusion(jit_compile=True)<jupyter_output><empty_output><jupyter_text>And to use it...<jupyter_code># Let's make sure to warm up the model
images = model.text_to_image(
"Teddy bears conducting machine learning research",
batch_size=3,
)
plot_images(images)<jupyter_output><empty_output><jupyter_text>Exactly how fast is it? Let's find out!<jupyter_code>start = time.time()
images = model.text_to_image(
"A mysterious dark stranger visits the great pyramids of egypt, "
"high quality, highly detailed, elegant, sharp focus, "
"concept art, character concepts, digital painting",
batch_size=3,
)
end = time.time()
benchmark_result.append(["XLA + Mixed Precision", end - start])
plot_images(images)
print(f"XLA + mixed precision: {(end - start):.2f} seconds")<jupyter_output><empty_output><jupyter_text>Let's check out the results:<jupyter_code>print("{:<22} {:<22}".format("Model", "Runtime"))
for result in benchmark_result:
name, runtime = result
print("{:<22} {:<22}".format(name, runtime))<jupyter_output><empty_output> | keras-io/guides/ipynb/keras_cv/generate_images_with_stable_diffusion.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/keras_cv/generate_images_with_stable_diffusion.ipynb",
"repo_id": "keras-io",
"token_count": 4062
} | 111 |
# Object Detection with KerasCV
**Author:** [lukewood](https://twitter.com/luke_wood_ml), Ian Stenbit, Tirth Patel<br>
**Date created:** 2023/04/08<br>
**Last modified:** 2023/08/10<br>
**Description:** Train an object detection model with KerasCV.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_cv/object_detection_keras_cv.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/keras_cv/object_detection_keras_cv.py)
KerasCV offers a complete set of production grade APIs to solve object detection
problems.
These APIs include object-detection-specific
data augmentation techniques, Keras native COCO metrics, bounding box format
conversion utilities, visualization tools, pretrained object detection models,
and everything you need to train your own state of the art object detection
models!
Let's give KerasCV's object detection API a spin.
```python
!pip install -q --upgrade keras-cv
!pip install -q --upgrade keras # Upgrade to Keras 3.
```
```python
import os
os.environ["KERAS_BACKEND"] = "jax" # @param ["tensorflow", "jax", "torch"]
from tensorflow import data as tf_data
import tensorflow_datasets as tfds
import keras
import keras_cv
import numpy as np
from keras_cv import bounding_box
import os
from keras_cv import visualization
import tqdm
```
---
## Object detection introduction
Object detection is the process of identifying, classifying,
and localizing objects within a given image. Typically, your inputs are
images, and your labels are bounding boxes with optional class
labels.
Object detection can be thought of as an extension of classification; however,
instead of one class label for the image, you must detect and localize an
arbitrary number of classes.
**For example:**
<img width="300" src="https://i.imgur.com/8xSEbQD.png">
The data for the above image may look something like this:
```python
image = [height, width, 3]
bounding_boxes = {
"classes": [0], # 0 is an arbitrary class ID representing "cat"
"boxes": [[0.25, 0.4, .15, .1]]
# bounding box is in "rel_xywh" format
# so 0.25 represents the start of the bounding box 25% of
# the way across the image.
# The .15 represents that the width is 15% of the image width.
}
```
Since the inception of [*You Only Look Once*](https://arxiv.org/abs/1506.02640)
(aka YOLO),
object detection has primarily been solved using deep learning.
Most deep learning architectures do this by cleverly framing the object detection
problem as a combination of many small classification problems and
many regression problems.
More specifically, this is done by generating many anchor boxes of varying
shapes and sizes across the input images and assigning them each a class label,
as well as `x`, `y`, `width` and `height` offsets.
The model is trained to predict the class labels of each box, as well as the
`x`, `y`, `width`, and `height` offsets of each box that is predicted to be an
object.
**Visualization of some sample anchor boxes**:
<img width="400" src="https://i.imgur.com/cJIuiK9.jpg">
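To make the anchor idea concrete, here is a minimal, self-contained sketch of how
such a grid of anchors might be generated for a single feature map level. The
`stride`, `scales`, and `ratios` values below are purely illustrative -- they are
not the settings used by any particular KerasCV model.

```python
import numpy as np


def generate_anchor_boxes(image_size=640, stride=32, scales=(32, 64), ratios=(0.5, 1.0, 2.0)):
    """Returns a dense grid of anchors in `xywh` format."""
    centers = np.arange(stride / 2, image_size, stride)
    anchors = []
    for cy in centers:
        for cx in centers:
            for scale in scales:
                for ratio in ratios:
                    w = scale * np.sqrt(ratio)
                    h = scale / np.sqrt(ratio)
                    # Store the top-left corner plus width/height, matching `xywh`.
                    anchors.append([cx - w / 2, cy - h / 2, w, h])
    return np.array(anchors)


print(generate_anchor_boxes().shape)  # (2400, 4): a 20x20 grid with 6 anchors per cell
```

During training, each ground-truth box is matched against anchors like these, and the
model learns offsets relative to the matched anchors rather than absolute coordinates.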
Object detection is a technically complex problem, but luckily we offer a
bulletproof approach to getting great results.
Let's do this!
---
## Perform detections with a pretrained model

The highest level API in the KerasCV Object Detection API is the `keras_cv.models` API.
This API includes fully pretrained object detection models, such as
`keras_cv.models.YOLOV8Detector`.
Let's get started by constructing a YOLOV8Detector pretrained on the `pascalvoc`
dataset.
```python
pretrained_model = keras_cv.models.YOLOV8Detector.from_preset(
"yolo_v8_m_pascalvoc", bounding_box_format="xywh"
)
```
Notice the `bounding_box_format` argument?
Recall in the section above, the format of bounding boxes:
```
bounding_boxes = {
"classes": [num_boxes],
"boxes": [num_boxes, 4]
}
```
This argument describes *exactly* what format the values in the `"boxes"`
field of the label dictionary take in your pipeline.
For example, a box in `xywh` format with its top left corner at the coordinates
(100, 100), a width of 55, and a height of 75 would be represented by:
```
[100, 100, 55, 75]
```
or equivalently in `xyxy` format:
```
[100, 100, 155, 175]
```
While this may seem simple, it is a critical piece of the KerasCV object
detection API!
Every component that processes bounding boxes requires a
`bounding_box_format` argument.
You can read more about
KerasCV bounding box formats [in the API docs](https://keras.io/api/keras_cv/bounding_box/formats/).
This is done because there is no one correct format for bounding boxes!
Components in different pipelines expect different formats, and so by requiring
them to be specified we ensure that our components remain readable, reusable,
and clear.
Box format conversion bugs are perhaps the most common bug surface in object
detection pipelines - by requiring this parameter we mitigate against these
bugs (especially when combining code from many sources).
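If you ever need to convert between formats yourself, you can use the
`keras_cv.bounding_box.convert_format()` utility (it is also used later in this
guide). The box values below are just dummy data for illustration:

```python
boxes_xywh = np.array([[[100.0, 100.0, 55.0, 75.0]]])  # shape: (batch, num_boxes, 4)
boxes_xyxy = keras_cv.bounding_box.convert_format(
    boxes_xywh, source="xywh", target="xyxy"
)
print(boxes_xyxy)  # values: [[[100., 100., 155., 175.]]]
```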
Next let's load an image:
```python
filepath = keras.utils.get_file(origin="https://i.imgur.com/gCNcJJI.jpg")
image = keras.utils.load_img(filepath)
image = np.array(image)
visualization.plot_image_gallery(
np.array([image]),
value_range=(0, 255),
rows=1,
cols=1,
scale=5,
)
```


To use the `YOLOV8Detector` architecture with a ResNet50 backbone, you'll need to
resize your image to a size that is divisible by 64. This is to ensure
compatibility with the number of downscaling operations done by the convolution
layers in the ResNet.
If the resize operation distorts
the input's aspect ratio, the model will perform significantly poorer. For the
pretrained `"yolo_v8_m_pascalvoc"` preset we are using, the final
`MeanAveragePrecision` on the `pascalvoc/2012` evaluation set drops to `0.15`
from `0.38` when using a naive resizing operation.
Additionally, if you crop to preserve the aspect ratio as you do in classification
your model may entirely miss some bounding boxes. As such, when running inference
on an object detection model we recommend the use of padding to the desired size,
while resizing the longest size to match the aspect ratio.
KerasCV makes resizing properly easy; simply pass `pad_to_aspect_ratio=True` to
a `keras_cv.layers.Resizing` layer.
This can be implemented in one line of code:
```python
inference_resizing = keras_cv.layers.Resizing(
640, 640, pad_to_aspect_ratio=True, bounding_box_format="xywh"
)
```
This can be used as our inference preprocessing pipeline:
```python
image_batch = inference_resizing([image])
```
`keras_cv.visualization.plot_bounding_box_gallery()` supports a `class_mapping`
parameter to highlight what class each box was assigned to. Let's assemble a
class mapping now.
```python
class_ids = [
"Aeroplane",
"Bicycle",
"Bird",
"Boat",
"Bottle",
"Bus",
"Car",
"Cat",
"Chair",
"Cow",
"Dining Table",
"Dog",
"Horse",
"Motorbike",
"Person",
"Potted Plant",
"Sheep",
"Sofa",
"Train",
"Tvmonitor",
"Total",
]
class_mapping = dict(zip(range(len(class_ids)), class_ids))
```
Just like any other `keras.Model` you can predict bounding boxes using the
`model.predict()` API.
```python
y_pred = pretrained_model.predict(image_batch)
# y_pred is a bounding box Tensor:
# {"classes": ..., boxes": ...}
visualization.plot_bounding_box_gallery(
image_batch,
value_range=(0, 255),
rows=1,
cols=1,
y_pred=y_pred,
scale=5,
font_scale=0.7,
bounding_box_format="xywh",
class_mapping=class_mapping,
)
```
<div class="k-default-codeblock">
```
 1/1 ━━━━━━━━━━━━━━━━━━━━ 11s 11s/step
```
</div>


In order to support this easy and intuitive inference workflow, KerasCV
performs non-max suppression inside of the `YOLOV8Detector` class.
Non-max suppression is a traditional computing algorithm that solves the problem
of a model detecting multiple boxes for the same object.
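To build intuition for what the algorithm does, here is a simplified, pure-NumPy
sketch of greedy non-max suppression. It is only meant to illustrate the idea --
in practice you should rely on `keras_cv.layers.NonMaxSuppression` as shown below.
Boxes are assumed to be in `xyxy` format here to keep the IoU math short:

```python
def greedy_nms(boxes, scores, iou_threshold=0.5):
    """boxes: (N, 4) array in `xyxy` format, scores: (N,). Returns kept indices."""
    order = np.argsort(scores)[::-1]  # Highest-confidence boxes first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        # Intersection of the kept box with every remaining box.
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        # Discard remaining boxes that overlap the kept box too heavily.
        order = rest[iou <= iou_threshold]
    return keep
```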
Non-max suppression is a highly configurable algorithm, and in most cases you
will want to customize the settings of your model's non-max
suppression operation.
This can be done by overriding the `prediction_decoder` argument.
To show this concept off, let's temporarily disable non-max suppression on our
YOLOV8Detector. This can be done by writing to the `prediction_decoder` attribute.
```python
# The following NonMaxSuppression layer is equivalent to disabling the operation
prediction_decoder = keras_cv.layers.NonMaxSuppression(
bounding_box_format="xywh",
from_logits=True,
iou_threshold=1.0,
confidence_threshold=0.0,
)
pretrained_model = keras_cv.models.YOLOV8Detector.from_preset(
"yolo_v8_m_pascalvoc",
bounding_box_format="xywh",
prediction_decoder=prediction_decoder,
)
y_pred = pretrained_model.predict(image_batch)
visualization.plot_bounding_box_gallery(
image_batch,
value_range=(0, 255),
rows=1,
cols=1,
y_pred=y_pred,
scale=5,
font_scale=0.7,
bounding_box_format="xywh",
class_mapping=class_mapping,
)
```
<div class="k-default-codeblock">
```
 1/1 ━━━━━━━━━━━━━━━━━━━━ 5s 5s/step
```
</div>


Next, let's re-configure `keras_cv.layers.NonMaxSuppression` for our
use case!
In this case, we will tune the `iou_threshold` to `0.2`, and the
`confidence_threshold` to `0.7`.
Raising the `confidence_threshold` will cause the model to only output boxes
that have a higher confidence score. `iou_threshold` controls the threshold of
intersection over union (IoU) that two boxes must have in order for one to be
pruned out.
[More information on these parameters may be found in the TensorFlow API docs](https://www.tensorflow.org/api_docs/python/tf/image/combined_non_max_suppression)
```python
prediction_decoder = keras_cv.layers.NonMaxSuppression(
bounding_box_format="xywh",
from_logits=True,
# Decrease the required threshold to make predictions get pruned out
iou_threshold=0.2,
# Tune confidence threshold for predictions to pass NMS
confidence_threshold=0.7,
)
pretrained_model = keras_cv.models.YOLOV8Detector.from_preset(
"yolo_v8_m_pascalvoc",
bounding_box_format="xywh",
prediction_decoder=prediction_decoder,
)
y_pred = pretrained_model.predict(image_batch)
visualization.plot_bounding_box_gallery(
image_batch,
value_range=(0, 255),
rows=1,
cols=1,
y_pred=y_pred,
scale=5,
font_scale=0.7,
bounding_box_format="xywh",
class_mapping=class_mapping,
)
```
<div class="k-default-codeblock">
```
 1/1 ━━━━━━━━━━━━━━━━━━━━ 5s 5s/step
```
</div>


That looks a lot better!
---
## Train a custom object detection model

Whether you're an object detection amateur or a well seasoned veteran, assembling
an object detection pipeline from scratch is a massive undertaking.
Luckily, all KerasCV object detection APIs are built as modular components.
Whether you need a complete pipeline, just an object detection model, or even
just a conversion utility to transform your boxes from `xywh` format to `xyxy`,
KerasCV has you covered.
In this guide, we'll assemble a full training pipeline for a KerasCV object
detection model. This includes data loading, augmentation, metric evaluation,
and inference!
To get started, let's sort out all of our imports and define global
configuration parameters.
```python
BATCH_SIZE = 4
```
---
## Data loading
To get started, let's discuss data loading and bounding box formatting.
KerasCV has a predefined format for bounding boxes.
To comply with this, you
should package your bounding boxes into a dictionary matching the
specification below:
```
bounding_boxes = {
    # num_boxes may be a Ragged dimension
    'boxes': Tensor(shape=[batch, num_boxes, 4]),
    'classes': Tensor(shape=[batch, num_boxes])
}
```
`bounding_boxes['boxes']` contains the coordinates of your bounding box in a KerasCV
supported `bounding_box_format`.
KerasCV requires a `bounding_box_format` argument in all components that process
bounding boxes.
This is done to maximize your ability to plug and play individual components
into their object detection pipelines, as well as to make code self-documenting
across object detection pipelines.
To match the KerasCV API style, it is recommended that when writing a
custom data loader, you also support a `bounding_box_format` argument.
This makes it clear to those invoking your data loader what format the bounding boxes
are in.
In this example, we format our boxes to `xywh` format.
For example:
```python
train_ds, ds_info = your_data_loader.load(
split='train', bounding_box_format='xywh', batch_size=8
)
```
This clearly yields bounding boxes in the format `xywh`. You can read more about
KerasCV bounding box formats [in the API docs](https://keras.io/api/keras_cv/bounding_box/formats/).
Our data comes loaded into the format
`{"images": images, "bounding_boxes": bounding_boxes}`. This format is
supported in all KerasCV preprocessing components.
Let's load some data and verify that the data looks as we expect it to.
```python
def visualize_dataset(inputs, value_range, rows, cols, bounding_box_format):
inputs = next(iter(inputs.take(1)))
images, bounding_boxes = inputs["images"], inputs["bounding_boxes"]
visualization.plot_bounding_box_gallery(
images,
value_range=value_range,
rows=rows,
cols=cols,
y_true=bounding_boxes,
scale=5,
font_scale=0.7,
bounding_box_format=bounding_box_format,
class_mapping=class_mapping,
)
def unpackage_raw_tfds_inputs(inputs, bounding_box_format):
image = inputs["image"]
boxes = keras_cv.bounding_box.convert_format(
inputs["objects"]["bbox"],
images=image,
source="rel_yxyx",
target=bounding_box_format,
)
bounding_boxes = {
"classes": inputs["objects"]["label"],
"boxes": boxes,
}
return {"images": image, "bounding_boxes": bounding_boxes}
def load_pascal_voc(split, dataset, bounding_box_format):
ds = tfds.load(dataset, split=split, with_info=False, shuffle_files=True)
ds = ds.map(
lambda x: unpackage_raw_tfds_inputs(x, bounding_box_format=bounding_box_format),
num_parallel_calls=tf_data.AUTOTUNE,
)
return ds
train_ds = load_pascal_voc(
split="train", dataset="voc/2007", bounding_box_format="xywh"
)
eval_ds = load_pascal_voc(split="test", dataset="voc/2007", bounding_box_format="xywh")
train_ds = train_ds.shuffle(BATCH_SIZE * 4)
```
Next, let's batch our data.
In KerasCV object detection tasks it is recommended that
users use ragged batches of inputs.
This is due to the fact that images may be of different sizes in PascalVOC,
as well as the fact that there may be different numbers of bounding boxes per
image.
To construct a ragged dataset in a `tf.data` pipeline, you can use the
`ragged_batch()` method.
```python
train_ds = train_ds.ragged_batch(BATCH_SIZE, drop_remainder=True)
eval_ds = eval_ds.ragged_batch(BATCH_SIZE, drop_remainder=True)
```
Let's make sure our dataset is following the format KerasCV expects.
By using the `visualize_dataset()` function, you can visually verify
that your data is in the format that KerasCV expects. If the bounding boxes
are not visible or are visible in the wrong locations that is a sign that your
data is mis-formatted.
```python
visualize_dataset(
train_ds, bounding_box_format="xywh", value_range=(0, 255), rows=2, cols=2
)
```

And for the eval set:
```python
visualize_dataset(
eval_ds,
bounding_box_format="xywh",
value_range=(0, 255),
rows=2,
cols=2,
# If you are not running your experiment on a local machine, you can also
# make `visualize_dataset()` dump the plot to a file using `path`:
# path="eval.png"
)
```

Looks like everything is structured as expected.
Now we can move on to constructing our
data augmentation pipeline.
---
## Data augmentation
One of the most challenging tasks when constructing object detection
pipelines is data augmentation. Image augmentation techniques must be aware of the underlying
bounding boxes, and must update them accordingly.
Luckily, KerasCV natively supports bounding box augmentation with its extensive
library
of [data augmentation layers](https://keras.io/api/keras_cv/layers/preprocessing/).
The code below loads the Pascal VOC dataset, and performs on-the-fly,
bounding-box-friendly data augmentation inside a `tf.data` pipeline.
```python
augmenters = [
keras_cv.layers.RandomFlip(mode="horizontal", bounding_box_format="xywh"),
keras_cv.layers.JitteredResize(
target_size=(640, 640), scale_factor=(0.75, 1.3), bounding_box_format="xywh"
),
]
def create_augmenter_fn(augmenters):
def augmenter_fn(inputs):
for augmenter in augmenters:
inputs = augmenter(inputs)
return inputs
return augmenter_fn
augmenter_fn = create_augmenter_fn(augmenters)
train_ds = train_ds.map(augmenter_fn, num_parallel_calls=tf_data.AUTOTUNE)
visualize_dataset(
train_ds, bounding_box_format="xywh", value_range=(0, 255), rows=2, cols=2
)
```

Great! We now have a bounding-box-friendly data augmentation pipeline.
Let's format our evaluation dataset to match. Instead of using
`JitteredResize`, let's use the deterministic `keras_cv.layers.Resizing()`
layer.
```python
inference_resizing = keras_cv.layers.Resizing(
640, 640, bounding_box_format="xywh", pad_to_aspect_ratio=True
)
eval_ds = eval_ds.map(inference_resizing, num_parallel_calls=tf_data.AUTOTUNE)
```
Due to the fact that the resize operation differs between the train dataset,
which uses `JitteredResize()` to resize images, and the inference dataset, which
uses `layers.Resizing(pad_to_aspect_ratio=True)`, it is good practice to
visualize both datasets:
```python
visualize_dataset(
eval_ds, bounding_box_format="xywh", value_range=(0, 255), rows=2, cols=2
)
```

Finally, let's unpackage our inputs from the preprocessing dictionary, and
prepare to feed the inputs into our model. In order to be TPU compatible,
bounding box Tensors need to be `Dense` instead of `Ragged`.
```python
def dict_to_tuple(inputs):
return inputs["images"], bounding_box.to_dense(
inputs["bounding_boxes"], max_boxes=32
)
train_ds = train_ds.map(dict_to_tuple, num_parallel_calls=tf_data.AUTOTUNE)
eval_ds = eval_ds.map(dict_to_tuple, num_parallel_calls=tf_data.AUTOTUNE)
train_ds = train_ds.prefetch(tf_data.AUTOTUNE)
eval_ds = eval_ds.prefetch(tf_data.AUTOTUNE)
```
### Optimizer
In this guide, we use a standard SGD optimizer and rely on the
[`keras.callbacks.ReduceLROnPlateau`](https://keras.io/api/callbacks/reduce_lr_on_plateau/)
callback to reduce the learning rate.
You will always want to include a `global_clipnorm` when training object
detection models. This is to remedy exploding gradient problems that frequently
occur when training object detection models.
```python
base_lr = 0.005
# including a global_clipnorm is extremely important in object detection tasks
optimizer = keras.optimizers.SGD(
learning_rate=base_lr, momentum=0.9, global_clipnorm=10.0
)
```
To achieve the best results on your dataset, you'll likely want to hand craft a
`PiecewiseConstantDecay` learning rate schedule.
While `PiecewiseConstantDecay` schedules tend to perform better, they don't
translate between problems.
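As a concrete illustration, such a schedule might look like the sketch below. The
boundary steps and learning rate values are placeholders -- they are not tuned for
PascalVOC and should be adapted to your own dataset:

```python
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[10000, 20000],  # steps at which the learning rate drops
    values=[base_lr, base_lr / 10, base_lr / 100],
)
optimizer_with_schedule = keras.optimizers.SGD(
    learning_rate=lr_schedule, momentum=0.9, global_clipnorm=10.0
)
```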
### Loss functions
You may not be familiar with the `"ciou"` loss. While not common in other
models, this loss is sometimes used in the object detection world.
In short, ["Complete IoU"](https://arxiv.org/abs/1911.08287) is a flavour of the Intersection over Union loss and is used due to its convergence properties.
In KerasCV, you can use this loss simply by passing the string `"ciou"` to `compile()`.
We also use standard binary crossentropy loss for the class head.
```python
pretrained_model.compile(
classification_loss="binary_crossentropy",
box_loss="ciou",
)
```
### Metric evaluation
The most popular object detection metrics are COCO metrics,
which were published alongside the MSCOCO dataset. KerasCV provides an
easy-to-use suite of COCO metrics under the `keras_cv.callbacks.PyCOCOCallback`
symbol. Note that we use a Keras callback instead of a Keras metric to compute
COCO metrics. This is because computing COCO metrics requires storing all of a
model's predictions for the entire evaluation dataset in memory at once, which
is impractical to do during training time.
```python
coco_metrics_callback = keras_cv.callbacks.PyCOCOCallback(
eval_ds.take(20), bounding_box_format="xywh"
)
```
Our data pipeline is now complete!
We can now move on to model creation and training.
---
## Model creation
Next, let's use the KerasCV API to construct an untrained YOLOV8Detector model.
In this tutorial we use a pretrained ResNet50 backbone from the imagenet
dataset.
KerasCV makes it easy to construct a `YOLOV8Detector` with any of the KerasCV
backbones. Simply use one of the presets for the architecture you'd like!
For example:
```python
model = keras_cv.models.YOLOV8Detector.from_preset(
"resnet50_imagenet",
# For more info on supported bounding box formats, visit
# https://keras.io/api/keras_cv/bounding_box/
bounding_box_format="xywh",
num_classes=20,
)
```
That is all it takes to construct a KerasCV YOLOv8. The `YOLOV8Detector` accepts
tuples of dense image tensors and bounding box dictionaries in both `fit()` and
`train_on_batch()`.
This matches what we have constructed in our input pipeline above.
---
## Training our model
All that is left to do is train our model. KerasCV object detection models
follow the standard Keras workflow, leveraging `compile()` and `fit()`.
Let's compile our model:
```python
model.compile(
classification_loss="binary_crossentropy",
box_loss="ciou",
optimizer=optimizer,
)
```
If you want to fully train the model, remove `.take(20)` from all dataset
references (below and in the initialization of the metrics callback).
```python
model.fit(
train_ds.take(20),
# Run for roughly 10-35 epochs to achieve good scores.
epochs=1,
callbacks=[coco_metrics_callback],
)
```
<div class="k-default-codeblock">
```
20/20 โโโโโโโโโโโโโโโโโโโโ 7s 59ms/step
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.16s).
Accumulating evaluation results...
DONE (t=0.07s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.002
20/20 โโโโโโโโโโโโโโโโโโโโ 73s 681ms/step - loss: 9221.7988 - val_AP: 3.1673e-05 - val_AP50: 2.1886e-04 - val_AP75: 0.0000e+00 - val_APs: 0.0000e+00 - val_APm: 0.0000e+00 - val_APl: 3.1673e-05 - val_ARmax1: 0.0016 - val_ARmax10: 0.0021 - val_ARmax100: 0.0021 - val_ARs: 0.0000e+00 - val_ARm: 0.0000e+00 - val_ARl: 0.0021
<keras.src.callbacks.history.History at 0x7fb23010a850>
```
</div>
---
## Inference and plotting results
KerasCV makes object detection inference simple. `model.predict(images)`
returns a tensor of bounding boxes. By default, `YOLOV8Detector.predict()`
will perform a non max suppression operation for you.
In this section, we will use a `keras_cv` provided preset:
```python
model = keras_cv.models.YOLOV8Detector.from_preset(
"yolo_v8_m_pascalvoc", bounding_box_format="xywh"
)
```
Next, for convenience we construct a dataset with larger batches:
```python
visualization_ds = eval_ds.unbatch()
visualization_ds = visualization_ds.ragged_batch(16)
visualization_ds = visualization_ds.shuffle(8)
```
Let's create a simple function to plot our inferences:
```python
def visualize_detections(model, dataset, bounding_box_format):
images, y_true = next(iter(dataset.take(1)))
y_pred = model.predict(images)
visualization.plot_bounding_box_gallery(
images,
value_range=(0, 255),
bounding_box_format=bounding_box_format,
y_true=y_true,
y_pred=y_pred,
scale=4,
rows=2,
cols=2,
show=True,
font_scale=0.7,
class_mapping=class_mapping,
)
```
You may need to configure your NonMaxSuppression operation to achieve
visually appealing results.
```python
model.prediction_decoder = keras_cv.layers.NonMaxSuppression(
bounding_box_format="xywh",
from_logits=True,
iou_threshold=0.5,
confidence_threshold=0.75,
)
visualize_detections(model, dataset=visualization_ds, bounding_box_format="xywh")
```
<div class="k-default-codeblock">
```
1/1 โโโโโโโโโโโโโโโโโโโโ 16s 16s/step
```
</div>

Awesome!
One final helpful pattern to be aware of is to visualize
detections in a `keras.callbacks.Callback` to monitor training:
```python
class VisualizeDetections(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
visualize_detections(
self.model, bounding_box_format="xywh", dataset=visualization_ds
)
```
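As a usage sketch (reusing the `fit()` call from earlier in this guide), the
callback is simply appended to the callbacks list:
```python
model.fit(
    train_ds.take(20),
    epochs=1,
    callbacks=[coco_metrics_callback, VisualizeDetections()],
)
```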
---
## Takeaways and next steps
KerasCV makes it easy to construct state-of-the-art object detection pipelines.
In this guide, we started off by writing a data loader using the KerasCV
bounding box specification.
Following this, we assembled a production grade data augmentation pipeline using
KerasCV preprocessing layers in <50 lines of code.
KerasCV object detection components can be used independently, but also have deep
integration with each other.
KerasCV makes authoring production grade bounding box augmentation,
model training, visualization, and
metric evaluation easy.
Some follow up exercises for the reader:
- add additional augmentation techniques to improve model performance
- tune the hyperparameters and data augmentation used to produce high quality results
- train an object detection model on your own dataset
One last fun code snippet to showcase the power of KerasCV's API!
```python
stable_diffusion = keras_cv.models.StableDiffusionV2(512, 512)
images = stable_diffusion.text_to_image(
prompt="A zoomed out photograph of a cool looking cat. The cat stands in a beautiful forest",
negative_prompt="unrealistic, bad looking, malformed",
batch_size=4,
seed=1231,
)
encoded_predictions = model(images)
y_pred = model.decode_predictions(encoded_predictions, images)
visualization.plot_bounding_box_gallery(
images,
value_range=(0, 255),
y_pred=y_pred,
rows=2,
cols=2,
scale=5,
font_scale=0.7,
bounding_box_format="xywh",
class_mapping=class_mapping,
)
```
<div class="k-default-codeblock">
```
By using this model checkpoint, you acknowledge that its usage is subject to the terms of the CreativeML Open RAIL++-M license at https://github.com/Stability-AI/stablediffusion/blob/main/LICENSE-MODEL
50/50 โโโโโโโโโโโโโโโโโโโโ 47s 356ms/step
```
</div>


| keras-io/guides/md/keras_cv/object_detection_keras_cv.md/0 | {
"file_path": "keras-io/guides/md/keras_cv/object_detection_keras_cv.md",
"repo_id": "keras-io",
"token_count": 10387
} | 112 |
# Save, serialize, and export models
**Authors:** Neel Kovelamudi, Francois Chollet<br>
**Date created:** 2023/06/14<br>
**Last modified:** 2023/06/30<br>
**Description:** Complete guide to saving, serializing, and exporting models.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/serialization_and_saving.ipynb) <span class="k-dot">โข</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/serialization_and_saving.py)
---
## Introduction
A Keras model consists of multiple components:
- The architecture, or configuration, which specifies what layers the model
contains, and how they're connected.
- A set of weights values (the "state of the model").
- An optimizer (defined by compiling the model).
- A set of losses and metrics (defined by compiling the model).
The Keras API saves all of these pieces together in a unified format,
marked by the `.keras` extension. This is a zip archive consisting of the
following:
- A JSON-based configuration file (config.json): Records of model, layer, and
other trackables' configuration.
- An H5-based state file, such as `model.weights.h5` (for the whole model),
with directory keys for layers and their weights.
- A metadata file in JSON, storing things such as the current Keras version.
Let's take a look at how this works.
---
## How to save and load a model
If you only have 10 seconds to read this guide, here's what you need to know.
**Saving a Keras model:**
```python
model = ... # Get model (Sequential, Functional Model, or Model subclass)
model.save('path/to/location.keras') # The file needs to end with the .keras extension
```
**Loading the model back:**
```python
model = keras.models.load_model('path/to/location.keras')
```
Now, let's look at the details.
---
## Setup
```python
import numpy as np
import keras
from keras import ops
```
---
## Saving
This section is about saving an entire model to a single file. The file will include:
- The model's architecture/config
- The model's weight values (which were learned during training)
- The model's compilation information (if `compile()` was called)
- The optimizer and its state, if any (this enables you to restart training
where you left off)
#### APIs
You can save a model with `model.save()` or `keras.models.save_model()` (which is equivalent).
You can load it back with `keras.models.load_model()`.
The only supported format in Keras 3 is the "Keras v3" format,
which uses the `.keras` extension.
**Example:**
```python
def get_model():
# Create a simple model.
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(), loss="mean_squared_error")
return model
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model.keras')` creates a zip archive `my_model.keras`.
model.save("my_model.keras")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model.keras")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
```
<div class="k-default-codeblock">
```
4/4 โโโโโโโโโโโโโโโโโโโโ 0s 8ms/step - loss: 0.4232
4/4 โโโโโโโโโโโโโโโโโโโโ 0s 281us/step
4/4 โโโโโโโโโโโโโโโโโโโโ 0s 373us/step
```
</div>
### Custom objects
This section covers the basic workflows for handling custom layers, functions, and
models in Keras saving and reloading.
When saving a model that includes custom objects, such as a subclassed Layer,
you **must** define a `get_config()` method on the object class.
If the arguments passed to the constructor (`__init__()` method) of the custom object
aren't Python objects (anything other than base types like ints, strings,
etc.), then you **must** also explicitly deserialize these arguments in the `from_config()`
class method.
Like this:
```python
class CustomLayer(keras.layers.Layer):
def __init__(self, sublayer, **kwargs):
super().__init__(**kwargs)
self.sublayer = sublayer
def call(self, x):
return self.sublayer(x)
def get_config(self):
base_config = super().get_config()
config = {
"sublayer": keras.saving.serialize_keras_object(self.sublayer),
}
return {**base_config, **config}
@classmethod
def from_config(cls, config):
sublayer_config = config.pop("sublayer")
sublayer = keras.saving.deserialize_keras_object(sublayer_config)
return cls(sublayer, **config)
```
Please see the [Defining the config methods section](#config_methods) for more
details and examples.
The saved `.keras` file is lightweight and does not store the Python code for custom
objects. Therefore, to reload the model, `load_model` requires access to the definition
of any custom objects used through one of the following methods:
1. Registering custom objects **(preferred)**,
2. Passing custom objects directly when loading, or
3. Using a custom object scope
Below are examples of each workflow:
#### Registering custom objects (**preferred**)
This is the preferred method, as custom object registration greatly simplifies saving and
loading code. Adding the `@keras.saving.register_keras_serializable` decorator to the
class definition of a custom object registers the object globally in a master list,
allowing Keras to recognize the object when loading the model.
Let's create a custom model involving both a custom layer and a custom activation
function to demonstrate this.
**Example:**
```python
# Clear all previously registered custom objects
keras.saving.get_custom_objects().clear()
# Upon registration, you can optionally specify a package or a name.
# If left blank, the package defaults to `Custom` and the name defaults to
# the class name.
@keras.saving.register_keras_serializable(package="MyLayers")
class CustomLayer(keras.layers.Layer):
def __init__(self, factor):
super().__init__()
self.factor = factor
def call(self, x):
return x * self.factor
def get_config(self):
return {"factor": self.factor}
@keras.saving.register_keras_serializable(package="my_package", name="custom_fn")
def custom_fn(x):
return x**2
# Create the model.
def get_model():
inputs = keras.Input(shape=(4,))
mid = CustomLayer(0.5)(inputs)
outputs = keras.layers.Dense(1, activation=custom_fn)(mid)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mean_squared_error")
return model
# Train the model.
def train_model(model):
input = np.random.random((4, 4))
target = np.random.random((4, 1))
model.fit(input, target)
return model
test_input = np.random.random((4, 4))
test_target = np.random.random((4, 1))
model = get_model()
model = train_model(model)
model.save("custom_model.keras")
# Now, we can simply load without worrying about our custom objects.
reconstructed_model = keras.models.load_model("custom_model.keras")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
```
<div class="k-default-codeblock">
```
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 46ms/step - loss: 0.2571
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 11ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 12ms/step
```
</div>
#### Passing custom objects to `load_model()`
```python
model = get_model()
model = train_model(model)
# Calling `save('my_model.keras')` creates a zip archive `my_model.keras`.
model.save("custom_model.keras")
# Upon loading, pass a dict containing the custom objects used in the
# `custom_objects` argument of `keras.models.load_model()`.
reconstructed_model = keras.models.load_model(
"custom_model.keras",
custom_objects={"CustomLayer": CustomLayer, "custom_fn": custom_fn},
)
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
```
<div class="k-default-codeblock">
```
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 37ms/step - loss: 0.0535
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 12ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 12ms/step
```
</div>
#### Using a custom object scope
Any code within the custom object scope will be able to recognize the custom objects
passed to the scope argument. Therefore, loading the model within the scope will allow
the loading of our custom objects.
**Example:**
```python
model = get_model()
model = train_model(model)
model.save("custom_model.keras")
# Pass the custom objects dictionary to a custom object scope and place
# the `keras.models.load_model()` call within the scope.
custom_objects = {"CustomLayer": CustomLayer, "custom_fn": custom_fn}
with keras.saving.custom_object_scope(custom_objects):
reconstructed_model = keras.models.load_model("custom_model.keras")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
```
<div class="k-default-codeblock">
```
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 40ms/step - loss: 0.0868
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 11ms/step
1/1 โโโโโโโโโโโโโโโโโโโโ 0s 11ms/step
```
</div>
### Model serialization
This section is about saving only the model's configuration, without its state.
The model's configuration (or architecture) specifies what layers the model
contains, and how these layers are connected. If you have the configuration of a model,
then the model can be created with a freshly initialized state (no weights or compilation
information).
#### APIs
The following serialization APIs are available:
- `keras.models.clone_model(model)`: make a (randomly initialized) copy of a model.
- `get_config()` and `cls.from_config()`: retrieve the configuration of a layer or model, and recreate
a model instance from its config, respectively.
- `model.to_json()` and `keras.models.model_from_json()`: similar, but as JSON strings.
- `keras.saving.serialize_keras_object()`: retrieve the configuration of any arbitrary Keras object.
- `keras.saving.deserialize_keras_object()`: recreate an object instance from its configuration.
#### In-memory model cloning
You can do in-memory cloning of a model via `keras.models.clone_model()`.
This is equivalent to getting the config then recreating the model from its config
(so it does not preserve compilation information or layer weights values).
**Example:**
```python
new_model = keras.models.clone_model(model)
```
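For a Functional model, this is roughly equivalent to the following sketch
(the two methods involved are covered in detail in the next section):
```python
# Approximately what clone_model() does for a Functional model:
new_model = keras.Model.from_config(model.get_config())
```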
#### `get_config()` and `from_config()`
Calling `model.get_config()` or `layer.get_config()` will return a Python dict containing
the configuration of the model or layer, respectively. You should define `get_config()`
to contain arguments needed for the `__init__()` method of the model or layer. At loading time,
the `from_config(config)` method will then call `__init__()` with these arguments to
reconstruct the model or layer.
**Layer example:**
```python
layer = keras.layers.Dense(3, activation="relu")
layer_config = layer.get_config()
print(layer_config)
```
<div class="k-default-codeblock">
```
{'name': 'dense_4', 'trainable': True, 'dtype': 'float32', 'units': 3, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.src.initializers.random_initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': 'GlorotUniform'}, 'bias_initializer': {'module': 'keras.src.initializers.constant_initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': 'Zeros'}, 'kernel_regularizer': None, 'bias_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
```
</div>
Now let's reconstruct the layer using the `from_config()` method:
```python
new_layer = keras.layers.Dense.from_config(layer_config)
```
**Sequential model example:**
```python
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
config = model.get_config()
new_model = keras.Sequential.from_config(config)
```
**Functional model example:**
```python
inputs = keras.Input((32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config)
```
#### `to_json()` and `keras.models.model_from_json()`
This is similar to `get_config` / `from_config`, except it turns the model
into a JSON string, which can then be loaded without the original model class.
It is also specific to models; it isn't meant for layers.
**Example:**
```python
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
json_config = model.to_json()
new_model = keras.models.model_from_json(json_config)
```
#### Arbitrary object serialization and deserialization
The `keras.saving.serialize_keras_object()` and `keras.saving.deserialize_keras_object()`
APIs are general-purpose APIs that can be used to serialize or deserialize any Keras
object and any custom object. It is at the foundation of saving model architecture and is
behind all `serialize()`/`deserialize()` calls in keras.
**Example**:
```python
my_reg = keras.regularizers.L1(0.005)
config = keras.saving.serialize_keras_object(my_reg)
print(config)
```
<div class="k-default-codeblock">
```
{'module': 'keras.src.regularizers.regularizers', 'class_name': 'L1', 'config': {'l1': 0.004999999888241291}, 'registered_name': 'L1'}
```
</div>
Note that the serialization format contains all the necessary information for proper
reconstruction:
- `module` containing the name of the Keras module or other identifying module the object
comes from
- `class_name` containing the name of the object's class.
- `config` with all the information needed to reconstruct the object
- `registered_name` for custom objects. See [here](#custom_object_serialization).
Now we can reconstruct the regularizer.
```python
new_reg = keras.saving.deserialize_keras_object(config)
```
### Model weights saving
You can choose to only save & load a model's weights. This can be useful if:
- You only need the model for inference: in this case you won't need to
restart training, so you don't need the compilation information or optimizer state.
- You are doing transfer learning: in this case you will be training a new model
reusing the state of a prior model, so you don't need the compilation
information of the prior model.
#### APIs for in-memory weight transfer
Weights can be copied between different objects by using `get_weights()`
and `set_weights()`:
* `keras.layers.Layer.get_weights()`: Returns a list of NumPy arrays of weight values.
* `keras.layers.Layer.set_weights(weights)`: Sets the model weights to the values
provided (as NumPy arrays).
Examples:
***Transferring weights from one layer to another, in memory***
```python
def create_layer():
layer = keras.layers.Dense(64, activation="relu", name="dense_2")
layer.build((None, 784))
return layer
layer_1 = create_layer()
layer_2 = create_layer()
# Copy weights from layer 1 to layer 2
layer_2.set_weights(layer_1.get_weights())
```
***Transferring weights from one model to another model with a compatible architecture, in memory***
```python
# Create a simple functional model
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Define a subclassed model with the same architecture
class SubclassedModel(keras.Model):
def __init__(self, output_dim, name=None):
super().__init__(name=name)
self.output_dim = output_dim
self.dense_1 = keras.layers.Dense(64, activation="relu", name="dense_1")
self.dense_2 = keras.layers.Dense(64, activation="relu", name="dense_2")
self.dense_3 = keras.layers.Dense(output_dim, name="predictions")
def call(self, inputs):
x = self.dense_1(inputs)
x = self.dense_2(x)
x = self.dense_3(x)
return x
def get_config(self):
return {"output_dim": self.output_dim, "name": self.name}
subclassed_model = SubclassedModel(10)
# Call the subclassed model once to create the weights.
subclassed_model(np.ones((1, 784)))
# Copy weights from functional_model to subclassed_model.
subclassed_model.set_weights(functional_model.get_weights())
assert len(functional_model.weights) == len(subclassed_model.weights)
for a, b in zip(functional_model.weights, subclassed_model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
```
***The case of stateless layers***
Because stateless layers do not change the order or number of weights,
models can have compatible architectures even if there are extra/missing
stateless layers.
```python
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
# Add a dropout layer, which does not contain any weights.
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model_with_dropout = keras.Model(
inputs=inputs, outputs=outputs, name="3_layer_mlp"
)
functional_model_with_dropout.set_weights(functional_model.get_weights())
```
#### APIs for saving weights to disk & loading them back
Weights can be saved to disk by calling `model.save_weights(filepath)`.
The filename should end in `.weights.h5`.
**Example:**
```python
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("my_model.weights.h5")
sequential_model.load_weights("my_model.weights.h5")
```
Note that changing `layer.trainable` may result in a different
`layer.weights` ordering when the model contains nested layers.
```python
class NestedDenseLayer(keras.layers.Layer):
def __init__(self, units, name=None):
super().__init__(name=name)
self.dense_1 = keras.layers.Dense(units, name="dense_1")
self.dense_2 = keras.layers.Dense(units, name="dense_2")
def call(self, inputs):
return self.dense_2(self.dense_1(inputs))
nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, "nested")])
variable_names = [v.name for v in nested_model.weights]
print("variables: {}".format(variable_names))
print("\nChanging trainable status of one of the nested layers...")
nested_model.get_layer("nested").dense_1.trainable = False
variable_names_2 = [v.name for v in nested_model.weights]
print("\nvariables: {}".format(variable_names_2))
print("variable ordering changed:", variable_names != variable_names_2)
```
<div class="k-default-codeblock">
```
variables: ['kernel', 'bias', 'kernel', 'bias']
```
</div>
<div class="k-default-codeblock">
```
Changing trainable status of one of the nested layers...
```
</div>
<div class="k-default-codeblock">
```
variables: ['kernel', 'bias', 'kernel', 'bias']
variable ordering changed: False
```
</div>
##### **Transfer learning example**
When loading pretrained weights from a weights file, it is recommended to load
the weights into the original checkpointed model, and then extract
the desired weights/layers into a new model.
**Example:**
```python
def create_functional_model():
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
return keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
functional_model = create_functional_model()
functional_model.save_weights("pretrained.weights.h5")
# In a separate program:
pretrained_model = create_functional_model()
pretrained_model.load_weights("pretrained.weights.h5")
# Create a new model by extracting layers from the original model:
extracted_layers = pretrained_model.layers[:-1]
extracted_layers.append(keras.layers.Dense(5, name="dense_3"))
model = keras.Sequential(extracted_layers)
model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "sequential_4"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโ
โ<span style="font-weight: bold"> Layer (type) </span>โ<span style="font-weight: bold"> Output Shape </span>โ<span style="font-weight: bold"> Param # </span>โ
โกโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฉ
โ dense_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">50,240</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ dense_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">4,160</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโค
โ dense_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) โ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">5</span>) โ <span style="color: #00af00; text-decoration-color: #00af00">325</span> โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">54,725</span> (213.77 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">54,725</span> (213.77 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
### Appendix: Handling custom objects
<a name="config_methods"></a>
#### Defining the config methods
Specifications:
* `get_config()` should return a JSON-serializable dictionary in order to be
compatible with the Keras architecture- and model-saving APIs.
* `from_config(config)` (a `classmethod`) should return a new layer or model
object that is created from the config.
The default implementation returns `cls(**config)`.
**NOTE**: If all your constructor arguments are already serializable, e.g. strings and
ints, or non-custom Keras objects, overriding `from_config` is not necessary. However,
for more complex objects such as layers or models passed to `__init__`, deserialization
must be handled explicitly, either in `__init__` itself or by overriding the `from_config()`
method.
**Example:**
```python
@keras.saving.register_keras_serializable(package="MyLayers", name="KernelMult")
class MyDense(keras.layers.Layer):
def __init__(
self,
units,
*,
kernel_regularizer=None,
kernel_initializer=None,
nested_model=None,
**kwargs
):
super().__init__(**kwargs)
self.hidden_units = units
self.kernel_regularizer = kernel_regularizer
self.kernel_initializer = kernel_initializer
self.nested_model = nested_model
def get_config(self):
config = super().get_config()
# Update the config with the custom layer's parameters
config.update(
{
"units": self.hidden_units,
"kernel_regularizer": self.kernel_regularizer,
"kernel_initializer": self.kernel_initializer,
"nested_model": self.nested_model,
}
)
return config
def build(self, input_shape):
input_units = input_shape[-1]
self.kernel = self.add_weight(
name="kernel",
shape=(input_units, self.hidden_units),
regularizer=self.kernel_regularizer,
initializer=self.kernel_initializer,
)
def call(self, inputs):
return ops.matmul(inputs, self.kernel)
layer = MyDense(units=16, kernel_regularizer="l1", kernel_initializer="ones")
layer3 = MyDense(units=64, nested_model=layer)
config = keras.layers.serialize(layer3)
print(config)
new_layer = keras.layers.deserialize(config)
print(new_layer)
```
<div class="k-default-codeblock">
```
{'module': None, 'class_name': 'MyDense', 'config': {'name': 'my_dense_1', 'trainable': True, 'dtype': 'float32', 'units': 64, 'kernel_regularizer': None, 'kernel_initializer': None, 'nested_model': {'module': None, 'class_name': 'MyDense', 'config': {'name': 'my_dense', 'trainable': True, 'dtype': 'float32', 'units': 16, 'kernel_regularizer': 'l1', 'kernel_initializer': 'ones', 'nested_model': None}, 'registered_name': 'MyLayers>KernelMult'}}, 'registered_name': 'MyLayers>KernelMult'}
<MyDense name=my_dense_1, built=False>
```
</div>
Note that overriding `from_config` is unnecessary above for `MyDense` because
`hidden_units`, `kernel_initializer`, and `kernel_regularizer` are ints, strings, and a
built-in Keras object, respectively. This means that the default `from_config`
implementation of `cls(**config)` will work as intended.
For more complex objects, such as layers and models passed to `__init__`, you
must explicitly deserialize these objects. Let's take a look at an example
of a model where a `from_config` override is necessary.
**Example:**
<a name="registration_example"></a>
```python
@keras.saving.register_keras_serializable(package="ComplexModels")
class CustomModel(keras.layers.Layer):
def __init__(self, first_layer, second_layer=None, **kwargs):
super().__init__(**kwargs)
self.first_layer = first_layer
if second_layer is not None:
self.second_layer = second_layer
else:
self.second_layer = keras.layers.Dense(8)
def get_config(self):
config = super().get_config()
config.update(
{
"first_layer": self.first_layer,
"second_layer": self.second_layer,
}
)
return config
@classmethod
def from_config(cls, config):
# Note that you can also use `keras.saving.deserialize_keras_object` here
config["first_layer"] = keras.layers.deserialize(config["first_layer"])
config["second_layer"] = keras.layers.deserialize(config["second_layer"])
return cls(**config)
def call(self, inputs):
return self.first_layer(self.second_layer(inputs))
# Let's make our first layer the custom layer from the previous example (MyDense)
inputs = keras.Input((32,))
outputs = CustomModel(first_layer=layer)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config)
```
<a name="custom_object_serialization"></a>
#### How custom objects are serialized
The serialization format has a special key for custom objects registered via
`@keras.saving.register_keras_serializable`. This `registered_name` key allows for easy
retrieval at loading/deserialization time while also allowing users to add custom naming.
Let's take a look at the config from serializing the custom layer `MyDense` we defined
above.
**Example**:
```python
layer = MyDense(
units=16,
kernel_regularizer=keras.regularizers.L1L2(l1=1e-5, l2=1e-4),
kernel_initializer="ones",
)
config = keras.layers.serialize(layer)
print(config)
```
<div class="k-default-codeblock">
```
{'module': None, 'class_name': 'MyDense', 'config': {'name': 'my_dense_2', 'trainable': True, 'dtype': 'float32', 'units': 16, 'kernel_regularizer': {'module': 'keras.src.regularizers.regularizers', 'class_name': 'L1L2', 'config': {'l1': 1e-05, 'l2': 0.0001}, 'registered_name': 'L1L2'}, 'kernel_initializer': 'ones', 'nested_model': None}, 'registered_name': 'MyLayers>KernelMult'}
```
</div>
As shown, the `registered_name` key contains the lookup information for the Keras master
list, including the package `MyLayers` and the custom name `KernelMult` that we gave in
the `@keras.saving.register_keras_serializable` decorator. Take a look again at the custom
class definition/registration [here](#registration_example).
Note that the `class_name` key contains the original name of the class, allowing for
proper re-initialization in `from_config`.
Additionally, note that the `module` key is `None` since this is a custom object.
| keras-io/guides/md/serialization_and_saving.md/0 | {
"file_path": "keras-io/guides/md/serialization_and_saving.md",
"repo_id": "keras-io",
"token_count": 10788
} | 113 |
"""
Title: Writing a training loop from scratch in TensorFlow
Author: [fchollet](https://twitter.com/fchollet)
Date created: 2019/03/01
Last modified: 2023/06/25
Description: Writing low-level training & evaluation loops in TensorFlow.
Accelerator: None
"""
"""
## Setup
"""
import time
import os
# This guide can only be run with the TensorFlow backend.
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
import numpy as np
"""
## Introduction
Keras provides default training and evaluation loops, `fit()` and `evaluate()`.
Their usage is covered in the guide
[Training & evaluation with the built-in methods](/guides/training_with_built_in_methods/).
If you want to customize the learning algorithm of your model while still leveraging
the convenience of `fit()`
(for instance, to train a GAN using `fit()`), you can subclass the `Model` class and
implement your own `train_step()` method, which
is called repeatedly during `fit()`.
Now, if you want very low-level control over training & evaluation, you should write
your own training & evaluation loops from scratch. This is what this guide is about.
"""
"""
## A first end-to-end example
Let's consider a simple MNIST model:
"""
def get_model():
inputs = keras.Input(shape=(784,), name="digits")
x1 = keras.layers.Dense(64, activation="relu")(inputs)
x2 = keras.layers.Dense(64, activation="relu")(x1)
outputs = keras.layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
model = get_model()
"""
Let's train it using mini-batch gradient descent with a custom training loop.
First, we're going to need an optimizer, a loss function, and a dataset:
"""
# Instantiate an optimizer.
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 32
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
"""
Calling a model inside a `GradientTape` scope enables you to retrieve the gradients of
the trainable weights of the layer with respect to a loss value. Using an optimizer
instance, you can use these gradients to update these variables (which you can
retrieve using `model.trainable_weights`).
Here's our training loop, step by step:
- We open a `for` loop that iterates over epochs
- For each epoch, we open a `for` loop that iterates over the dataset, in batches
- For each batch, we open a `GradientTape()` scope
- Inside this scope, we call the model (forward pass) and compute the loss
- Outside the scope, we retrieve the gradients of the weights
of the model with regard to the loss
- Finally, we use the optimizer to update the weights of the model based on the
gradients
"""
epochs = 3
for epoch in range(epochs):
print(f"\nStart of epoch {epoch}")
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables auto-differentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply(grads, model.trainable_weights)
# Log every 100 batches.
if step % 100 == 0:
print(
f"Training loss (for 1 batch) at step {step}: {float(loss_value):.4f}"
)
print(f"Seen so far: {(step + 1) * batch_size} samples")
"""
## Low-level handling of metrics
Let's add metrics monitoring to this basic loop.
You can readily reuse the built-in metrics (or custom ones you wrote) in such training
loops written from scratch. Here's the flow:
- Instantiate the metric at the start of the loop
- Call `metric.update_state()` after each batch
- Call `metric.result()` when you need to display the current value of the metric
- Call `metric.reset_state()` when you need to clear the state of the metric
(typically at the end of an epoch)
Let's use this knowledge to compute `SparseCategoricalAccuracy` on training and
validation data at the end of each epoch:
"""
# Get a fresh model
model = get_model()
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
"""
Here's our training & evaluation loop:
"""
epochs = 2
for epoch in range(epochs):
print(f"\nStart of epoch {epoch}")
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply(grads, model.trainable_weights)
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 100 batches.
if step % 100 == 0:
print(
f"Training loss (for 1 batch) at step {step}: {float(loss_value):.4f}"
)
print(f"Seen so far: {(step + 1) * batch_size} samples")
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print(f"Training acc over epoch: {float(train_acc):.4f}")
# Reset training metrics at the end of each epoch
train_acc_metric.reset_state()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_state()
print(f"Validation acc: {float(val_acc):.4f}")
print(f"Time taken: {time.time() - start_time:.2f}s")
"""
## Speeding-up your training step with `tf.function`
The default runtime in TensorFlow is eager execution.
As such, our training loop above executes eagerly.
This is great for debugging, but graph compilation has a definite performance
advantage. Describing your computation as a static graph enables the framework
to apply global performance optimizations. This is impossible when
the framework is constrained to greedily execute one operation after another,
with no knowledge of what comes next.
You can compile into a static graph any function that takes tensors as input.
Just add a `@tf.function` decorator on it, like this:
"""
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply(grads, model.trainable_weights)
train_acc_metric.update_state(y, logits)
return loss_value
"""
Let's do the same with the evaluation step:
"""
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
"""
Now, let's re-run our training loop with this compiled training step:
"""
epochs = 2
for epoch in range(epochs):
print(f"\nStart of epoch {epoch}")
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 100 batches.
if step % 100 == 0:
print(
f"Training loss (for 1 batch) at step {step}: {float(loss_value):.4f}"
)
print(f"Seen so far: {(step + 1) * batch_size} samples")
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print(f"Training acc over epoch: {float(train_acc):.4f}")
# Reset training metrics at the end of each epoch
train_acc_metric.reset_state()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_state()
print(f"Validation acc: {float(val_acc):.4f}")
print(f"Time taken: {time.time() - start_time:.2f}s")
"""
Much faster, isn't it?
"""
"""
## Low-level handling of losses tracked by the model
Layers & models recursively track any losses created during the forward pass
by layers that call `self.add_loss(value)`. The resulting list of scalar loss
values are available via the property `model.losses`
at the end of the forward pass.
If you want to be using these loss components, you should sum them
and add them to the main loss in your training step.
Consider this layer, that creates an activity regularization loss:
"""
class ActivityRegularizationLayer(keras.layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
"""
Let's build a really simple model that uses it:
"""
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = keras.layers.Dense(64, activation="relu")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
"""
Here's what our training step should look like now:
"""
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply(grads, model.trainable_weights)
train_acc_metric.update_state(y, logits)
return loss_value
"""
## Summary
Now you know everything there is to know about using built-in training loops and
writing your own from scratch.
To conclude, here's a simple end-to-end example that ties together everything
you've learned in this guide: a DCGAN trained on MNIST digits.
"""
"""
## End-to-end example: a GAN training loop from scratch
You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new
images that look almost real, by learning the latent distribution of a training
dataset of images (the "latent space" of the images).
A GAN is made of two parts: a "generator" model that maps points in the latent
space to points in image space, and a "discriminator" model, a classifier
that can tell the difference between real images (from the training dataset)
and fake images (the output of the generator network).
A GAN training loop looks like this:
1) Train the discriminator.
- Sample a batch of random points in the latent space.
- Turn the points into fake images via the "generator" model.
- Get a batch of real images and combine them with the generated images.
- Train the "discriminator" model to classify generated vs. real images.
2) Train the generator.
- Sample random points in the latent space.
- Turn the points into fake images via the "generator" network.
- Get a batch of real images and combine them with the generated images.
- Train the "generator" model to "fool" the discriminator and classify the fake images
as real.
For a much more detailed overview of how GANs works, see
[Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
Let's implement this training loop. First, create the discriminator meant to classify
fake vs real digits:
"""
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
keras.layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
keras.layers.LeakyReLU(negative_slope=0.2),
keras.layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
keras.layers.LeakyReLU(negative_slope=0.2),
keras.layers.GlobalMaxPooling2D(),
keras.layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
"""
Then let's create a generator network,
that turns latent vectors into outputs of shape `(28, 28, 1)` (representing
MNIST digits):
"""
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
keras.layers.Dense(7 * 7 * 128),
keras.layers.LeakyReLU(negative_slope=0.2),
keras.layers.Reshape((7, 7, 128)),
keras.layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
keras.layers.LeakyReLU(negative_slope=0.2),
keras.layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
keras.layers.LeakyReLU(negative_slope=0.2),
keras.layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
"""
Here's the key bit: the training loop. As you can see it is quite straightforward. The
training step function only takes 17 lines.
"""
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply(grads, discriminator.trainable_weights)
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply(grads, generator.trainable_weights)
return d_loss, g_loss, generated_images
"""
Let's train our GAN, by repeatedly calling `train_step` on batches of images.
Since our discriminator and generator are convnets, you're going to want to
run this code on a GPU.
"""
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print(f"\nStart epoch {epoch}")
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 100 == 0:
# Print metrics
print(f"discriminator loss at step {step}: {d_loss:.2f}")
print(f"adversarial loss at step {step}: {g_loss:.2f}")
# Save one generated image
img = keras.utils.array_to_img(generated_images[0] * 255.0, scale=False)
img.save(os.path.join(save_dir, f"generated_img_{step}.png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
"""
That's it! You'll get nice-looking fake MNIST digits after just ~30s of training on the
Colab GPU.
"""
| keras-io/guides/writing_a_custom_training_loop_in_tensorflow.py/0 | {
"file_path": "keras-io/guides/writing_a_custom_training_loop_in_tensorflow.py",
"repo_id": "keras-io",
"token_count": 6399
} | 114 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/data_loading/'" />
| keras-io/redirects/api/preprocessing/index.html/0 | {
"file_path": "keras-io/redirects/api/preprocessing/index.html",
"repo_id": "keras-io",
"token_count": 33
} | 115 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/guides/sequential_model/'" />
| keras-io/redirects/getting-started/sequential-model-guide/index.html/0 | {
"file_path": "keras-io/redirects/getting-started/sequential-model-guide/index.html",
"repo_id": "keras-io",
"token_count": 35
} | 116 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/losses/'" />
| keras-io/redirects/losses/index.html/0 | {
"file_path": "keras-io/redirects/losses/index.html",
"repo_id": "keras-io",
"token_count": 32
} | 117 |
from kt_api_master import KT_API_MASTER
from cv_api_master import CV_API_MASTER
from nlp_api_master import NLP_API_MASTER
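# Each node in the structure below describes one documentation page:
# - "path": URL fragment of the page
# - "title": page title
# - "toc": whether a table of contents is rendered for the page
# - "children": nested sub-pages
# - "generate": fully qualified Keras symbols documented on that page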
API_MASTER = {
"path": "api/",
"title": "Keras 3 API documentation",
"toc": True,
"children": [
{
"path": "models/",
"title": "Models API",
"toc": True,
"children": [
{
"path": "model",
"title": "The Model class",
"generate": [
"keras.Model",
"keras.Model.summary",
"keras.Model.get_layer",
],
},
{
"path": "sequential",
"title": "The Sequential class",
"generate": [
"keras.Sequential",
"keras.Sequential.add",
"keras.Sequential.pop",
],
},
{
"path": "model_training_apis",
"title": "Model training APIs",
"generate": [
"keras.Model.compile",
"keras.Model.fit",
"keras.Model.evaluate",
"keras.Model.predict",
"keras.Model.train_on_batch",
"keras.Model.test_on_batch",
"keras.Model.predict_on_batch",
],
},
{
"path": "model_saving_apis/",
"title": "Saving & serialization",
"toc": True,
"children": [
{
"path": "model_saving_and_loading",
"title": "Whole model saving & loading",
"generate": [
"keras.Model.save",
"keras.saving.save_model",
"keras.saving.load_model",
],
},
{
"path": "weights_saving_and_loading",
"title": "Weights-only saving & loading",
"generate": [
"keras.Model.save_weights",
"keras.Model.load_weights",
],
},
{
"path": "model_config_serialization",
"title": "Model config serialization",
"generate": [
"keras.Model.get_config",
"keras.Model.from_config",
"keras.models.clone_model",
],
},
{
"path": "export",
"title": "Model export for inference",
"generate": [
"keras.export.ExportArchive",
],
},
{
"path": "serialization_utils",
"title": "Serialization utilities",
"generate": [
"keras.saving.serialize_keras_object",
"keras.saving.deserialize_keras_object",
"keras.saving.custom_object_scope",
"keras.saving.get_custom_objects",
"keras.saving.register_keras_serializable",
],
},
],
},
],
},
{
"path": "layers/",
"title": "Layers API",
"toc": True,
"children": [
{
"path": "base_layer",
"title": "The base Layer class",
"generate": [
"keras.layers.Layer",
"keras.layers.Layer.weights",
"keras.layers.Layer.trainable_weights",
"keras.layers.Layer.non_trainable_weights",
"keras.layers.Layer.add_weight",
"keras.layers.Layer.trainable",
"keras.layers.Layer.get_weights",
"keras.layers.Layer.set_weights",
"keras.Model.get_config",
"keras.layers.Layer.add_loss",
"keras.layers.Layer.losses",
],
},
{
"path": "activations",
"title": "Layer activations",
"generate": [
"keras.activations.relu",
"keras.activations.sigmoid",
"keras.activations.softmax",
"keras.activations.softplus",
"keras.activations.softsign",
"keras.activations.tanh",
"keras.activations.selu",
"keras.activations.elu",
"keras.activations.exponential",
"keras.activations.leaky_relu",
"keras.activations.relu6",
"keras.activations.silu",
"keras.activations.hard_silu",
"keras.activations.gelu",
"keras.activations.hard_sigmoid",
"keras.activations.linear",
"keras.activations.mish",
"keras.activations.log_softmax",
],
},
{
"path": "initializers",
"title": "Layer weight initializers",
"generate": [
"keras.initializers.RandomNormal",
"keras.initializers.RandomUniform",
"keras.initializers.TruncatedNormal",
"keras.initializers.Zeros",
"keras.initializers.Ones",
"keras.initializers.GlorotNormal",
"keras.initializers.GlorotUniform",
"keras.initializers.HeNormal",
"keras.initializers.HeUniform",
"keras.initializers.Orthogonal",
"keras.initializers.Constant",
"keras.initializers.VarianceScaling",
"keras.initializers.LecunNormal",
"keras.initializers.LecunUniform",
"keras.initializers.IdentityInitializer",
],
},
{
"path": "regularizers",
"title": "Layer weight regularizers",
"generate": [
"keras.regularizers.Regularizer",
"keras.regularizers.L1",
"keras.regularizers.L2",
"keras.regularizers.L1L2",
"keras.regularizers.OrthogonalRegularizer",
],
},
{
"path": "constraints",
"title": "Layer weight constraints",
"generate": [
"keras.constraints.Constraint",
"keras.constraints.MaxNorm",
"keras.constraints.MinMaxNorm",
"keras.constraints.NonNeg",
"keras.constraints.UnitNorm",
],
},
{
"path": "core_layers/",
"title": "Core layers",
"toc": True,
"children": [
{
"path": "input",
"title": "Input object",
"generate": ["keras.Input"],
},
{
"path": "input_spec",
"title": "InputSpec object",
"generate": ["keras.InputSpec"],
},
{
"path": "dense",
"title": "Dense layer",
"generate": ["keras.layers.Dense"],
},
{
"path": "einsum_dense",
"title": "EinsumDense layer",
"generate": ["keras.layers.EinsumDense"],
},
{
"path": "activation",
"title": "Activation layer",
"generate": ["keras.layers.Activation"],
},
{
"path": "embedding",
"title": "Embedding layer",
"generate": ["keras.layers.Embedding"],
},
{
"path": "masking",
"title": "Masking layer",
"generate": ["keras.layers.Masking"],
},
{
"path": "lambda",
"title": "Lambda layer",
"generate": ["keras.layers.Lambda"],
},
{
"path": "identity",
"title": "Identity layer",
"generate": ["keras.layers.Identity"],
},
],
},
{
"path": "convolution_layers/",
"title": "Convolution layers",
"toc": True,
"children": [
{
"path": "convolution1d",
"title": "Conv1D layer",
"generate": ["keras.layers.Conv1D"],
},
{
"path": "convolution2d",
"title": "Conv2D layer",
"generate": ["keras.layers.Conv2D"],
},
{
"path": "convolution3d",
"title": "Conv3D layer",
"generate": ["keras.layers.Conv3D"],
},
{
"path": "separable_convolution1d",
"title": "SeparableConv1D layer",
"generate": ["keras.layers.SeparableConv1D"],
},
{
"path": "separable_convolution2d",
"title": "SeparableConv2D layer",
"generate": ["keras.layers.SeparableConv2D"],
},
{
"path": "depthwise_convolution1d",
"title": "DepthwiseConv1D layer",
"generate": ["keras.layers.DepthwiseConv1D"],
},
{
"path": "depthwise_convolution2d",
"title": "DepthwiseConv2D layer",
"generate": ["keras.layers.DepthwiseConv2D"],
},
{
"path": "convolution1d_transpose",
"title": "Conv1DTranspose layer",
"generate": ["keras.layers.Conv1DTranspose"],
},
{
"path": "convolution2d_transpose",
"title": "Conv2DTranspose layer",
"generate": ["keras.layers.Conv2DTranspose"],
},
{
"path": "convolution3d_transpose",
"title": "Conv3DTranspose layer",
"generate": ["keras.layers.Conv3DTranspose"],
},
],
},
{
"path": "pooling_layers/",
"title": "Pooling layers",
"toc": True,
"children": [
{
"path": "max_pooling1d",
"title": "MaxPooling1D layer",
"generate": ["keras.layers.MaxPooling1D"],
},
{
"path": "max_pooling2d",
"title": "MaxPooling2D layer",
"generate": ["keras.layers.MaxPooling2D"],
},
{
"path": "max_pooling3d",
"title": "MaxPooling3D layer",
"generate": ["keras.layers.MaxPooling3D"],
},
{
"path": "average_pooling1d",
"title": "AveragePooling1D layer",
"generate": ["keras.layers.AveragePooling1D"],
},
{
"path": "average_pooling2d",
"title": "AveragePooling2D layer",
"generate": ["keras.layers.AveragePooling2D"],
},
{
"path": "average_pooling3d",
"title": "AveragePooling3D layer",
"generate": ["keras.layers.AveragePooling3D"],
},
{
"path": "global_max_pooling1d",
"title": "GlobalMaxPooling1D layer",
"generate": ["keras.layers.GlobalMaxPooling1D"],
},
{
"path": "global_max_pooling2d",
"title": "GlobalMaxPooling2D layer",
"generate": ["keras.layers.GlobalMaxPooling2D"],
},
{
"path": "global_max_pooling3d",
"title": "GlobalMaxPooling3D layer",
"generate": ["keras.layers.GlobalMaxPooling3D"],
},
{
"path": "global_average_pooling1d",
"title": "GlobalAveragePooling1D layer",
"generate": ["keras.layers.GlobalAveragePooling1D"],
},
{
"path": "global_average_pooling2d",
"title": "GlobalAveragePooling2D layer",
"generate": ["keras.layers.GlobalAveragePooling2D"],
},
{
"path": "global_average_pooling3d",
"title": "GlobalAveragePooling3D layer",
"generate": ["keras.layers.GlobalAveragePooling3D"],
},
],
},
{
"path": "recurrent_layers/",
"title": "Recurrent layers",
"toc": True,
"children": [
{
"path": "lstm",
"title": "LSTM layer",
"generate": ["keras.layers.LSTM"],
},
{
"path": "lstm_cell",
"title": "LSTM cell layer",
"generate": ["keras.layers.LSTMCell"],
},
{
"path": "gru",
"title": "GRU layer",
"generate": ["keras.layers.GRU"],
},
{
"path": "gru_cell",
"title": "GRU Cell layer",
"generate": ["keras.layers.GRUCell"],
},
{
"path": "simple_rnn",
"title": "SimpleRNN layer",
"generate": ["keras.layers.SimpleRNN"],
},
{
"path": "time_distributed",
"title": "TimeDistributed layer",
"generate": ["keras.layers.TimeDistributed"],
},
{
"path": "bidirectional",
"title": "Bidirectional layer",
"generate": ["keras.layers.Bidirectional"],
},
{
"path": "conv_lstm1d",
"title": "ConvLSTM1D layer",
"generate": ["keras.layers.ConvLSTM1D"],
},
{
"path": "conv_lstm2d",
"title": "ConvLSTM2D layer",
"generate": ["keras.layers.ConvLSTM2D"],
},
{
"path": "conv_lstm3d",
"title": "ConvLSTM3D layer",
"generate": ["keras.layers.ConvLSTM3D"],
},
{
"path": "rnn",
"title": "Base RNN layer",
"generate": ["keras.layers.RNN"],
},
{
"path": "simple_rnn_cell",
"title": "Simple RNN cell layer",
"generate": ["keras.layers.SimpleRNNCell"],
},
{
"path": "stacked_rnn_cell",
"title": "Stacked RNN cell layer",
"generate": ["keras.layers.StackedRNNCells"],
},
],
},
{
"path": "preprocessing_layers/",
"title": "Preprocessing layers",
"toc": True,
"children": [
{
"path": "text/",
"title": "Text preprocessing",
"toc": True,
"children": [
{
"path": "text_vectorization",
"title": "TextVectorization layer",
"generate": ["keras.layers.TextVectorization"],
},
],
},
{
"path": "numerical/",
"title": "Numerical features preprocessing layers",
"toc": True,
"children": [
{
"path": "normalization",
"title": "Normalization layer",
"generate": ["keras.layers.Normalization"],
},
{
"path": "spectral_normalization",
"title": "Spectral Normalization layer",
"generate": ["keras.layers.SpectralNormalization"],
},
{
"path": "discretization",
"title": "Discretization layer",
"generate": ["keras.layers.Discretization"],
},
],
},
{
"path": "categorical/",
"title": "Categorical features preprocessing layers",
"toc": True,
"children": [
{
"path": "category_encoding",
"title": "CategoryEncoding layer",
"generate": ["keras.layers.CategoryEncoding"],
},
{
"path": "hashing",
"title": "Hashing layer",
"generate": ["keras.layers.Hashing"],
},
{
"path": "hashed_crossing",
"title": "HashedCrossing layer",
"generate": ["keras.layers.HashedCrossing"],
},
{
"path": "string_lookup",
"title": "StringLookup layer",
"generate": ["keras.layers.StringLookup"],
},
{
"path": "integer_lookup",
"title": "IntegerLookup layer",
"generate": ["keras.layers.IntegerLookup"],
},
],
},
{
"path": "image_preprocessing/",
"title": "Image preprocessing layers",
"toc": True,
"children": [
{
"path": "resizing",
"title": "Resizing layer",
"generate": ["keras.layers.Resizing"],
},
{
"path": "rescaling",
"title": "Rescaling layer",
"generate": ["keras.layers.Rescaling"],
},
{
"path": "center_crop",
"title": "CenterCrop layer",
"generate": ["keras.layers.CenterCrop"],
},
],
},
{
"path": "image_augmentation/",
"title": "Image augmentation layers",
"toc": True,
"children": [
{
"path": "random_crop",
"title": "RandomCrop layer",
"generate": ["keras.layers.RandomCrop"],
},
{
"path": "random_flip",
"title": "RandomFlip layer",
"generate": ["keras.layers.RandomFlip"],
},
{
"path": "random_translation",
"title": "RandomTranslation layer",
"generate": ["keras.layers.RandomTranslation"],
},
{
"path": "random_rotation",
"title": "RandomRotation layer",
"generate": ["keras.layers.RandomRotation"],
},
{
"path": "random_zoom",
"title": "RandomZoom layer",
"generate": ["keras.layers.RandomZoom"],
},
{
"path": "random_contrast",
"title": "RandomContrast layer",
"generate": ["keras.layers.RandomContrast"],
},
{
"path": "random_brightness",
"title": "RandomBrightness layer",
"generate": ["keras.layers.RandomBrightness"],
},
],
},
],
},
{
"path": "normalization_layers/",
"title": "Normalization layers",
"toc": True,
"children": [
{
"path": "batch_normalization",
"title": "BatchNormalization layer",
"generate": ["keras.layers.BatchNormalization"],
},
{
"path": "layer_normalization",
"title": "LayerNormalization layer",
"generate": ["keras.layers.LayerNormalization"],
},
{
"path": "unit_normalization",
"title": "UnitNormalization layer",
"generate": ["keras.layers.UnitNormalization"],
},
{
"path": "group_normalization",
"title": "GroupNormalization layer",
"generate": ["keras.layers.GroupNormalization"],
},
],
},
{
"path": "regularization_layers/",
"title": "Regularization layers",
"toc": True,
"children": [
{
"path": "dropout",
"title": "Dropout layer",
"generate": ["keras.layers.Dropout"],
},
{
"path": "spatial_dropout1d",
"title": "SpatialDropout1D layer",
"generate": ["keras.layers.SpatialDropout1D"],
},
{
"path": "spatial_dropout2d",
"title": "SpatialDropout2D layer",
"generate": ["keras.layers.SpatialDropout2D"],
},
{
"path": "spatial_dropout3d",
"title": "SpatialDropout3D layer",
"generate": ["keras.layers.SpatialDropout3D"],
},
{
"path": "gaussian_dropout",
"title": "GaussianDropout layer",
"generate": ["keras.layers.GaussianDropout"],
},
{
"path": "alpha_dropout",
"title": "AlphaDropout layer",
"generate": ["keras.layers.AlphaDropout"],
},
{
"path": "gaussian_noise",
"title": "GaussianNoise layer",
"generate": ["keras.layers.GaussianNoise"],
},
{
"path": "activity_regularization",
"title": "ActivityRegularization layer",
"generate": ["keras.layers.ActivityRegularization"],
},
],
},
{
"path": "attention_layers/",
"title": "Attention layers",
"toc": True,
"children": [
{
"path": "group_query_attention",
"title": "GroupQueryAttention",
"generate": ["keras.layers.GroupQueryAttention"],
},
{
"path": "multi_head_attention",
"title": "MultiHeadAttention layer",
"generate": ["keras.layers.MultiHeadAttention"],
},
{
"path": "attention",
"title": "Attention layer",
"generate": ["keras.layers.Attention"],
},
{
"path": "additive_attention",
"title": "AdditiveAttention layer",
"generate": ["keras.layers.AdditiveAttention"],
},
],
},
{
"path": "reshaping_layers/",
"title": "Reshaping layers",
"toc": True,
"children": [
{
"path": "reshape",
"title": "Reshape layer",
"generate": ["keras.layers.Reshape"],
},
{
"path": "flatten",
"title": "Flatten layer",
"generate": ["keras.layers.Flatten"],
},
{
"path": "repeat_vector",
"title": "RepeatVector layer",
"generate": ["keras.layers.RepeatVector"],
},
{
"path": "permute",
"title": "Permute layer",
"generate": ["keras.layers.Permute"],
},
{
"path": "cropping1d",
"title": "Cropping1D layer",
"generate": ["keras.layers.Cropping1D"],
},
{
"path": "cropping2d",
"title": "Cropping2D layer",
"generate": ["keras.layers.Cropping2D"],
},
{
"path": "cropping3d",
"title": "Cropping3D layer",
"generate": ["keras.layers.Cropping3D"],
},
{
"path": "up_sampling1d",
"title": "UpSampling1D layer",
"generate": ["keras.layers.UpSampling1D"],
},
{
"path": "up_sampling2d",
"title": "UpSampling2D layer",
"generate": ["keras.layers.UpSampling2D"],
},
{
"path": "up_sampling3d",
"title": "UpSampling3D layer",
"generate": ["keras.layers.UpSampling3D"],
},
{
"path": "zero_padding1d",
"title": "ZeroPadding1D layer",
"generate": ["keras.layers.ZeroPadding1D"],
},
{
"path": "zero_padding2d",
"title": "ZeroPadding2D layer",
"generate": ["keras.layers.ZeroPadding2D"],
},
{
"path": "zero_padding3d",
"title": "ZeroPadding3D layer",
"generate": ["keras.layers.ZeroPadding3D"],
},
],
},
{
"path": "merging_layers/",
"title": "Merging layers",
"toc": True,
"children": [
{
"path": "concatenate",
"title": "Concatenate layer",
"generate": ["keras.layers.Concatenate"],
},
{
"path": "average",
"title": "Average layer",
"generate": ["keras.layers.Average"],
},
{
"path": "maximum",
"title": "Maximum layer",
"generate": ["keras.layers.Maximum"],
},
{
"path": "minimum",
"title": "Minimum layer",
"generate": ["keras.layers.Minimum"],
},
{
"path": "add",
"title": "Add layer",
"generate": ["keras.layers.Add"],
},
{
"path": "subtract",
"title": "Subtract layer",
"generate": ["keras.layers.Subtract"],
},
{
"path": "multiply",
"title": "Multiply layer",
"generate": ["keras.layers.Multiply"],
},
{
"path": "dot",
"title": "Dot layer",
"generate": ["keras.layers.Dot"],
},
],
},
{
"path": "activation_layers/",
"title": "Activation layers",
"toc": True,
"children": [
{
"path": "relu",
"title": "ReLU layer",
"generate": ["keras.layers.ReLU"],
},
{
"path": "softmax",
"title": "Softmax layer",
"generate": ["keras.layers.Softmax"],
},
{
"path": "leaky_relu",
"title": "LeakyReLU layer",
"generate": ["keras.layers.LeakyReLU"],
},
{
"path": "prelu",
"title": "PReLU layer",
"generate": ["keras.layers.PReLU"],
},
{
"path": "elu",
"title": "ELU layer",
"generate": ["keras.layers.ELU"],
},
],
},
{
"path": "backend_specific_layers/",
"title": "Backend-specific layers",
"toc": True,
"children": [
{
"path": "torch_module_wrapper",
"title": "TorchModuleWrapper layer",
"generate": ["keras.layers.TorchModuleWrapper"],
},
{
"path": "tfsm_layer",
"title": "Tensorflow SavedModel layer",
"generate": ["keras.layers.TFSMLayer"],
},
],
},
],
},
{
"path": "callbacks/",
"title": "Callbacks API",
"toc": True,
"children": [
{
"path": "base_callback",
"title": "Base Callback class",
"generate": ["keras.callbacks.Callback"],
},
{
"path": "model_checkpoint",
"title": "ModelCheckpoint",
"generate": ["keras.callbacks.ModelCheckpoint"],
},
{
"path": "backup_and_restore",
"title": "BackupAndRestore",
"generate": ["keras.callbacks.BackupAndRestore"],
},
{
"path": "tensorboard",
"title": "TensorBoard",
"generate": ["keras.callbacks.TensorBoard"],
},
{
"path": "early_stopping",
"title": "EarlyStopping",
"generate": ["keras.callbacks.EarlyStopping"],
},
{
"path": "learning_rate_scheduler",
"title": "LearningRateScheduler",
"generate": ["keras.callbacks.LearningRateScheduler"],
},
{
"path": "reduce_lr_on_plateau",
"title": "ReduceLROnPlateau",
"generate": ["keras.callbacks.ReduceLROnPlateau"],
},
{
"path": "remote_monitor",
"title": "RemoteMonitor",
"generate": ["keras.callbacks.RemoteMonitor"],
},
{
"path": "lambda_callback",
"title": "LambdaCallback",
"generate": ["keras.callbacks.LambdaCallback"],
},
{
"path": "terminate_on_nan",
"title": "TerminateOnNaN",
"generate": ["keras.callbacks.TerminateOnNaN"],
},
{
"path": "csv_logger",
"title": "CSVLogger",
"generate": ["keras.callbacks.CSVLogger"],
},
{
"path": "progbar_logger",
"title": "ProgbarLogger",
"generate": ["keras.callbacks.ProgbarLogger"],
},
{
"path": "swap_ema_weights",
"title": "SwapEMAWeights",
"generate": ["keras.callbacks.SwapEMAWeights"],
},
],
},
{
"path": "ops/", # TODO: improve
"title": "Ops API",
"toc": True,
"children": [
{
"path": "numpy/",
"title": "NumPy ops",
"generate": [
"keras.ops.absolute",
"keras.ops.add",
"keras.ops.all",
"keras.ops.amax",
"keras.ops.amin",
"keras.ops.any",
"keras.ops.append",
"keras.ops.arange",
"keras.ops.arccos",
"keras.ops.arccosh",
"keras.ops.arcsin",
"keras.ops.arcsinh",
"keras.ops.arctan",
"keras.ops.arctan2",
"keras.ops.arctanh",
"keras.ops.argmax",
"keras.ops.argmin",
"keras.ops.argsort",
"keras.ops.array",
"keras.ops.average",
"keras.ops.bincount",
"keras.ops.broadcast_to",
"keras.ops.ceil",
"keras.ops.clip",
"keras.ops.concatenate",
"keras.ops.conj",
"keras.ops.conjugate",
"keras.ops.copy",
"keras.ops.cos",
"keras.ops.cosh",
"keras.ops.count_nonzero",
"keras.ops.cross",
"keras.ops.cumprod",
"keras.ops.cumsum",
"keras.ops.diag",
"keras.ops.diagonal",
"keras.ops.diff",
"keras.ops.digitize",
"keras.ops.divide",
"keras.ops.divide_no_nan",
"keras.ops.dot",
"keras.ops.einsum",
"keras.ops.empty",
"keras.ops.equal",
"keras.ops.exp",
"keras.ops.expand_dims",
"keras.ops.expm1",
"keras.ops.eye",
"keras.ops.flip",
"keras.ops.floor",
"keras.ops.floor_divide",
"keras.ops.full",
"keras.ops.full_like",
"keras.ops.get_item",
"keras.ops.greater",
"keras.ops.greater_equal",
"keras.ops.hstack",
"keras.ops.identity",
"keras.ops.imag",
"keras.ops.isclose",
"keras.ops.isfinite",
"keras.ops.isinf",
"keras.ops.isnan",
"keras.ops.less",
"keras.ops.less_equal",
"keras.ops.linspace",
"keras.ops.log",
"keras.ops.log10",
"keras.ops.log1p",
"keras.ops.log2",
"keras.ops.logaddexp",
"keras.ops.logical_and",
"keras.ops.logical_not",
"keras.ops.logical_or",
"keras.ops.logical_xor",
"keras.ops.logspace",
"keras.ops.matmul",
"keras.ops.max",
"keras.ops.maximum",
"keras.ops.mean",
"keras.ops.median",
"keras.ops.meshgrid",
"keras.ops.min",
"keras.ops.minimum",
"keras.ops.mod",
"keras.ops.moveaxis",
"keras.ops.multiply",
"keras.ops.nan_to_num",
"keras.ops.ndim",
"keras.ops.negative",
"keras.ops.nonzero",
"keras.ops.norm",
"keras.ops.not_equal",
"keras.ops.ones",
"keras.ops.ones_like",
"keras.ops.outer",
"keras.ops.pad",
"keras.ops.power",
"keras.ops.prod",
"keras.ops.quantile",
"keras.ops.ravel",
"keras.ops.real",
"keras.ops.reciprocal",
"keras.ops.repeat",
"keras.ops.reshape",
"keras.ops.roll",
"keras.ops.round",
"keras.ops.sign",
"keras.ops.sin",
"keras.ops.sinh",
"keras.ops.size",
"keras.ops.sort",
"keras.ops.split",
"keras.ops.sqrt",
"keras.ops.square",
"keras.ops.squeeze",
"keras.ops.stack",
"keras.ops.std",
"keras.ops.subtract",
"keras.ops.sum",
"keras.ops.swapaxes",
"keras.ops.take",
"keras.ops.take_along_axis",
"keras.ops.tan",
"keras.ops.tanh",
"keras.ops.tensordot",
"keras.ops.tile",
"keras.ops.trace",
"keras.ops.transpose",
"keras.ops.tri",
"keras.ops.tril",
"keras.ops.triu",
"keras.ops.true_divide",
"keras.ops.var",
"keras.ops.vdot",
"keras.ops.vstack",
"keras.ops.where",
"keras.ops.zeros",
"keras.ops.zeros_like",
],
},
{
"path": "nn/",
"title": "NN ops",
"generate": [
"keras.ops.average_pool",
"keras.ops.batch_normalization",
"keras.ops.binary_crossentropy",
"keras.ops.categorical_crossentropy",
"keras.ops.conv",
"keras.ops.conv_transpose",
"keras.ops.depthwise_conv",
"keras.ops.elu",
"keras.ops.gelu",
"keras.ops.hard_sigmoid",
"keras.ops.leaky_relu",
"keras.ops.log_sigmoid",
"keras.ops.log_softmax",
"keras.ops.max_pool",
"keras.ops.moments",
"keras.ops.multi_hot",
"keras.ops.one_hot",
"keras.ops.relu",
"keras.ops.relu6",
"keras.ops.selu",
"keras.ops.separable_conv",
"keras.ops.sigmoid",
"keras.ops.silu",
"keras.ops.hard_silu",
"keras.ops.softmax",
"keras.ops.softplus",
"keras.ops.softsign",
"keras.ops.sparse_categorical_crossentropy",
"keras.ops.swish",
"keras.ops.hard_swish",
],
},
{
"path": "linalg/",
"title": "Linear algebra ops",
"generate": [
"keras.ops.cholesky",
"keras.ops.det",
"keras.ops.eig",
"keras.ops.inv",
"keras.ops.lu_factor",
"keras.ops.norm",
"keras.ops.qr",
"keras.ops.solve",
"keras.ops.solve_triangular",
"keras.ops.svd",
],
},
{
"path": "core/",
"title": "Core ops",
"generate": [
"keras.ops.cast",
"keras.ops.cond",
"keras.ops.convert_to_numpy",
"keras.ops.convert_to_tensor",
"keras.ops.erf",
"keras.ops.erfinv",
"keras.ops.extract_sequences",
"keras.ops.fori_loop",
"keras.ops.in_top_k",
"keras.ops.is_tensor",
"keras.ops.logsumexp",
"keras.ops.rsqrt",
"keras.ops.scatter",
"keras.ops.scatter_update",
"keras.ops.segment_max",
"keras.ops.segment_sum",
"keras.ops.shape",
"keras.ops.slice",
"keras.ops.slice_update",
"keras.ops.stop_gradient",
"keras.ops.top_k",
"keras.ops.unstack",
"keras.ops.vectorized_map",
"keras.ops.while_loop",
],
},
{
"path": "image/",
"title": "Image ops",
"generate": [
"keras.ops.image.affine_transform",
"keras.ops.image.extract_patches",
"keras.ops.image.map_coordinates",
"keras.ops.image.pad_images",
"keras.ops.image.resize",
],
},
{
"path": "fft/",
"title": "FFT ops",
"generate": [
"keras.ops.fft",
"keras.ops.fft2",
"keras.ops.rfft",
"keras.ops.stft",
"keras.ops.irfft",
"keras.ops.istft",
],
},
],
},
{
"path": "optimizers/",
"title": "Optimizers",
"toc": True,
"generate": [
"keras.optimizers.Optimizer",
"keras.optimizers.Optimizer.apply_gradients",
"keras.optimizers.Optimizer.variables",
],
"children": [
{
"path": "sgd",
"title": "SGD",
"generate": ["keras.optimizers.SGD"],
},
{
"path": "rmsprop",
"title": "RMSprop",
"generate": ["keras.optimizers.RMSprop"],
},
{
"path": "adam",
"title": "Adam",
"generate": ["keras.optimizers.Adam"],
},
{
"path": "adamw",
"title": "AdamW",
"generate": ["keras.optimizers.AdamW"],
},
{
"path": "adadelta",
"title": "Adadelta",
"generate": ["keras.optimizers.Adadelta"],
},
{
"path": "adagrad",
"title": "Adagrad",
"generate": ["keras.optimizers.Adagrad"],
},
{
"path": "adamax",
"title": "Adamax",
"generate": ["keras.optimizers.Adamax"],
},
{
"path": "adafactor",
"title": "Adafactor",
"generate": ["keras.optimizers.Adafactor"],
},
{
"path": "Nadam",
"title": "Nadam",
"generate": ["keras.optimizers.Nadam"],
},
{
"path": "ftrl",
"title": "Ftrl",
"generate": ["keras.optimizers.Ftrl"],
},
{
"path": "lion",
"title": "Lion",
"generate": ["keras.optimizers.Lion"],
},
{
"path": "loss_scale_optimizer",
"title": "Loss Scale Optimizer",
"generate": ["keras.optimizers.LossScaleOptimizer"],
},
{
"path": "learning_rate_schedules/",
"title": "Learning rate schedules API",
"toc": True,
"skip_from_toc": True,
"children": [
{
"path": "learning_rate_schedule",
"title": "LearningRateSchedule",
"generate": [
"keras.optimizers.schedules.LearningRateSchedule"
],
},
{
"path": "exponential_decay",
"title": "ExponentialDecay",
"generate": ["keras.optimizers.schedules.ExponentialDecay"],
},
{
"path": "piecewise_constant_decay",
"title": "PiecewiseConstantDecay",
"generate": [
"keras.optimizers.schedules.PiecewiseConstantDecay"
],
},
{
"path": "polynomial_decay",
"title": "PolynomialDecay",
"generate": ["keras.optimizers.schedules.PolynomialDecay"],
},
{
"path": "inverse_time_decay",
"title": "InverseTimeDecay",
"generate": ["keras.optimizers.schedules.InverseTimeDecay"],
},
{
"path": "cosine_decay",
"title": "CosineDecay",
"generate": ["keras.optimizers.schedules.CosineDecay"],
},
{
"path": "cosine_decay_restarts",
"title": "CosineDecayRestarts",
"generate": [
"keras.optimizers.schedules.CosineDecayRestarts"
],
},
],
},
],
},
{
"path": "metrics/",
"title": "Metrics",
"toc": True,
"children": [
{
"path": "base_metric",
"title": "Base Metric class",
"generate": [
"keras.metrics.Metric",
],
},
{
"path": "accuracy_metrics",
"title": "Accuracy metrics",
"generate": [
"keras.metrics.Accuracy",
"keras.metrics.BinaryAccuracy",
"keras.metrics.CategoricalAccuracy",
"keras.metrics.SparseCategoricalAccuracy",
"keras.metrics.TopKCategoricalAccuracy",
"keras.metrics.SparseTopKCategoricalAccuracy",
],
},
{
"path": "probabilistic_metrics",
"title": "Probabilistic metrics",
"generate": [
"keras.metrics.BinaryCrossentropy",
"keras.metrics.CategoricalCrossentropy",
"keras.metrics.SparseCategoricalCrossentropy",
"keras.metrics.KLDivergence",
"keras.metrics.Poisson",
],
},
{
"path": "regression_metrics",
"title": "Regression metrics",
"generate": [
"keras.metrics.MeanSquaredError",
"keras.metrics.RootMeanSquaredError",
"keras.metrics.MeanAbsoluteError",
"keras.metrics.MeanAbsolutePercentageError",
"keras.metrics.MeanSquaredLogarithmicError",
"keras.metrics.CosineSimilarity",
"keras.metrics.LogCoshError",
"keras.metrics.R2Score",
],
},
{
"path": "classification_metrics",
"title": "Classification metrics based on True/False positives & negatives",
"generate": [
"keras.metrics.AUC",
"keras.metrics.Precision",
"keras.metrics.Recall",
"keras.metrics.TruePositives",
"keras.metrics.TrueNegatives",
"keras.metrics.FalsePositives",
"keras.metrics.FalseNegatives",
"keras.metrics.PrecisionAtRecall",
"keras.metrics.RecallAtPrecision",
"keras.metrics.SensitivityAtSpecificity",
"keras.metrics.SpecificityAtSensitivity",
"keras.metrics.F1Score",
"keras.metrics.FBetaScore",
],
},
{
"path": "segmentation_metrics",
"title": "Image segmentation metrics",
"generate": [
"keras.metrics.IoU",
"keras.metrics.BinaryIoU",
"keras.metrics.OneHotIoU",
"keras.metrics.OneHotMeanIoU",
"keras.metrics.MeanIoU",
],
},
{
"path": "hinge_metrics",
"title": 'Hinge metrics for "maximum-margin" classification',
"generate": [
"keras.metrics.Hinge",
"keras.metrics.SquaredHinge",
"keras.metrics.CategoricalHinge",
],
},
{
"path": "metrics_wrappers",
"title": "Metric wrappers and reduction metrics",
"generate": [
"keras.metrics.MeanMetricWrapper",
"keras.metrics.Mean",
"keras.metrics.Sum",
],
},
],
},
{
"path": "losses/",
"title": "Losses",
"toc": True,
"children": [
{
"path": "probabilistic_losses",
"title": "Probabilistic losses",
"generate": [
"keras.losses.BinaryCrossentropy",
"keras.losses.BinaryFocalCrossentropy",
"keras.losses.CategoricalCrossentropy",
"keras.losses.CategoricalFocalCrossentropy",
"keras.losses.SparseCategoricalCrossentropy",
"keras.losses.Poisson",
"keras.losses.binary_crossentropy",
"keras.losses.categorical_crossentropy",
"keras.losses.sparse_categorical_crossentropy",
"keras.losses.poisson",
"keras.losses.KLDivergence",
"keras.losses.kl_divergence",
"keras.losses.CTC",
],
},
{
"path": "regression_losses",
"title": "Regression losses",
"generate": [
"keras.losses.MeanSquaredError",
"keras.losses.MeanAbsoluteError",
"keras.losses.MeanAbsolutePercentageError",
"keras.losses.MeanSquaredLogarithmicError",
"keras.losses.CosineSimilarity",
"keras.losses.mean_squared_error",
"keras.losses.mean_absolute_error",
"keras.losses.mean_absolute_percentage_error",
"keras.losses.mean_squared_logarithmic_error",
"keras.losses.cosine_similarity",
"keras.losses.Huber",
"keras.losses.huber",
"keras.losses.LogCosh",
"keras.losses.log_cosh",
],
},
{
"path": "hinge_losses",
"title": 'Hinge losses for "maximum-margin" classification',
"generate": [
"keras.losses.Hinge",
"keras.losses.SquaredHinge",
"keras.losses.CategoricalHinge",
"keras.losses.hinge",
"keras.losses.squared_hinge",
"keras.losses.categorical_hinge",
],
},
],
},
{
"path": "data_loading/",
"title": "Data loading",
"toc": True,
"children": [
{
"path": "image",
"title": "Image data loading",
"generate": [
"keras.utils.image_dataset_from_directory",
"keras.utils.load_img",
"keras.utils.img_to_array",
"keras.utils.save_img",
"keras.utils.array_to_img",
],
},
{
"path": "timeseries",
"title": "Timeseries data loading",
"generate": [
"keras.utils.timeseries_dataset_from_array",
"keras.utils.pad_sequences", # LEGACY
],
},
{
"path": "text",
"title": "Text data loading",
"generate": [
"keras.utils.text_dataset_from_directory",
],
},
{
"path": "audio",
"title": "Audio data loading",
"generate": [
"keras.utils.audio_dataset_from_directory",
],
},
],
},
{
"path": "datasets/",
"title": "Built-in small datasets",
"toc": True,
"children": [
{
"path": "mnist",
"title": "MNIST digits classification dataset",
"generate": ["keras.datasets.mnist.load_data"],
},
{
"path": "cifar10",
"title": "CIFAR10 small images classification dataset",
"generate": ["keras.datasets.cifar10.load_data"],
},
{
"path": "cifar100",
"title": "CIFAR100 small images classification dataset",
"generate": ["keras.datasets.cifar100.load_data"],
},
{
"path": "imdb",
"title": "IMDB movie review sentiment classification dataset",
"generate": [
"keras.datasets.imdb.load_data",
"keras.datasets.imdb.get_word_index",
],
},
{
"path": "reuters",
"title": "Reuters newswire classification dataset",
"generate": [
"keras.datasets.reuters.load_data",
"keras.datasets.reuters.get_word_index",
],
},
{
"path": "fashion_mnist",
"title": "Fashion MNIST dataset, an alternative to MNIST",
"generate": ["keras.datasets.fashion_mnist.load_data"],
},
{
"path": "california_housing",
"title": "California Housing price regression dataset",
"generate": ["keras.datasets.california_housing.load_data"],
},
],
},
{
"path": "applications/",
"title": "Keras Applications",
"children": [
{
"path": "xception",
"title": "Xception",
"generate": ["keras.applications.Xception"],
},
{
"path": "efficientnet",
"title": "EfficientNet B0 to B7",
"generate": [
"keras.applications.EfficientNetB0",
"keras.applications.EfficientNetB1",
"keras.applications.EfficientNetB2",
"keras.applications.EfficientNetB3",
"keras.applications.EfficientNetB4",
"keras.applications.EfficientNetB5",
"keras.applications.EfficientNetB6",
"keras.applications.EfficientNetB7",
],
},
{
"path": "efficientnet_v2",
"title": "EfficientNetV2 B0 to B3 and S, M, L",
"generate": [
"keras.applications.EfficientNetV2B0",
"keras.applications.EfficientNetV2B1",
"keras.applications.EfficientNetV2B2",
"keras.applications.EfficientNetV2B3",
"keras.applications.EfficientNetV2S",
"keras.applications.EfficientNetV2M",
"keras.applications.EfficientNetV2L",
],
},
{
"path": "convnext",
"title": "ConvNeXt Tiny, Small, Base, Large, XLarge",
"generate": [
"keras.applications.ConvNeXtTiny",
"keras.applications.ConvNeXtSmall",
"keras.applications.ConvNeXtBase",
"keras.applications.ConvNeXtLarge",
"keras.applications.ConvNeXtXLarge",
],
},
{
"path": "vgg",
"title": "VGG16 and VGG19",
"generate": [
"keras.applications.VGG16",
"keras.applications.VGG19",
],
},
{
"path": "resnet",
"title": "ResNet and ResNetV2",
"generate": [
"keras.applications.ResNet50",
"keras.applications.ResNet101",
"keras.applications.ResNet152",
"keras.applications.ResNet50V2",
"keras.applications.ResNet101V2",
"keras.applications.ResNet152V2",
],
},
{
"path": "mobilenet",
"title": "MobileNet, MobileNetV2, and MobileNetV3",
"generate": [
"keras.applications.MobileNet",
"keras.applications.MobileNetV2",
"keras.applications.MobileNetV3Small",
"keras.applications.MobileNetV3Large",
],
},
{
"path": "densenet",
"title": "DenseNet",
"generate": [
"keras.applications.DenseNet121",
"keras.applications.DenseNet169",
"keras.applications.DenseNet201",
],
},
{
"path": "nasnet",
"title": "NasNetLarge and NasNetMobile",
"generate": [
"keras.applications.NASNetLarge",
"keras.applications.NASNetMobile",
],
},
{
"path": "inceptionv3",
"title": "InceptionV3",
"generate": [
"keras.applications.InceptionV3",
],
},
{
"path": "inceptionresnetv2",
"title": "InceptionResNetV2",
"generate": [
"keras.applications.InceptionResNetV2",
],
},
],
},
{
"path": "mixed_precision/",
"title": "Mixed precision",
"toc": True,
"children": [
{
"path": "policy",
"title": "Mixed precision policy API",
"generate": [
"keras.mixed_precision.DTypePolicy",
"keras.mixed_precision.dtype_policy",
"keras.mixed_precision.set_dtype_policy",
],
},
],
},
{
"path": "distribution/",
"title": "Multi-device distribution",
"toc": True,
"children": [
{
"path": "layout_map",
"title": "LayoutMap API",
"generate": [
"keras.distribution.LayoutMap",
"keras.distribution.DeviceMesh",
"keras.distribution.TensorLayout",
"keras.distribution.distribute_tensor",
],
},
{
"path": "data_parallel",
"title": "DataParallel API",
"generate": [
"keras.distribution.DataParallel",
],
},
{
"path": "model_parallel",
"title": "ModelParallel API",
"generate": [
"keras.distribution.ModelParallel",
],
},
{
"path": "distribution_utils",
"title": "Distribution utilities",
"generate": [
"keras.distribution.set_distribution",
"keras.distribution.distribution",
"keras.distribution.list_devices",
"keras.distribution.initialize",
],
},
],
},
{
"path": "random/",
"title": "RNG API",
"toc": True,
"children": [
{
"path": "seed_generator",
"title": "SeedGenerator class",
"generate": ["keras.random.SeedGenerator"],
},
{
"path": "random_ops",
"title": "Random operations",
"generate": [
"keras.random.categorical",
"keras.random.dropout",
"keras.random.gamma",
"keras.random.normal",
"keras.random.randint",
"keras.random.shuffle",
"keras.random.truncated_normal",
"keras.random.uniform",
],
},
],
},
{
"path": "utils/",
"title": "Utilities",
"toc": True,
"children": [
{
"path": "model_plotting_utils",
"title": "Model plotting utilities",
"generate": [
"keras.utils.plot_model",
"keras.utils.model_to_dot",
],
},
{
"path": "feature_space",
"title": "Structured data preprocessing utilities",
"generate": [
"keras.utils.FeatureSpace",
],
},
{
"path": "tensor_utils",
"title": "Tensor utilities",
"generate": [
"keras.utils.get_source_inputs",
"keras.utils.is_keras_tensor",
# "keras.backend.standardize_dtype", # TODO: enable later
# "keras.backend.is_float_dtype",
# "keras.backend.is_int_dtype",
],
},
{
"path": "python_utils",
"title": "Python & NumPy utilities",
"generate": [
"keras.utils.set_random_seed",
"keras.utils.split_dataset",
"keras.utils.pack_x_y_sample_weight",
"keras.utils.get_file",
"keras.utils.Progbar",
"keras.utils.PyDataset",
"keras.utils.to_categorical",
"keras.utils.normalize",
],
},
{
"path": "config_utils",
"title": "Keras configuration utilities",
"generate": [
"keras.version",
"keras.utils.clear_session",
"keras.config.enable_traceback_filtering",
"keras.config.disable_traceback_filtering",
"keras.config.is_traceback_filtering_enabled",
"keras.config.enable_interactive_logging",
"keras.config.disable_interactive_logging",
"keras.config.is_interactive_logging_enabled",
"keras.config.enable_unsafe_deserialization",
"keras.config.floatx",
"keras.config.set_floatx",
"keras.config.image_data_format",
"keras.config.set_image_data_format",
"keras.config.epsilon",
"keras.config.set_epsilon",
"keras.config.backend",
],
},
],
},
KT_API_MASTER,
CV_API_MASTER,
NLP_API_MASTER,
],
}
| keras-io/scripts/api_master.py/0 | {
"file_path": "keras-io/scripts/api_master.py",
"repo_id": "keras-io",
"token_count": 54606
} | 118 |
"""Script to upload post-build `site/` contents to GCS.
Prerequisite steps:
```
gcloud auth login
gcloud config set project keras-io
```
The site can be previewed at http://keras.io.storage.googleapis.com/index.html
NOTE that when previewing through the storage.googleapis.com URL,
there is no redirect to `index.html` or `404.html`; you'd have to navigate directly
to these pages. From the docs:
```
The MainPageSuffix and NotFoundPage website configurations
are only used for requests that come to Cloud Storage through a CNAME or A redirect.
For example, a request to www.example.com shows the index page,
but an equivalent request to storage.googleapis.com/www.example.com does not.
```
After upload, you may need to invalidate the CDN cache.
"""
import os
import pathlib
bucket = "keras.io" # Bucket under `keras-io` project
scripts_path = pathlib.Path(os.path.dirname(__file__))
base_path = scripts_path.parent
site_path = base_path / "site"
site_dir = site_path.absolute()
os.system(f"gsutil -m rsync -R {site_dir} gs://{bucket}")
| keras-io/scripts/upload_to_gcs.py/0 | {
"file_path": "keras-io/scripts/upload_to_gcs.py",
"repo_id": "keras-io",
"token_count": 331
} | 119 |
# DistilBERT
Models, tokenizers, and preprocessing layers for DistilBERT,
as described in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108).
For a full list of available **presets**, see the
[models page](/api/keras_nlp/models).
{{toc}}
| keras-io/templates/api/keras_nlp/models/distil_bert/index.md/0 | {
"file_path": "keras-io/templates/api/keras_nlp/models/distil_bert/index.md",
"repo_id": "keras-io",
"token_count": 102
} | 120 |
# Losses
The purpose of loss functions is to compute the quantity that a model should seek
to minimize during training.
## Available losses
Note that all losses are available both via a class handle and via a function handle.
The class handles enable you to pass configuration arguments to the constructor
(e.g.
`loss_fn = CategoricalCrossentropy(from_logits=True)`),
and they perform reduction by default when used in a standalone way (see details below).
{{toc}}
---
## Usage of losses with `compile()` & `fit()`
A loss function is one of the two arguments required for compiling a Keras model:
```python
import keras
from keras import layers
model = keras.Sequential()
model.add(layers.Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(layers.Activation('softmax'))
loss_fn = keras.losses.SparseCategoricalCrossentropy()
model.compile(loss=loss_fn, optimizer='adam')
```
All built-in loss functions may also be passed via their string identifier:
```python
# pass optimizer by name: default parameters will be used
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
```
Loss functions are typically created by instantiating a loss class (e.g. `keras.losses.SparseCategoricalCrossentropy`).
All losses are also provided as function handles (e.g. `keras.losses.sparse_categorical_crossentropy`).
Using classes enables you to pass configuration arguments at instantiation time, e.g.:
```python
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```
---
## Standalone usage of losses
A loss is a callable with arguments `loss_fn(y_true, y_pred, sample_weight=None)`:
- **y_true**: Ground truth values, of shape `(batch_size, d0, ... dN)`. For
sparse loss functions, such as sparse categorical crossentropy, the shape
should be `(batch_size, d0, ... dN-1)`
- **y_pred**: The predicted values, of shape `(batch_size, d0, .. dN)`.
- **sample_weight**: Optional `sample_weight` acts as reduction weighting
coefficient for the per-sample losses. If a scalar is provided, then the loss is
simply scaled by the given value. If `sample_weight` is a tensor of size
`[batch_size]`, then the total loss for each sample of the batch is
rescaled by the corresponding element in the `sample_weight` vector. If
the shape of `sample_weight` is `(batch_size, d0, ... dN-1)` (or can be
broadcasted to this shape), then each loss element of `y_pred` is scaled
 by the corresponding value of `sample_weight`. (Note on `dN-1`: all loss
 functions reduce by 1 dimension, usually `axis=-1`.) See the example below.
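
For example, here is a minimal sketch of this call signature with `sample_weight`
(the arrays below are illustrative):

```python
import numpy as np
import keras

loss_fn = keras.losses.MeanSquaredError()
y_true = np.array([[0.0], [1.0]])
y_pred = np.array([[1.0], [1.0]])

# Unweighted call: the per-sample squared errors are [1., 0.], which the class
# instance reduces to a single scalar by default (see the discussion of
# `reduction` below).
loss_fn(y_true, y_pred)

# Weighted call: each per-sample loss is scaled by the matching entry in
# `sample_weight` before the reduction is applied.
loss_fn(y_true, y_pred, sample_weight=np.array([0.7, 0.3]))
```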
By default, loss functions return one scalar loss value per input sample, e.g.
```
>>> keras.losses.mean_squared_error(tf.ones((2, 2,)), tf.zeros((2, 2)))
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
```
However, loss class instances feature a `reduction` constructor argument,
which defaults to `"sum_over_batch_size"` (i.e. average). Allowable values are
"sum_over_batch_size", "sum", and "none":
- "sum_over_batch_size" means the loss instance will return the average
of the per-sample losses in the batch.
- "sum" means the loss instance will return the sum of the per-sample losses in the batch.
- "none" means the loss instance will return the full array of per-sample losses.
```
>>> loss_fn = keras.losses.MeanSquaredError(reduction='sum_over_batch_size')
>>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
```
```
>>> loss_fn = keras.losses.MeanSquaredError(reduction='sum')
>>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
<tf.Tensor: shape=(), dtype=float32, numpy=2.0>
```
```
>>> loss_fn = keras.losses.MeanSquaredError(reduction='none')
>>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
```
Note that this is an important difference between loss functions like `keras.losses.mean_squared_error`
and default loss class instances like `keras.losses.MeanSquaredError`: the function version
does not perform reduction, but by default the class instance does.
```
>>> loss_fn = keras.losses.mean_squared_error
>>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
```
```
>>> loss_fn = keras.losses.MeanSquaredError()
>>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
```
When using `fit()`, this difference is irrelevant since reduction is handled by the framework.
Here's how you would use a loss class instance as part of a simple training loop:
```python
loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam()
# Iterate over the batches of a dataset.
for x, y in dataset:
with tf.GradientTape() as tape:
logits = model(x)
# Compute the loss value for this batch.
loss_value = loss_fn(y, logits)
# Update the weights of the model to minimize the loss value.
gradients = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```
---
## Creating custom losses
Any callable with the signature `loss_fn(y_true, y_pred)`
that returns an array of losses (one per sample in the input batch) can be passed to `compile()` as a loss.
Note that sample weighting is automatically supported for any such loss.
Here's a simple example:
```python
from keras import ops
def my_loss_fn(y_true, y_pred):
squared_difference = ops.square(y_true - y_pred)
return ops.mean(squared_difference, axis=-1) # Note the `axis=-1`
model.compile(optimizer='adam', loss=my_loss_fn)
```
---
## The `add_loss()` API
Loss functions applied to the output of a model aren't the only way to
create losses.
When writing the `call` method of a custom layer or a subclassed model,
you may want to compute scalar quantities that you want to minimize during
training (e.g. regularization losses). You can use the `add_loss()` layer method
to keep track of such loss terms.
Here's an example of a layer that adds a sparsity regularization loss based on the L2 norm of the inputs:
```python
from keras import ops
class MyActivityRegularizer(keras.layers.Layer):
"""Layer that creates an activity sparsity regularization loss."""
def __init__(self, rate=1e-2):
super().__init__()
self.rate = rate
def call(self, inputs):
# We use `add_loss` to create a regularization loss
# that depends on the inputs.
self.add_loss(self.rate * ops.sum(ops.square(inputs)))
return inputs
```
Loss values added via `add_loss` can be retrieved in the `.losses` list property of any `Layer` or `Model`
(they are recursively retrieved from every underlying layer):
```python
from keras import layers
from keras import ops
class SparseMLP(layers.Layer):
"""Stack of Linear layers with a sparsity regularization loss."""
def __init__(self, output_dim):
super().__init__()
self.dense_1 = layers.Dense(32, activation=ops.relu)
self.regularization = MyActivityRegularizer(1e-2)
self.dense_2 = layers.Dense(output_dim)
def call(self, inputs):
x = self.dense_1(inputs)
x = self.regularization(x)
return self.dense_2(x)
mlp = SparseMLP(1)
y = mlp(ops.ones((10, 10)))
print(mlp.losses) # List containing one float32 scalar
```
These losses are cleared by the top-level layer at the start of each forward pass -- they don't accumulate.
So `layer.losses` always contain only the losses created during the last forward pass.
You would typically use these losses by summing them before computing your gradients when writing a training loop.
```python
# Losses correspond to the *last* forward pass.
mlp = SparseMLP(1)
mlp(ops.ones((10, 10)))
assert len(mlp.losses) == 1
mlp(ops.ones((10, 10)))
assert len(mlp.losses) == 1 # No accumulation.
```
When using `model.fit()`, such loss terms are handled automatically.
When writing a custom training loop, you should retrieve these terms
by hand from `model.losses`, like this:
```python
loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam()
# Iterate over the batches of a dataset.
for x, y in dataset:
with tf.GradientTape() as tape:
# Forward pass.
logits = model(x)
# Loss value for this batch.
loss_value = loss_fn(y, logits)
# Add extra loss terms to the loss value.
loss_value += sum(model.losses)
# Update the weights of the model to minimize the loss value.
gradients = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```
See [the `add_loss()` documentation](/api/layers/base_layer/#add_loss-method) for more details.
| keras-io/templates/api/losses/index.md/0 | {
"file_path": "keras-io/templates/api/losses/index.md",
"repo_id": "keras-io",
"token_count": 2991
} | 121 |
# Keras FAQ
A list of frequently asked Keras questions.
## General questions
- [How can I train a Keras model on multiple GPUs (on a single machine)?](#how-can-i-train-a-keras-model-on-multiple-gpus-on-a-single-machine)
- [How can I train a Keras model on TPU?](#how-can-i-train-a-keras-model-on-tpu)
- [Where is the Keras configuration file stored?](#where-is-the-keras-configuration-file-stored)
- [How to do hyperparameter tuning with Keras?](#how-to-do-hyperparameter-tuning-with-keras)
- [How can I obtain reproducible results using Keras during development?](#how-can-i-obtain-reproducible-results-using-keras-during-development)
- [What are my options for saving models?](#what-are-my-options-for-saving-models)
- [How can I install HDF5 or h5py to save my models?](#how-can-i-install-hdf5-or-h5py-to-save-my-models)
- [How should I cite Keras?](#how-should-i-cite-keras)
## Training-related questions
- [What do "sample", "batch", and "epoch" mean?](#what-do-sample-batch-and-epoch-mean)
- [Why is my training loss much higher than my testing loss?](#why-is-my-training-loss-much-higher-than-my-testing-loss)
- [How can I ensure my training run can recover from program interruptions?](#how-can-i-ensure-my-training-run-can-recover-from-program-interruptions)
- [How can I interrupt training when the validation loss isn't decreasing anymore?](#how-can-i-interrupt-training-when-the-validation-loss-isnt-decreasing-anymore)
- [How can I freeze layers and do fine-tuning?](#how-can-i-freeze-layers-and-do-finetuning)
- [What's the difference between the `training` argument in `call()` and the `trainable` attribute?](#whats-the-difference-between-the-training-argument-in-call-and-the-trainable-attribute)
- [In `fit()`, how is the validation split computed?](#in-fit-how-is-the-validation-split-computed)
- [In `fit()`, is the data shuffled during training?](#in-fit-is-the-data-shuffled-during-training)
- [What's the recommended way to monitor my metrics when training with `fit()`?](#whats-the-recommended-way-to-monitor-my-metrics-when-training-with-fit)
- [What if I need to customize what `fit()` does?](#what-if-i-need-to-customize-what-fit-does)
- [What's the difference between `Model` methods `predict()` and `__call__()`?](#whats-the-difference-between-model-methods-predict-and-call)
## Modeling-related questions
- [How can I obtain the output of an intermediate layer (feature extraction)?](#how-can-i-obtain-the-output-of-an-intermediate-layer-feature-extraction)
- [How can I use pre-trained models in Keras?](#how-can-i-use-pre-trained-models-in-keras)
- [How can I use stateful RNNs?](#how-can-i-use-stateful-rnns)
---
## General questions
### How can I train a Keras model on multiple GPUs (on a single machine)?
There are two ways to run a single model on multiple GPUs: **data parallelism** and **device parallelism**.
Keras covers both.
For data parallelism, Keras supports the built-in data parallel distribution APIs of
JAX, TensorFlow, and PyTorch. See the following guides:
- [Multi-GPU distributed training with JAX](/guides/distributed_training_with_jax/)
- [Multi-GPU distributed training with TensorFlow](/guides/distributed_training_with_tensorflow/)
- [Multi-GPU distributed training with PyTorch](/guides/distributed_training_with_torch/)
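
As a quick illustration of the data-parallel pattern with the TensorFlow backend,
a minimal setup typically looks like the sketch below (see the linked guide for a
complete, real-world example):

```python
import numpy as np
import tensorflow as tf
import keras

# Mirrors the model across all available GPUs on the machine.
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile the model inside the strategy scope.
    model = keras.Sequential([keras.layers.Dense(10)])
    model.compile(optimizer="adam", loss="mse")

# Dummy data for illustration; each batch is split across the devices.
x = np.random.random((64, 8))
y = np.random.random((64, 10))
model.fit(x, y, epochs=2, batch_size=16)
```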
For model parallelism, Keras has its own distribution API, which is currently only supported by the JAX backend.
See [the documentation for the `LayoutMap` API](/api/distribution/).
---
### How can I train a Keras model on TPU?
TPUs are a fast & efficient hardware accelerator for deep learning that is publicly available on Google Cloud.
You can use TPUs via Colab, Kaggle notebooks, and GCP Deep Learning VMs (provided the `TPU_NAME` environment variable is set on the VM).
All Keras backends (JAX, TensorFlow, PyTorch) are supported on TPU, but we recommend JAX or TensorFlow in this case.
**Using JAX:**
When connected to a TPU runtime, just insert this code snippet before model construction:
```python
import jax
distribution = keras.distribution.DataParallel(devices=jax.devices())
keras.distribution.set_distribution(distribution)
```
**Using TensorFlow:**
When connected to a TPU runtime, use `TPUClusterResolver` to detect the TPU.
Then, create `TPUStrategy` and construct your model in the strategy scope:
```python
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
print("Device:", tpu.master())
strategy = tf.distribute.TPUStrategy(tpu)
except:
strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
with strategy.scope():
# Create your model here.
...
```
Importantly, you should:
- Make sure you are able to read your data fast enough to keep the TPU utilized.
- Consider running multiple steps of gradient descent per graph execution in order to keep the TPU utilized.
You can do this via the `steps_per_execution` argument to `compile()` (see the sketch below). It will yield a significant speed up for small models.
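
For example, a minimal sketch (the value `50` is arbitrary, and the model is
assumed to have been built inside the strategy scope as above):

```python
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    # Run 50 training steps per graph execution to keep the TPU busy.
    steps_per_execution=50,
)
```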
---
### Where is the Keras configuration file stored?
The default directory where all Keras data is stored is:
`$HOME/.keras/`
For instance, for me, on a MacBook Pro, it's `/Users/fchollet/.keras/`.
Note that Windows users should replace `$HOME` with `%USERPROFILE%`.
In case Keras cannot create the above directory (e.g. due to permission issues), `/tmp/.keras/` is used as a backup.
The Keras configuration file is a JSON file stored at `$HOME/.keras/keras.json`. The default configuration file looks like this:
```
{
"image_data_format": "channels_last",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "tensorflow"
}
```
It contains the following fields:
- The image data format to be used as default by image processing layers and utilities (either `channels_last` or `channels_first`).
- The `epsilon` numerical fuzz factor to be used to prevent division by zero in some operations.
- The default float data type.
- The default backend. It can be one of `"jax"`, `"tensorflow"`, `"torch"`, or `"numpy"`.
Likewise, cached dataset files, such as those downloaded with [`get_file()`](/utils/#get_file), are stored by default in `$HOME/.keras/datasets/`,
and cached model weights files from Keras Applications are stored by default in `$HOME/.keras/models/`.
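
Note that these configuration values can also be inspected (and some of them
overridden) at runtime via the `keras.config` namespace, e.g.:

```python
import keras

print(keras.config.backend())            # e.g. "tensorflow"
print(keras.config.floatx())             # e.g. "float32"
print(keras.config.image_data_format())  # e.g. "channels_last"

# The default float type and image data format can be changed at runtime:
keras.config.set_floatx("float64")
keras.config.set_image_data_format("channels_first")
```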
---
### How to do hyperparameter tuning with Keras?
We recommend using [KerasTuner](https://keras.io/keras_tuner/).
---
### How can I obtain reproducible results using Keras during development?
There are four sources of randomness to consider:
1. Keras itself (e.g. `keras.random` ops or random layers from `keras.layers`).
2. The current Keras backend (e.g. JAX, TensorFlow, or PyTorch).
3. The Python runtime.
4. The CUDA runtime. When running on a GPU, some operations have non-deterministic outputs. This is due to the fact that GPUs run many operations in parallel, so the order of execution is not always guaranteed. Due to the limited precision of floats, even adding several numbers together may give slightly different results depending on the order in which you add them.
To make both Keras and the current backend framework deterministic, use this:
```python
keras.utils.set_random_seed(1337)
```
To make Python deterministic, you need to set the `PYTHONHASHSEED` environment variable to `0` before the program starts (not within the program itself). This is necessary in Python 3.2.3 onwards to have reproducible behavior for certain hash-based operations (e.g., the item order in a set or a dict, see [Python's documentation](https://docs.python.org/3.7/using/cmdline.html#envvar-PYTHONHASHSEED)).
To make the CUDA runtime deterministic: if using the TensorFlow backend, call `tf.config.experimental.enable_op_determinism`. Note that this will have a performance cost. What to do for other backends may vary -- check the documentation of your backend framework directly.
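
Putting it together for the TensorFlow backend, a typical sketch looks like this
(the seed value is arbitrary, and `PYTHONHASHSEED` must still be set in the
environment before launching Python):

```python
import tensorflow as tf
import keras

# Seeds Python, NumPy, and the backend framework.
keras.utils.set_random_seed(1337)

# Makes GPU ops deterministic on the TensorFlow backend (at a performance cost).
tf.config.experimental.enable_op_determinism()
```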
---
### What are my options for saving models?
*Note: it is not recommended to use pickle or cPickle to save a Keras model.*
**1) Whole-model saving (configuration + weights)**
Whole-model saving means creating a file that will contain:
- the architecture of the model, allowing you to re-create the model
- the weights of the model
- the training configuration (loss, optimizer)
- the state of the optimizer, allowing you to resume training exactly where you left off.
The default and recommended way to save a whole model is to just do: `model.save(your_file_path.keras)`.
After saving a model in either format, you can reinstantiate it via `model = keras.models.load_model(your_file_path.keras)`.
**Example:**
```python
from keras.saving import load_model
model.save('my_model.keras')
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.keras')
```
**2) Weights-only saving**
If you need to save the **weights of a model**, you can do so in HDF5 with the code below, using the file extension `.weights.h5`:
```python
model.save_weights('my_model.weights.h5')
```
Assuming you have code for instantiating your model, you can then load the weights you saved into a model with the *same* architecture:
```python
model.load_weights('my_model.weights.h5')
```
If you need to load the weights into a *different* architecture (with some layers in common), for instance for fine-tuning or transfer-learning, you can load them by *layer name*:
```python
model.load_weights('my_model.weights.h5', by_name=True)
```
Example:
```python
"""
Assuming the original model looks like this:
model = Sequential()
model.add(Dense(2, input_dim=3, name='dense_1'))
model.add(Dense(3, name='dense_2'))
...
model.save_weights(fname)
"""
# new model
model = Sequential()
model.add(Dense(2, input_dim=3, name='dense_1')) # will be loaded
model.add(Dense(10, name='new_dense')) # will not be loaded
# load weights from the first model; will only affect the first layer, dense_1.
model.load_weights(fname, by_name=True)
```
See also [How can I install HDF5 or h5py to save my models?](#how-can-i-install-hdf5-or-h5py-to-save-my-models) for instructions on how to install `h5py`.
**3) Configuration-only saving (serialization)**
If you only need to save the **architecture of a model**, and not its weights or its training configuration, you can do:
```python
# save as JSON
json_string = model.to_json()
```
The generated JSON file is human-readable and can be manually edited if needed.
You can then build a fresh model from this data:
```python
# model reconstruction from JSON:
from keras.models import model_from_json
model = model_from_json(json_string)
```
**4) Handling custom layers (or other custom objects) in saved models**
If the model you want to load includes custom layers or other custom classes or functions,
you can pass them to the loading mechanism via the `custom_objects` argument:
```python
from keras.models import load_model
# Assuming your model includes instance of an "AttentionLayer" class
model = load_model('my_model.h5', custom_objects={'AttentionLayer': AttentionLayer})
```
Alternatively, you can use a [custom object scope](https://keras.io/utils/#customobjectscope):
```python
from keras.utils import CustomObjectScope
with CustomObjectScope({'AttentionLayer': AttentionLayer}):
model = load_model('my_model.h5')
```
Custom objects handling works the same way for `load_model` & `model_from_json`:
```python
from keras.models import model_from_json
model = model_from_json(json_string, custom_objects={'AttentionLayer': AttentionLayer})
```
---
### How can I install HDF5 or h5py to save my models?
In order to save your Keras models as HDF5 files, Keras uses the h5py Python package. It is
a dependency of Keras and should be installed by default. On Debian-based
distributions, you will have to additionally install `libhdf5`:
<div class="k-default-code-block">
```
sudo apt-get install libhdf5-serial-dev
```
</div>
If you are unsure whether h5py is installed, you can open a Python shell and load the
module via
```
import h5py
```
If it imports without error it is installed, otherwise you can find
[detailed installation instructions here](http://docs.h5py.org/en/latest/build.html).
---
### How should I cite Keras?
Please cite Keras in your publications if it helps your research. Here is an example BibTeX entry:
<code style="color: gray;">
@misc{chollet2015keras,<br>
title={Keras},<br>
author={Chollet, Fran\c{c}ois and others},<br>
year={2015},<br>
howpublished={\url{https://keras.io}},<br>
}
</code>
---
## Training-related questions
### What do "sample", "batch", and "epoch" mean?
Below are some common definitions that are necessary to know and understand to correctly utilize Keras `fit()`:
- **Sample**: one element of a dataset. For instance, one image is a **sample** in a convolutional network. One audio snippet is a **sample** for a speech recognition model.
- **Batch**: a set of *N* samples. The samples in a **batch** are processed independently, in parallel. If training, a batch results in only one update to the model. A **batch** generally approximates the distribution of the input data better than a single input. The larger the batch, the better the approximation; however, it is also true that the batch will take longer to process and will still result in only one update. For inference (evaluate/predict), it is recommended to pick a batch size that is as large as you can afford without going out of memory (since larger batches will usually result in faster evaluation/prediction).
- **Epoch**: an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation.
When using `validation_data` or `validation_split` with the `fit` method of Keras models, evaluation will be run at the end of every **epoch**.
Within Keras, there is the ability to add [callbacks](/api/callbacks/) specifically designed to be run at the end of an **epoch**. Examples of these are learning rate changes and model checkpointing (saving).
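For instance, a minimal sketch of wiring two such callbacks into `fit()` (the file path and hyperparameters are illustrative, and `model`, `x`, `y` are assumed to be defined):
```python
import keras
callbacks = [
    # Save the best model seen so far at the end of every epoch.
    keras.callbacks.ModelCheckpoint(filepath="best_model.keras", save_best_only=True),
    # Reduce the learning rate when the validation loss plateaus.
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
]
model.fit(x, y, validation_split=0.2, epochs=20, callbacks=callbacks)
```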
---
### Why is my training loss much higher than my testing loss?
A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time.
They are reflected in the training time loss but not in the test time loss.
Besides, the training loss that Keras displays is the average of the losses for each batch of training data, **over the current epoch**.
Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches.
This can bring the epoch-wise average down.
On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
---
### How can I ensure my training run can recover from program interruptions?
To ensure the ability to recover from an interrupted training run at any time (fault tolerance),
you should use a `keras.callbacks.BackupAndRestore` callback that regularly saves your training progress,
including the epoch number and weights, to disk, and loads it the next time you call `Model.fit()`.
```python
import numpy as np
import keras
class InterruptingCallback(keras.callbacks.Callback):
"""A callback to intentionally introduce interruption to training."""
def on_epoch_end(self, epoch, log=None):
if epoch == 15:
raise RuntimeError('Interruption')
model = keras.Sequential([keras.layers.Dense(10)])
optimizer = keras.optimizers.SGD()
model.compile(optimizer, loss="mse")
x = np.random.random((24, 10))
y = np.random.random((24,))
backup_callback = keras.callbacks.experimental.BackupAndRestore(
backup_dir='/tmp/backup')
try:
model.fit(x, y, epochs=20, steps_per_epoch=5,
callbacks=[backup_callback, InterruptingCallback()])
except RuntimeError:
print('***Handling interruption***')
# This continues at the epoch where it left off.
model.fit(x, y, epochs=20, steps_per_epoch=5,
callbacks=[backup_callback])
```
Find out more in the [callbacks documentation](/api/callbacks/).
---
### How can I interrupt training when the validation loss isn't decreasing anymore?
You can use an `EarlyStopping` callback:
```python
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
model.fit(x, y, validation_split=0.2, callbacks=[early_stopping])
```
Find out more in the [callbacks documentation](/api/callbacks/).
---
### How can I freeze layers and do fine-tuning?
**Setting the `trainable` attribute**
All layers & models have a `layer.trainable` boolean attribute:
```shell
>>> layer = Dense(3)
>>> layer.trainable
True
```
On all layers & models, the `trainable` attribute can be set (to True or False).
When set to `False`, the `layer.trainable_weights` attribute is empty:
```python
>>> layer = Dense(3)
>>> layer.build(input_shape=(None, 3)) # Create the weights of the layer
>>> layer.trainable
True
>>> layer.trainable_weights
[<KerasVariable shape=(3, 3), dtype=float32, path=dense/kernel>, <KerasVariable shape=(3,), dtype=float32, path=dense/bias>]
>>> layer.trainable = False
>>> layer.trainable_weights
[]
```
Setting the `trainable` attribute on a layer recursively sets it on all children layers (contents of `self.layers`).
**1) When training with `fit()`:**
To do fine-tuning with `fit()`, you would:
- Instantiate a base model and load pre-trained weights
- Freeze that base model
- Add trainable layers on top
- Call `compile()` and `fit()`
Like this:
```python
model = Sequential([
ResNet50Base(input_shape=(32, 32, 3), weights='pretrained'),
Dense(10),
])
model.layers[0].trainable = False # Freeze ResNet50Base.
assert model.layers[0].trainable_weights == [] # ResNet50Base has no trainable weights.
assert len(model.trainable_weights) == 2 # Just the bias & kernel of the Dense layer.
model.compile(...)
model.fit(...) # Train Dense while excluding ResNet50Base.
```
You can follow a similar workflow with the Functional API or the model subclassing API.
Make sure to call `compile()` *after* changing the value of `trainable` in order for your
changes to be taken into account. Calling `compile()` will freeze the state of the training step of the model.
**2) When using a custom training loop:**
When writing a training loop, make sure to only update
weights that are part of `model.trainable_weights` (and not all `model.weights`).
Here's a simple TensorFlow example:
```python
model = Sequential([
ResNet50Base(input_shape=(32, 32, 3), weights='pretrained'),
Dense(10),
])
model.layers[0].trainable = False # Freeze ResNet50Base.
# Iterate over the batches of a dataset.
for inputs, targets in dataset:
# Open a GradientTape.
with tf.GradientTape() as tape:
# Forward pass.
predictions = model(inputs)
# Compute the loss value for this batch.
loss_value = loss_fn(targets, predictions)
# Get gradients of loss wrt the *trainable* weights.
gradients = tape.gradient(loss_value, model.trainable_weights)
# Update the weights of the model.
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```
**Interaction between `trainable` and `compile()`**
Calling `compile()` on a model is meant to "freeze" the behavior of that model. This implies that the `trainable`
attribute values at the time the model is compiled should be preserved throughout the lifetime of that model,
until `compile` is called again. Hence, if you change `trainable`, make sure to call `compile()` again on your
model for your changes to be taken into account.
For instance, if two models A & B share some layers, and:
- Model A gets compiled
- The `trainable` attribute value on the shared layers is changed
- Model B is compiled
Then model A and B are using different `trainable` values for the shared layers. This mechanism is
critical for most existing GAN implementations, which do:
```python
discriminator.compile(...) # the weights of `discriminator` should be updated when `discriminator` is trained
discriminator.trainable = False
gan.compile(...) # `discriminator` is a submodel of `gan`, which should not be updated when `gan` is trained
```
---
### What's the difference between the `training` argument in `call()` and the `trainable` attribute?
`training` is a boolean argument in `call` that determines whether the call
should be run in inference mode or training mode. For example, in training mode,
a `Dropout` layer applies random dropout and rescales the output. In inference mode, the same
layer does nothing. Example:
```python
y = Dropout(0.5)(x, training=True) # Applies dropout at training time *and* inference time
```
`trainable` is a boolean layer attribute that determines whether the trainable weights
of the layer should be updated to minimize the loss during training. If `layer.trainable` is set to `False`,
then `layer.trainable_weights` will always be an empty list. Example:
```python
model = Sequential([
ResNet50Base(input_shape=(32, 32, 3), weights='pretrained'),
Dense(10),
])
model.layers[0].trainable = False # Freeze ResNet50Base.
assert model.layers[0].trainable_weights == [] # ResNet50Base has no trainable weights.
assert len(model.trainable_weights) == 2 # Just the bias & kernel of the Dense layer.
model.compile(...)
model.fit(...) # Train Dense while excluding ResNet50Base.
```
As you can see, "inference mode vs training mode" and "layer weight trainability" are two very different concepts.
You could imagine the following: a dropout layer where the scaling factor is learned during training, via
backpropagation. Let's name it `AutoScaleDropout`.
This layer would have simultaneously a trainable state, and a different behavior in inference and training.
Because the `trainable` attribute and the `training` call argument are independent, you can do the following:
```python
layer = AutoScaleDropout(0.5)
# Applies dropout at training time *and* inference time
# *and* learns the scaling factor during training
y = layer(x, training=True)
assert len(layer.trainable_weights) == 1
```
```python
# Applies dropout at training time *and* inference time
# with a *frozen* scaling factor
layer = AutoScaleDropout(0.5)
layer.trainable = False
y = layer(x, training=True)
```
***Special case of the `BatchNormalization` layer***
For a `BatchNormalization` layer, setting `bn.trainable = False` will also make its `training` call argument
default to `False`, meaning that the layer will not update its state during training.
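A minimal illustration of this special case (sketch):
```python
bn = keras.layers.BatchNormalization()
bn.trainable = False
# Inside fit(), this layer now runs in inference mode: it normalizes with its
# moving statistics and does not update them, in addition to freezing gamma and beta.
```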
This behavior only applies for `BatchNormalization`. For every other layer, weight trainability and
"inference vs training mode" remain independent.
---
### In `fit()`, how is the validation split computed?
If you set the `validation_split` argument in `model.fit` to e.g. 0.1, then the validation data used will be the *last 10%* of the data. If you set it to 0.25, it will be the last 25% of the data, etc. Note that the data isn't shuffled before extracting the validation split, so the validation is literally just the *last* x% of samples in the input you passed.
The same validation set is used for all epochs (within the same call to `fit`).
Note that the `validation_split` option is only available if your data is passed as Numpy arrays (not `tf.data.Datasets`, which are not indexable).
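For illustration (shapes and hyperparameters are made up, and `model` is assumed to be a compiled model):
```python
import numpy as np
x = np.random.random((1000, 20))
y = np.random.randint(2, size=(1000, 1))
# The last 200 samples (20%) of x and y are used for validation,
# in the order in which they appear in the arrays.
model.fit(x, y, epochs=5, validation_split=0.2)
```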
---
### In `fit()`, is the data shuffled during training?
If you pass your data as NumPy arrays and if the `shuffle` argument in `model.fit()` is set to `True` (which is the default), the training data will be globally randomly shuffled at each epoch.
If you pass your data as a `tf.data.Dataset` object and if the `shuffle` argument in `model.fit()` is set to `True`, the dataset will be locally shuffled (buffered shuffling).
When using `tf.data.Dataset` objects, prefer shuffling your data beforehand (e.g. by calling `dataset = dataset.shuffle(buffer_size)`) so as to be in control of the buffer size.
Validation data is never shuffled.
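For example, a sketch of buffered shuffling controlled explicitly (the buffer size and batch size are illustrative, and `features`, `labels`, `model` are assumed to exist):
```python
import tensorflow as tf
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
# Shuffle with an explicit buffer before batching, so you control how
# "global" the shuffling is, then let fit() iterate over the result.
dataset = dataset.shuffle(buffer_size=1024).batch(32)
model.fit(dataset, epochs=5)
```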
---
### What's the recommended way to monitor my metrics when training with `fit()`?
Loss values and metric values are reported via the default progress bar displayed by calls to `fit()`.
However, staring at changing ascii numbers in a console is not an optimal metric-monitoring experience.
We recommend the use of [TensorBoard](https://www.tensorflow.org/tensorboard), which will display nice-looking graphs of your training and validation metrics, regularly
updated during training, which you can access from your browser.
You can use TensorBoard with `fit()` via the [`TensorBoard` callback](/api/callbacks/tensorboard/).
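For example (the log directory is an arbitrary choice, and `model`, `x`, `y` are assumed to be defined):
```python
from keras.callbacks import TensorBoard
tensorboard = TensorBoard(log_dir="./logs")
model.fit(x, y, validation_split=0.2, epochs=10, callbacks=[tensorboard])
# Then, from a terminal: tensorboard --logdir=./logs
```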
---
### What if I need to customize what `fit()` does?
You have two options:
**1) Subclass the `Model` class and override the `train_step` (and `test_step`) methods**
This is a better option if you want to use custom update rules but still want to leverage the functionality provided by `fit()`,
such as callbacks, efficient step fusing, etc.
Note that this pattern does not prevent you from building models with the
Functional API, in which case you will use the class you created to instantiate
the model with the `inputs` and `outputs`. Same goes for Sequential models, in
which case you will subclass `keras.Sequential` and override its `train_step`
instead of `keras.Model`.
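As a rough TensorFlow-backend sketch of the pattern (loss and metric handling differ per backend; the guides linked below give the authoritative recipes):
```python
import tensorflow as tf
import keras
class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compute_loss(y=y, y_pred=y_pred)
        # Compute and apply gradients with the optimizer set in compile().
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        # Update the metrics configured in compile() and report them.
        for metric in self.metrics:
            if metric.name == "loss":
                metric.update_state(loss)
            else:
                metric.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```
You then compile and fit this model exactly as you would any other Keras model.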
See the following guides:
- [Writing a custom train step in JAX](/guides/custom_train_step_in_jax/)
- [Writing a custom train step in TensorFlow](/guides/custom_train_step_in_tensorflow/)
- [Writing a custom train step in PyTorch](/guides/custom_train_step_in_torch/)
**2) Write a low-level custom training loop**
This is a good option if you want to be in control of every last little detail -- though it can be somewhat verbose.
See the following guides:
- [Writing a custom training loop in JAX](/guides/writing_a_custom_training_loop_in_jax/)
- [Writing a custom training loop in TensorFlow](/guides/writing_a_custom_training_loop_in_tensorflow/)
- [Writing a custom training loop in PyTorch](/guides/writing_a_custom_training_loop_in_torch/)
---
### What's the difference between `Model` methods `predict()` and `__call__()`?
Let's answer with an extract from
[Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=keras):
> Both `y = model.predict(x)` and `y = model(x)` (where `x` is an array of input data)
> mean "run the model on `x` and retrieve the output `y`." Yet they aren't exactly
> the same thing.
> `predict()` loops over the data in batches
> (in fact, you can specify the batch size via `predict(x, batch_size=64)`),
> and it extracts the NumPy value of the outputs. It's schematically equivalent to this:
```python
def predict(x):
y_batches = []
for x_batch in get_batches(x):
        y_batch = model(x_batch).numpy()
y_batches.append(y_batch)
return np.concatenate(y_batches)
```
> This means that `predict()` calls can scale to very large arrays. Meanwhile,
> `model(x)` happens in-memory and doesn't scale.
> On the other hand, `predict()` is not differentiable: you cannot retrieve its gradient
> if you call it in a `GradientTape` scope.
> You should use `model(x)` when you need to retrieve the gradients of the model call,
> and you should use `predict()` if you just need the output value. In other words,
> always use `predict()` unless you're in the middle of writing a low-level gradient
> descent loop (as we are now).
---
## Modeling-related questions
### How can I obtain the output of an intermediate layer (feature extraction)?
In the Functional API and Sequential API, if a layer has been called exactly once, you can retrieve its output via `layer.output` and its input via `layer.input`.
This enables you to quickly instantiate feature-extraction models, like this one:
```python
import keras
from keras import layers
model = Sequential([
layers.Conv2D(32, 3, activation='relu'),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(2),
layers.Conv2D(32, 3, activation='relu'),
layers.Conv2D(32, 3, activation='relu'),
layers.GlobalMaxPooling2D(),
layers.Dense(10),
])
extractor = keras.Model(inputs=model.inputs,
outputs=[layer.output for layer in model.layers])
features = extractor(data)
```
Naturally, this is not possible with models that are subclasses of `Model` that override `call`.
Here's another example: instantiating a `Model` that returns the output of a specific named layer:
```python
model = ... # create the original model
layer_name = 'my_layer'
intermediate_layer_model = keras.Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model(data)
```
---
### How can I use pre-trained models in Keras?
You could leverage the [models available in `keras.applications`](/api/applications/),
or the models available in [KerasCV](/keras_cv/) and [KerasNLP](/keras_nlp/).
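For instance, a sketch of loading an ImageNet-pretrained classifier from `keras.applications` (the random array merely stands in for a real, properly preprocessed batch of images):
```python
import keras
import numpy as np
# Downloads ImageNet-pretrained weights on first use.
model = keras.applications.ResNet50(weights="imagenet")
images = np.random.random((1, 224, 224, 3))  # placeholder batch of one 224x224 RGB image
predictions = model.predict(images)          # shape (1, 1000): ImageNet class scores
```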
---
### How can I use stateful RNNs?
Making a RNN stateful means that the states for the samples of each batch will be reused as initial states for the samples in the next batch.
When using stateful RNNs, it is therefore assumed that:
- all batches have the same number of samples
- If `x1` and `x2` are successive batches of samples, then `x2[i]` is the follow-up sequence to `x1[i]`, for every `i`.
To use statefulness in RNNs, you need to:
- explicitly specify the batch size you are using, by passing a `batch_size` argument to the first layer in your model. E.g. `batch_size=32` for a 32-samples batch of sequences of 10 timesteps with 16 features per timestep.
- set `stateful=True` in your RNN layer(s).
- specify `shuffle=False` when calling `fit()`.
To reset the states accumulated:
- use `model.reset_states()` to reset the states of all layers in the model
- use `layer.reset_states()` to reset the states of a specific stateful RNN layer
Example:
```python
import keras
from keras import layers
import numpy as np
x = np.random.random((32, 21, 16)) # this is our input data, of shape (32, 21, 16)
# we will feed it to our model in sequences of length 10
model = keras.Sequential()
model.add(layers.LSTM(32, input_shape=(10, 16), batch_size=32, stateful=True))
model.add(layers.Dense(16, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# we train the network to predict the 11th timestep given the first 10:
model.train_on_batch(x[:, :10, :], np.reshape(x[:, 10, :], (32, 16)))
# the state of the network has changed. We can feed the follow-up sequences:
model.train_on_batch(x[:, 10:20, :], np.reshape(x[:, 20, :], (32, 16)))
# let's reset the states of the LSTM layer:
model.reset_states()
# another way to do it in this case:
model.layers[0].reset_states()
```
Note that the methods `predict`, `fit`, `train_on_batch`, etc. will *all* update the states of the stateful layers in a model. This allows you to do not only stateful training, but also stateful prediction.
---
| keras-io/templates/getting_started/faq.md/0 | {
"file_path": "keras-io/templates/getting_started/faq.md",
"repo_id": "keras-io",
"token_count": 9443
} | 122 |
body {
font-family: 'Open Sans', sans-serif;
}
h1,
h2,
h3,
h4,
h5,
h6 {
font-family: 'Open Sans', sans-serif;
font-weight: bold;
}
header.masthead {
position: relative;
background-size: cover;
padding-top: 8rem;
padding-bottom: 8rem;
}
b {
font-weight: 400;
font-style: italic;
}
header.masthead .overlay {
position: absolute;
height: 100%;
width: 100%;
top: 0;
left: 0;
opacity: 0.3;
}
@media (min-width: 768px) {
header.masthead {
padding-top: 4rem;
padding-bottom: 4rem;
}
header.masthead h1 {
font-size: 2.2rem;
}
}
a {
color: #d00000;
}
a:hover {
color: #ff0000;
}
.smooth-white-bg a {
color: #d00000;
}
.smooth-black-bg a {
color: #ff6666;
}
.btn {
margin-top: 1rem;
margin-bottom: 1rem;
}
.btn-primary {
color: #fff !important;
background-color: #d00000;
border-color: #8a0000;
}
.btn-primary:hover {
color: #fff;
background-color: #ff0000;
border-color: #8a0000;
}
.btn-primary:focus, .btn-primary.focus {
box-shadow: 0 0 0 0.2rem rgba(38, 143, 255, 0.5);
}
.btn-primary.disabled, .btn-primary:disabled {
color: #fff;
background-color: #d00000;
border-color: #8a0000;
}
.btn-primary:not(:disabled):not(.disabled):active, .btn-primary:not(:disabled):not(.disabled).active,
.show > .btn-primary.dropdown-toggle {
color: #fff;
background-color: #ff0000;
border-color: #8a0000;
}
.btn-primary:not(:disabled):not(.disabled):active:focus, .btn-primary:not(:disabled):not(.disabled).active:focus,
.show > .btn-primary.dropdown-toggle:focus {
box-shadow: 0 0 0 0.2rem rgba(38, 143, 255, 0.5);
}
.btn-secondary {
background-color: white;
color: #3f3f3f;
border-color: #bbb;
}
.btn-secondary:hover {
color: #3f3f3f;
background-color: #f0f0f0;
border-color: #bbb;
}
.btn-secondary:focus, .btn-secondary.focus {
box-shadow: 0 0 0 0.2rem rgba(130, 138, 145, 0.5);
}
.btn-secondary.disabled, .btn-secondary:disabled {
color: #3f3f3f;
background-color: #f0f0f0;
border-color: #bbb;
}
.btn-secondary:not(:disabled):not(.disabled):active, .btn-secondary:not(:disabled):not(.disabled).active,
.show > .btn-secondary.dropdown-toggle {
color: #3f3f3f;
background-color: #f0f0f0;
border-color: #bbb;
}
.btn-secondary:not(:disabled):not(.disabled):active:focus, .btn-secondary:not(:disabled):not(.disabled).active:focus,
.show > .btn-secondary.dropdown-toggle:focus {
box-shadow: 0 0 0 0.2rem rgba(130, 138, 145, 0.5);
}
.logo {
width: 28rem;
max-width: 100%;
margin-bottom: 2rem;
}
.smooth-black-bg {
background-image: linear-gradient(to right, black, #3f3f3f, #3f3f3f, black);
color: white;
}
.smooth-white-bg {
background-image: linear-gradient(to right, #e6e6e6, white, white, #e6e6e6);
color: #3f3f3f;
}
.showcase .showcase-text {
padding: 3rem;
}
.showcase .showcase-img {
min-height: 20rem;
background-size: cover;
}
@media (min-width: 768px) {
.showcase .showcase-text {
padding: 4rem;
}
}
.lead {
font-size: 1.2rem;
font-weight: 100;
}
.testimonials {
padding-top: 4rem;
padding-bottom: 4rem;
}
.testimonials .testimonial-item {
max-width: 18rem;
}
.testimonials .testimonial-item img {
max-width: 12rem;
box-shadow: 0px 5px 5px 0px #adb5bd;
}
.quote-content {
font-style: italic;
}
.quote-title {
font-size: 1rem;
}
.quote-name {
font-size: 1.2rem;
font-weight: bold;
}
.testimonial-item h5 {
margin-top: 1em;
}
footer {
float: left;
width: 100%;
padding: 1em;
border-top: solid 1px #bbb;
}
.bottom-border {
border-bottom: solid 0.7em black;
}
#announcement-box {
text-shadow: 1px 2px black;
font-size: 2.2rem;
}
#announcement-link {
font-size: 2.7rem;
} | keras-io/theme/css/landing.css/0 | {
"file_path": "keras-io/theme/css/landing.css",
"repo_id": "keras-io",
"token_count": 1590
} | 123 |
FROM mcr.microsoft.com/vscode/devcontainers/python:3.9
COPY setup.sh /setup.sh
| keras-nlp/.devcontainer/Dockerfile/0 | {
"file_path": "keras-nlp/.devcontainer/Dockerfile",
"repo_id": "keras-nlp",
"token_count": 34
} | 124 |
# API Design Guide
Before reading this document, please read the
[Keras API design guidelines](https://github.com/keras-team/governance/blob/master/keras_api_design_guidelines.md).
Below are some design considerations specific to KerasNLP.
## Philosophy
- **Let user needs be our compass.** Any modular building block that NLP
practitioners need is in scope, whether it's data loading, augmentation, model
building, evaluation metrics, or visualization utils.
- **Be resolutely high-level.** Even if something is easy to do by hand in 5
lines, package it as a one liner.
- **Balance ease of use and flexibility.** Simple things should be easy, and
arbitrarily advanced use cases should be possible. There should always be a
"we need to go deeper" path available to our most expert users.
- **Grow as a platform and as a community.** KerasNLP development should be
driven by the community, with feature and release planning happening in
the open on GitHub.
## Avoid new dependencies
The core dependencies of KerasNLP are Keras, NumPy, TensorFlow, and
[Tensorflow Text](https://www.tensorflow.org/text).
We strive to keep KerasNLP as self-contained as possible, and avoid adding
dependencies to projects (for example NLTK or spaCy) for text preprocessing.
In rare cases, particularly with tokenizers and metrics, we may need to add
an external dependency for compatibility with the "canonical" implementation
of a certain technique. In these cases, avoid adding a new package dependency,
and add installation instructions for the specific symbol:
```python
try:
import rouge_score
except ImportError:
rouge_score = None
class Rouge(keras.metrics.Metric):
def __init__(self):
if rouge_score is None:
raise ImportError(
"ROUGE metric requires the `rouge_score` package."
"Please install it with `pip install rouge_score`."
)
```
Additionally, to ensure that unit tests don't fail, please add the corresponding
library to the `extras_require["tests"]` list in `setup.py`.
## Keep computation inside TensorFlow graph
Our layers, metrics, and tokenizers should be fast and efficient, which means
running inside the
[TensorFlow graph](https://www.tensorflow.org/guide/intro_to_graphs)
whenever possible. This means you should be able to annotate a function
calling a layer, metric, or loss with `@tf.function` without running into issues.
[tf.strings](https://www.tensorflow.org/api_docs/python/tf/strings) and
[tf.text](https://www.tensorflow.org/text/api_docs/python/text) provide a large
surface of TensorFlow operations that manipulate strings. If a low-level (C++)
operation we need is missing, we should add it in collaboration with core
TensorFlow or TensorFlow Text. KerasNLP is a Python-only library.
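For instance, a small sanity check of this property, using only `tf.strings` ops so the whole function stays inside the graph (a sketch, not a KerasNLP API):
```python
import tensorflow as tf
@tf.function
def preprocess(inputs):
    # Lowercase and whitespace-split entirely inside the TensorFlow graph.
    return tf.strings.split(tf.strings.lower(inputs))
preprocess(tf.constant(["The quick brown fox.", "KerasNLP"]))
```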
We should also strive to keep computation XLA compilable wherever possible (e.g.
`tf.function(jit_compile=True)`). For trainable modeling components this is
particularly important due to the performance gains offered by XLA. For
preprocessing and postprocessing, XLA compilation is not a requirement.
## Support tf.data for text preprocessing and augmentation
In general, our preprocessing tools should be runnable inside a
[tf.data](https://www.tensorflow.org/guide/data) pipeline, and any augmentation
to training data should be dynamic (runnable on the fly during training rather
than precomputed).
We should design our preprocessing workflows with tf.data in mind, and support
both batched and unbatched data as input to preprocessing layers.
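For example, a sketch of mapping a preprocessing layer over a `tf.data` pipeline (`preprocessing_layer` stands in for any KerasNLP tokenizer or preprocessing layer that accepts string tensors):
```python
import tensorflow as tf
dataset = tf.data.Dataset.from_tensor_slices(["a first sentence", "a second sentence"])
dataset = dataset.batch(2).map(preprocessing_layer, num_parallel_calls=tf.data.AUTOTUNE)
```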
## Prioritize multi-lingual support
We strive to keep KerasNLP a friendly and useful library for speakers of all
languages. In general, prefer designing workflows that are language agnostic,
and do not involve logic (e.g. stemming) that need to be rewritten
per-language.
It is OK for new workflows to not come with out-of-the-box support for all
languages in a first release, but a design that does not include a plan for
multi-lingual support will be rejected.
| keras-nlp/API_DESIGN_GUIDE.md/0 | {
"file_path": "keras-nlp/API_DESIGN_GUIDE.md",
"repo_id": "keras-nlp",
"token_count": 1110
} | 125 |
# BERT with KerasNLP
This example demonstrates how to train a Bidirectional Encoder
Representations from Transformers (BERT) model end-to-end using the KerasNLP
library. This README contains instructions on how to run pretraining directly
from raw data, followed by finetuning and evaluation on the GLUE dataset.
## Quickly test out the code
To exercise the code in this directory by training a tiny BERT model, you can
run the following commands from the base directory of the repository. This can
be useful to validate any code changes, but note that a useful BERT model would
need to be trained for much longer on a much larger dataset.
```shell
OUTPUT_DIR=~/bert_test_output
DATA_URL=https://storage.googleapis.com/tensorflow/keras-nlp/examples/bert
# Download example data.
wget ${DATA_URL}/bert_vocab_uncased.txt -O $OUTPUT_DIR/bert_vocab_uncased.txt
wget ${DATA_URL}/wiki_example_data.txt -O $OUTPUT_DIR/wiki_example_data.txt
# Parse input data and split into sentences.
python3 examples/tools/split_sentences.py \
--input_files $OUTPUT_DIR/wiki_example_data.txt \
--output_directory $OUTPUT_DIR/sentence-split-data
# Preprocess input for pretraining.
python3 examples/bert_pretraining/bert_create_pretraining_data.py \
--input_files $OUTPUT_DIR/sentence-split-data/ \
--vocab_file $OUTPUT_DIR/bert_vocab_uncased.txt \
--output_file $OUTPUT_DIR/pretraining-data/pretraining.tfrecord
# Run pretraining for 100 train steps only.
python3 examples/bert_pretraining/bert_pretrain.py \
--input_directory $OUTPUT_DIR/pretraining-data/ \
--vocab_file $OUTPUT_DIR/bert_vocab_uncased.txt \
--saved_model_output $OUTPUT_DIR/model/ \
--num_train_steps 100
```
## Installing dependencies
This example needs a few extra dependencies to run (e.g. wikiextractor for
using wikipedia downloads). You can install these into a KerasNLP development
environment with:
```shell
pip install -r "examples/bert_pretraining/requirements.txt"
```
## Pretraining BERT
Training a BERT model happens in two stages. First, the model is "pretrained" on
a large corpus of input text. This is computationally expensive. After
pretraining, the model can be "finetuned" on a downstream task with a much
smaller amount of labeled data.
### Downloading pretraining data
The pretraining data (Wikipedia + BooksCorpus) is fairly large. The raw
input data takes roughly ~20GB of space, and after preprocessing, the full
corpus will take ~400GB.
The latest wikipedia dump can be downloaded
[at this link](https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2),
or via command line:
```shell
curl -O https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
```
The dump can be extracted with the `wikiextractor` tool.
```shell
python3 -m wikiextractor.WikiExtractor enwiki-latest-pages-articles.xml.bz2
```
BooksCorpus is no longer hosted by
[its creators](https://yknzhu.wixsite.com/mbweb), but you can find instructions
for downloading or reproducing the corpus in
[this repository](https://github.com/soskek/bookcorpus). We suggest the pre-made file
downloads listed at the top of the README. Alternatively, you can forgo it
entirely and pretrain solely on wikipedia.
Preparing the pretraining data will happen in two stages. First, raw text needs
to be split into lists of sentences per document. Second, this sentence split
data is used to create training examples with both masked words and
next sentence predictions.
### Splitting raw text into sentences
Next, use `examples/tools/split_sentences.py` to process raw input files and
split them into output files where each line contains a sentence, and a blank
line marks the start of a new document. We need this for the next-sentence
prediction task used by BERT.
For example, if Wikipedia files are located in `~/datasets/wikipedia` and
bookscorpus in `~/datasets/bookscorpus`, the following command will output
sentence split documents to a configurable number of output file shards:
```shell
python3 examples/tools/split_sentences.py \
--input_files ~/datasets/wikipedia,~/datasets/bookscorpus \
--output_directory ~/datasets/sentence-split-data
```
### Computing a WordPiece vocabulary
The easiest and best approach when training BERT is to use the official
vocabularies from the original project, which have become somewhat standard.
You can download the English uncased vocabulary
[here](https://storage.googleapis.com/tensorflow/keras-nlp/examples/bert/bert_vocab_uncased.txt),
or in your terminal run:
```shell
curl -O https://storage.googleapis.com/tensorflow/keras-nlp/examples/bert/bert_vocab_uncased.txt
```
You can also use `examples/tools/train_word_piece_vocab.py` to train your own.
### Tokenize, mask, and combine sentences into training examples
The `bert_create_pretraining_data.py` script will take in a set of sentence split files, and
set up training examples for the next sentence prediction and masked word tasks.
The output of the script will be TFRecord files with a number of fields per
example. Below shows a complete output example with the addition of a string
`tokens` field for clarity. The actual script will only serialize the token ids
to conserve disk space.
```python
tokens: ['[CLS]', 'resin', '##s', 'are', 'today', '[MASK]', 'produced', 'by',
'ang', '##ios', '##per', '##ms', ',', 'and', 'tend', 'to', '[SEP]',
'[MASK]', 'produced', 'a', '[MASK]', '[MASK]', 'of', 'resin', ',',
'which', '[MASK]', 'often', 'found', 'as', 'amber', '[SEP]']
input_ids: [101, 24604, 2015, 2024, 2651, 103, 2550, 2011, 17076, 10735, 4842,
5244, 1010, 1998, 7166, 2000, 102, 103, 2550, 1037, 103, 103, 1997,
24604, 1010, 2029, 103, 2411, 2179, 2004, 8994, 102]
input_mask: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
segment_ids: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
masked_lm_positions: [5, 17, 20, 21, 26]
masked_lm_ids: [2069, 3619, 2353, 2828, 2003]
masked_lm_weights: [1.0, 1.0, 1.0, 1.0, 1.0]
next_sentence_labels: [0]
```
In order to set up the next sentence prediction task, the script will load the
entire input into memory. As such, it is recommended to run this script on a
subset of the input data at a time.
For example, you can run the script on each file shard in a directory
with the following:
```shell
for file in path/to/sentence-split-data/*; do
output="path/to/pretraining-data/$(basename -- "$file" .txt).tfrecord"
python3 examples/bert_pretraining/bert_create_pretraining_data.py \
--input_files ${file} \
--vocab_file bert_vocab_uncased.txt \
--output_file ${output}
done
```
If enough memory is available, this could be further sped up by running this script
multiple times in parallel. The following will take 3-4 hours on the entire dataset
on an 8 core machine.
```shell
NUM_JOBS=5
for file in path/to/sentence-split-data/*; do
output="path/to/pretraining-data/$(basename -- "$file" .txt).tfrecord"
echo python3 examples/bert_pretraining/bert_create_pretraining_data.py \
--input_files ${file} \
--vocab_file bert_vocab_uncased.txt \
--output_file ${output}
done | parallel -j ${NUM_JOBS}
```
To preview a sample of generated data files, you can run the command below:
```shell
python3 -c "from examples.utils.data_utils import preview_tfrecord; preview_tfrecord('path/to/tfrecord_file')"
```
### Running BERT pretraining
After preprocessing, we can run pretraining with the `bert_pretrain.py`
script. This will train a model and save it to the `--saved_model_output`
directory. If you want to train from data stored in a Google Cloud Storage (GCS) bucket, you can do so by setting the file path to
the URL of the GCS bucket. For example, `--input_directory=gs://your-bucket-name/your-data-path`. You can also save models directly to GCS with the same approach.
```shell
python3 examples/bert_pretraining/bert_pretrain.py \
--input_directory path/to/data/ \
--vocab_file path/to/bert_vocab_uncased.txt \
--model_size tiny \
--saved_model_output path/to/model/
```
| keras-nlp/examples/bert_pretraining/README.md/0 | {
"file_path": "keras-nlp/examples/bert_pretraining/README.md",
"repo_id": "keras-nlp",
"token_count": 2780
} | 126 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import tensorflow as tf
from keras_nlp.backend import config as backend_config
from keras_nlp.backend import keras
def pytest_addoption(parser):
parser.addoption(
"--run_large",
action="store_true",
default=False,
help="run large tests",
)
parser.addoption(
"--run_extra_large",
action="store_true",
default=False,
help="run extra_large tests",
)
parser.addoption(
"--docstring_module",
action="store",
default="",
help="restrict docs testing to modules whose name matches this flag",
)
parser.addoption(
"--check_gpu",
action="store_true",
default=False,
help="fail if a gpu is not present",
)
def pytest_configure(config):
    # Verify that the device has a GPU and that the backend detects it.
if config.getoption("--check_gpu"):
found_gpu = False
backend = backend_config.backend()
if backend == "jax":
import jax
try:
found_gpu = bool(jax.devices("gpu"))
except RuntimeError:
found_gpu = False
elif backend == "tensorflow":
found_gpu = bool(tf.config.list_logical_devices("GPU"))
elif backend == "torch":
import torch
found_gpu = bool(torch.cuda.device_count())
if not found_gpu:
pytest.fail(f"No GPUs discovered on the {backend} backend.")
config.addinivalue_line(
"markers",
"large: mark test as being slow or requiring a network",
)
config.addinivalue_line(
"markers",
"extra_large: mark test as being too large to run continuously",
)
config.addinivalue_line(
"markers",
"tf_only: mark test as a tf only test",
)
config.addinivalue_line(
"markers",
"keras_3_only: mark test as a keras 3 only test",
)
def pytest_collection_modifyitems(config, items):
run_extra_large_tests = config.getoption("--run_extra_large")
# Run large tests for --run_extra_large or --run_large.
run_large_tests = config.getoption("--run_large") or run_extra_large_tests
# Messages to annotate skipped tests with.
skip_large = pytest.mark.skipif(
not run_large_tests,
reason="need --run_large option to run",
)
skip_extra_large = pytest.mark.skipif(
not run_extra_large_tests,
reason="need --run_extra_large option to run",
)
tf_only = pytest.mark.skipif(
not backend_config.backend() == "tensorflow",
reason="tests only run on tf backend",
)
keras_3_only = pytest.mark.skipif(
not backend_config.keras_3(),
reason="tests only run on with multi-backend keras",
)
for item in items:
if "large" in item.keywords:
item.add_marker(skip_large)
if "extra_large" in item.keywords:
item.add_marker(skip_extra_large)
if "tf_only" in item.keywords:
item.add_marker(tf_only)
if "keras_3_only" in item.keywords:
item.add_marker(keras_3_only)
# Disable traceback filtering for quicker debugging of tests failures.
tf.debugging.disable_traceback_filtering()
if backend_config.keras_3():
keras.config.disable_traceback_filtering()
| keras-nlp/keras_nlp/conftest.py/0 | {
"file_path": "keras-nlp/keras_nlp/conftest.py",
"repo_id": "keras-nlp",
"token_count": 1616
} | 127 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.backend import random
from keras_nlp.layers.modeling.rotary_embedding import RotaryEmbedding
from keras_nlp.tests.test_case import TestCase
class RotaryEmbeddingTest(TestCase):
def test_layer_behaviors(self):
self.run_layer_test(
cls=RotaryEmbedding,
init_kwargs={
"max_wavelength": 1000,
"scaling_factor": 2.0,
"sequence_axis": 1,
"feature_axis": -1,
},
input_data=random.uniform(shape=(2, 4, 6)),
expected_output_shape=(2, 4, 6),
)
def test_layer_behaviors_4d(self):
self.run_layer_test(
cls=RotaryEmbedding,
init_kwargs={
"max_wavelength": 1000,
},
input_data=random.uniform(shape=(2, 8, 4, 6)),
expected_output_shape=(2, 8, 4, 6),
)
def test_dynamic_layer_output_shape(self):
embedding_layer = RotaryEmbedding()
hidden_size = 32
inputs = keras.Input(shape=(None, hidden_size))
outputs = embedding_layer(inputs)
# When using dynamic positional encoding shapes, the output is expected
# to be the same as the input shape in all dimensions but may be None.
expected_output_shape = (None, None, hidden_size)
self.assertEqual(expected_output_shape, outputs.shape)
    # Test with an extra dimension before the sequence length dimension.
def test_multi_dimension_layer_output_shape(self):
embedding_layer = RotaryEmbedding()
seq_length = 100
hidden_size = 32
inputs = keras.Input(shape=(None, seq_length, hidden_size))
outputs = embedding_layer(inputs)
# When using multiple dimensions before sequence length, the output is
# expected to be the same as the input shape in all dimensions.
expected_output_shape = (None, None, seq_length, hidden_size)
self.assertEqual(expected_output_shape, outputs.shape)
def test_output_correct_values(self):
embedding_layer = RotaryEmbedding()
model = keras.Sequential(
[
keras.Input(shape=(4, 6)),
embedding_layer,
]
)
input = ops.ones(shape=[1, 4, 6])
output = model(input)
        # Compare position encoding values for positions 0 and 3.
expected_0 = [1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000]
expected_3 = [-1.1311, 0.8515, 0.9935, -0.8489, 1.1291, 1.0064]
self.assertAllClose(output[0, 0, :], expected_0, atol=0.01, rtol=0.01)
self.assertAllClose(output[0, 3, :], expected_3, atol=0.01, rtol=0.01)
def test_start_index(self):
batch_size, seq_length, feature_size = 2, 3, 4
layer = RotaryEmbedding(seq_length)
data = random.uniform(shape=(batch_size, seq_length, feature_size))
full_output = layer(data)
sequential_output = ops.zeros((batch_size, seq_length, feature_size))
for i in range(seq_length):
            partial_output = layer(data[:, i : i + 1, :], start_index=i)
            sequential_output = ops.slice_update(
                sequential_output, (0, i, 0), partial_output
)
self.assertAllClose(full_output, sequential_output)
def test_permuted_axes(self):
batch_size, seq_length, feature_size = 2, 3, 4
data = random.uniform(shape=(batch_size, seq_length, feature_size))
layer = RotaryEmbedding(seq_length)
outputs = layer(data)
permuted_data = ops.transpose(data, (0, 2, 1))
permuted_layer = RotaryEmbedding(
seq_length, sequence_axis=-1, feature_axis=-2
)
permuted_outputs = permuted_layer(permuted_data)
self.assertAllClose(outputs, ops.transpose(permuted_outputs, (0, 2, 1)))
def test_float16_dtype(self):
embedding_layer = RotaryEmbedding(dtype="float16")
seq_length = 100
hidden_size = 32
inputs = keras.Input(shape=(seq_length, hidden_size))
outputs = embedding_layer(inputs)
# output dtype for this layer should be float16.
self.assertEqual(outputs.dtype, "float16")
| keras-nlp/keras_nlp/layers/modeling/rotary_embedding_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/modeling/rotary_embedding_test.py",
"repo_id": "keras-nlp",
"token_count": 2079
} | 128 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import tree
from keras_nlp.backend import config
from keras_nlp.backend import keras
from keras_nlp.utils.tensor_utils import (
convert_to_backend_tensor_or_python_list,
)
class PreprocessingLayer(keras.layers.Layer):
"""Preprocessing layer base class."""
def __init__(self, **kwargs):
super().__init__(**kwargs)
self._convert_input_args = False
self._allow_non_tensor_positional_args = True
        # Most preprocessing has no build.
if not hasattr(self, "build"):
self.built = True
def get_build_config(self):
return None
def __call__(self, *args, **kwargs):
# Always place on CPU for preprocessing, to avoid expensive back and
# forth copies to GPU before the trainable model.
with tf.device("cpu"):
outputs = super().__call__(*args, **kwargs)
# Jax and Torch lack native string and ragged types.
# If we are running on those backends and not running with tf.data
            # (we are outside a tf.function), we convert all ragged and string
            # tensors to pythonic types.
is_tf_backend = config.backend() == "tensorflow"
is_in_tf_graph = not tf.executing_eagerly()
if not is_tf_backend and not is_in_tf_graph:
outputs = tree.map_structure(
convert_to_backend_tensor_or_python_list, outputs
)
return outputs
| keras-nlp/keras_nlp/layers/preprocessing/preprocessing_layer.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/preprocessing/preprocessing_layer.py",
"repo_id": "keras-nlp",
"token_count": 788
} | 129 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.metrics.rouge_l import RougeL
from keras_nlp.tests.test_case import TestCase
class RougeLTest(TestCase):
def test_initialization(self):
rouge = RougeL()
result = rouge.result()
self.assertDictEqual(
result,
{"precision": 0.0, "recall": 0.0, "f1_score": 0.0},
)
def test_string_input(self):
rouge = RougeL(use_stemmer=False)
y_true = "the tiny little cat was found under the big funny bed"
y_pred = "the cat was under the bed"
rouge_val = rouge(y_true, y_pred)
self.assertAllClose(
rouge_val,
{"precision": 1.0, "recall": 0.545454, "f1_score": 0.705882},
)
def test_string_list_input(self):
rouge = RougeL(use_stemmer=False)
y_true = [
"the tiny little cat was found under the big funny bed",
"i really love contributing to KerasNLP",
]
y_pred = [
"the cat was under the bed",
"i love contributing to KerasNLP",
]
rouge_val = rouge(y_true, y_pred)
self.assertAllClose(
rouge_val,
{"precision": 1.0, "recall": 0.689393, "f1_score": 0.807486},
)
def test_tensor_input(self):
rouge = RougeL(use_stemmer=False)
y_true = tf.constant(
[
"the tiny little cat was found under the big funny bed",
"i really love contributing to KerasNLP",
]
)
y_pred = tf.constant(
["the cat was under the bed", "i love contributing to KerasNLP"]
)
rouge_val = rouge(y_true, y_pred)
self.assertAllClose(
rouge_val,
{"precision": 1.0, "recall": 0.689393, "f1_score": 0.807486},
)
def test_reset_state(self):
rouge = RougeL()
y_true = ["hey, this is great fun", "i love contributing to KerasNLP"]
y_pred = [
"great fun indeed",
"KerasNLP is awesome, i love contributing to it",
]
rouge.update_state(y_true, y_pred)
rouge_val = rouge.result()
self.assertNotAllClose(
rouge_val,
{"precision": 0.0, "recall": 0.0, "f1_score": 0.0},
)
rouge.reset_state()
rouge_val = rouge.result()
self.assertDictEqual(
rouge_val,
{"precision": 0.0, "recall": 0.0, "f1_score": 0.0},
)
def test_update_state(self):
rouge = RougeL()
y_true_1 = [
"the tiny little cat was found under the big funny bed",
"i really love contributing to KerasNLP",
]
y_pred_1 = [
"the cat was under the bed",
"i love contributing to KerasNLP",
]
rouge.update_state(y_true_1, y_pred_1)
rouge_val = rouge.result()
self.assertAllClose(
rouge_val,
{"precision": 1.0, "recall": 0.689393, "f1_score": 0.807486},
)
y_true_2 = ["what is your favourite show"]
y_pred_2 = ["my favourite show is silicon valley"]
rouge.update_state(y_true_2, y_pred_2)
rouge_val = rouge.result()
self.assertAllClose(
rouge_val,
{"precision": 0.777777, "recall": 0.592929, "f1_score": 0.659536},
)
def test_get_config(self):
rouge = RougeL(
use_stemmer=True,
dtype="float32",
name="rouge_l_test",
)
config = rouge.get_config()
expected_config_subset = {
"use_stemmer": True,
}
self.assertEqual(config, {**config, **expected_config_subset})
| keras-nlp/keras_nlp/metrics/rouge_l_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/metrics/rouge_l_test.py",
"repo_id": "keras-nlp",
"token_count": 2087
} | 130 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.position_embedding import PositionEmbedding
from keras_nlp.layers.modeling.reversible_embedding import ReversibleEmbedding
from keras_nlp.layers.modeling.transformer_encoder import TransformerEncoder
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.bert.bert_presets import backbone_presets
from keras_nlp.utils.keras_utils import gelu_approximate
from keras_nlp.utils.python_utils import classproperty
def bert_kernel_initializer(stddev=0.02):
return keras.initializers.TruncatedNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.BertBackbone")
class BertBackbone(Backbone):
"""A BERT encoder network.
This class implements a bi-directional Transformer-based encoder as
described in ["BERT: Pre-training of Deep Bidirectional Transformers for
Language Understanding"](https://arxiv.org/abs/1810.04805). It includes the
embedding lookups and transformer layers, but not the masked language model
or next sentence prediction heads.
The default constructor gives a fully customizable, randomly initialized
BERT encoder with any number of layers, heads, and embedding dimensions. To
load preset architectures and weights, use the `from_preset()` constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind.
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The size of the transformer encoding and pooler layers.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
dropout: float. Dropout probability for the Transformer encoder.
max_sequence_length: int. The maximum sequence length that this encoder
can consume. If None, `max_sequence_length` uses the value from
sequence length. This determines the variable shape for positional
embeddings.
num_segments: int. The number of types that the 'segment_ids' input can
take.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Examples:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]]),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained BERT encoder.
model = keras_nlp.models.BertBackbone.from_preset("bert_base_en_uncased")
model(input_data)
# Randomly initialized BERT encoder with a custom config.
model = keras_nlp.models.BertBackbone(
vocabulary_size=30552,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128,
)
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
dropout=0.1,
max_sequence_length=512,
num_segments=2,
dtype=None,
**kwargs,
):
# === Layers ===
self.token_embedding = ReversibleEmbedding(
input_dim=vocabulary_size,
output_dim=hidden_dim,
embeddings_initializer=bert_kernel_initializer(),
dtype=dtype,
name="token_embedding",
)
self.position_embedding = PositionEmbedding(
initializer=bert_kernel_initializer(),
sequence_length=max_sequence_length,
dtype=dtype,
name="position_embedding",
)
self.segment_embedding = keras.layers.Embedding(
input_dim=num_segments,
output_dim=hidden_dim,
embeddings_initializer=bert_kernel_initializer(),
dtype=dtype,
name="segment_embedding",
)
self.embeddings_add = keras.layers.Add(
dtype=dtype,
name="embeddings_add",
)
self.embeddings_layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-12,
dtype=dtype,
name="embeddings_layer_norm",
)
self.embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="embeddings_dropout",
)
self.transformer_layers = []
for i in range(num_layers):
layer = TransformerEncoder(
num_heads=num_heads,
intermediate_dim=intermediate_dim,
activation=gelu_approximate,
dropout=dropout,
layer_norm_epsilon=1e-12,
kernel_initializer=bert_kernel_initializer(),
dtype=dtype,
name=f"transformer_layer_{i}",
)
self.transformer_layers.append(layer)
self.pooled_dense = keras.layers.Dense(
hidden_dim,
kernel_initializer=bert_kernel_initializer(),
activation="tanh",
dtype=dtype,
name="pooled_dense",
)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
segment_id_input = keras.Input(
shape=(None,), dtype="int32", name="segment_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
# Embed tokens, positions, and segment ids.
tokens = self.token_embedding(token_id_input)
positions = self.position_embedding(tokens)
segments = self.segment_embedding(segment_id_input)
# Sum, normalize and apply dropout to embeddings.
x = self.embeddings_add((tokens, positions, segments))
x = self.embeddings_layer_norm(x)
x = self.embeddings_dropout(x)
for transformer_layer in self.transformer_layers:
x = transformer_layer(x, padding_mask=padding_mask_input)
# Construct the two BERT outputs. The pooled output is a dense layer on
# top of the [CLS] token.
sequence_output = x
cls_token_index = 0
pooled_output = self.pooled_dense(x[:, cls_token_index, :])
super().__init__(
inputs={
"token_ids": token_id_input,
"segment_ids": segment_id_input,
"padding_mask": padding_mask_input,
},
outputs={
"sequence_output": sequence_output,
"pooled_output": pooled_output,
},
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.max_sequence_length = max_sequence_length
self.num_segments = num_segments
self.cls_token_index = cls_token_index
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"max_sequence_length": self.max_sequence_length,
"num_segments": self.num_segments,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/bert/bert_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/bert/bert_backbone.py",
"repo_id": "keras-nlp",
"token_count": 3906
} | 131 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.backend import ops
from keras_nlp.models.bloom.bloom_backbone import BloomBackbone
from keras_nlp.tests.test_case import TestCase
class BloomBackboneTest(TestCase):
def setUp(self):
self.init_kwargs = {
"vocabulary_size": 10,
"num_layers": 2,
"num_heads": 4,
"hidden_dim": 8,
"intermediate_dim": 32,
"max_sequence_length": 10,
}
self.input_data = {
"token_ids": ops.ones((2, 5), dtype="int32"),
"padding_mask": ops.ones((2, 5), dtype="int32"),
}
def test_backbone_basics(self):
self.run_backbone_test(
cls=BloomBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output_shape=(2, 5, 8),
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=BloomBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.large
def test_smallest_preset(self):
self.run_preset_test(
cls=BloomBackbone,
preset="bloom_560m_multi",
input_data={
"token_ids": ops.array([[101, 1996, 4248, 102]], dtype="int32"),
"padding_mask": ops.ones((1, 4), dtype="int32"),
},
expected_output_shape=(1, 4, 1024),
# The forward pass from a preset should be stable!
expected_partial_output=ops.array(
[2.4394186, 1.4131186, -2.7810357, -6.330823, -1.0599766]
),
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in BloomBackbone.presets:
self.run_preset_test(
cls=BloomBackbone,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/bloom/bloom_backbone_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/bloom/bloom_backbone_test.py",
"repo_id": "keras-nlp",
"token_count": 1187
} | 132 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.deberta_v3.deberta_v3_masked_lm_preprocessor import (
DebertaV3MaskedLMPreprocessor,
)
from keras_nlp.models.deberta_v3.deberta_v3_tokenizer import DebertaV3Tokenizer
from keras_nlp.tests.test_case import TestCase
class DebertaV3MaskedLMPreprocessorTest(TestCase):
def setUp(self):
self.tokenizer = DebertaV3Tokenizer(
# Generated using create_deberta_v3_test_proto.py
proto=os.path.join(
self.get_test_data_dir(), "deberta_v3_test_vocab.spm"
)
)
self.init_kwargs = {
"tokenizer": self.tokenizer,
# Simplify our testing by masking every available token.
"mask_selection_rate": 1.0,
"mask_token_rate": 1.0,
"random_token_rate": 0.0,
"mask_selection_length": 4,
"sequence_length": 12,
}
self.input_data = ["the quick brown fox"]
def test_preprocessor_basics(self):
self.run_preprocessor_test(
cls=DebertaV3MaskedLMPreprocessor,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=(
{
"token_ids": [[1, 4, 4, 4, 4, 2, 0, 0, 0, 0, 0, 0]],
"padding_mask": [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]],
"mask_positions": [[1, 2, 3, 4]],
},
[[5, 10, 6, 8]],
[[1.0, 1.0, 1.0, 1.0]],
),
)
def test_no_masking_zero_rate(self):
no_mask_preprocessor = DebertaV3MaskedLMPreprocessor(
self.tokenizer,
mask_selection_rate=0.0,
mask_selection_length=4,
sequence_length=12,
)
input_data = ["the quick brown fox"]
self.assertAllClose(
no_mask_preprocessor(input_data),
(
{
"token_ids": [[1, 5, 10, 6, 8, 2, 0, 0, 0, 0, 0, 0]],
"padding_mask": [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]],
"mask_positions": [[0, 0, 0, 0]],
},
[[0, 0, 0, 0]],
[[0.0, 0.0, 0.0, 0.0]],
),
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in DebertaV3MaskedLMPreprocessor.presets:
self.run_preset_test(
cls=DebertaV3MaskedLMPreprocessor,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_masked_lm_preprocessor_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_masked_lm_preprocessor_test.py",
"repo_id": "keras-nlp",
"token_count": 1616
} | 133 |
# Copyright 2022 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.models.f_net.f_net_backbone import FNetBackbone
from keras_nlp.models.f_net.f_net_backbone import f_net_kernel_initializer
from keras_nlp.models.f_net.f_net_preprocessor import FNetPreprocessor
from keras_nlp.models.f_net.f_net_presets import backbone_presets
from keras_nlp.models.task import Task
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.FNetClassifier")
class FNetClassifier(Task):
"""An end-to-end f_net model for classification tasks.
This model attaches a classification head to a
`keras_nlp.model.FNetBackbone` instance, mapping from the backbone outputs
to logits suitable for a classification task. For usage of this model with
pre-trained weights, use the `from_preset()` constructor.
This model can optionally be configured with a `preprocessor` layer, in
which case it will automatically apply preprocessing to raw inputs during
`fit()`, `predict()`, and `evaluate()`. This is done by default when
creating the model with `from_preset()`.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind.
Args:
backbone: A `keras_nlp.models.FNetBackbone` instance.
num_classes: int. Number of classes to predict.
preprocessor: A `keras_nlp.models.FNetPreprocessor` or `None`. If
`None`, this model will not apply preprocessing, and inputs should
be preprocessed before calling the model.
activation: Optional `str` or callable. The
activation function to use on the model outputs. Set
`activation="softmax"` to return output probabilities.
Defaults to `None`.
hidden_dim: int. The size of the pooler layer.
dropout: float. The dropout probability value, applied after the dense
layer.
Examples:
Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]
# Pretrained classifier.
classifier = keras_nlp.models.FNetClassifier.from_preset(
"f_net_base_en",
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)
# Re-compile (e.g., with a new learning rate).
classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(5e-5),
jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
Preprocessed integer data.
```python
features = {
"token_ids": np.ones(shape=(2, 12), dtype="int32"),
"segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]
# Pretrained classifier without preprocessing.
classifier = keras_nlp.models.FNetClassifier.from_preset(
"f_net_base_en",
num_classes=4,
preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
"""
def __init__(
self,
backbone,
num_classes,
preprocessor=None,
activation=None,
dropout=0.1,
**kwargs,
):
# === Layers ===
self.backbone = backbone
self.preprocessor = preprocessor
self.output_dropout = keras.layers.Dropout(
dropout,
dtype=backbone.dtype_policy,
name="output_dropout",
)
self.output_dense = keras.layers.Dense(
num_classes,
kernel_initializer=f_net_kernel_initializer(),
activation=activation,
dtype=backbone.dtype_policy,
name="logits",
)
# === Functional Model ===
inputs = backbone.input
pooled = backbone(inputs)["pooled_output"]
pooled = self.output_dropout(pooled)
outputs = self.output_dense(pooled)
super().__init__(
inputs=inputs,
outputs=outputs,
**kwargs,
)
# === Config ===
self.num_classes = num_classes
self.activation = keras.activations.get(activation)
self.dropout = dropout
# === Default compilation ===
logit_output = self.activation == keras.activations.linear
self.compile(
loss=keras.losses.SparseCategoricalCrossentropy(
from_logits=logit_output
),
optimizer=keras.optimizers.Adam(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
def get_config(self):
config = super().get_config()
config.update(
{
"num_classes": self.num_classes,
"dropout": self.dropout,
"activation": keras.activations.serialize(self.activation),
}
)
return config
@classproperty
def backbone_cls(cls):
return FNetBackbone
@classproperty
def preprocessor_cls(cls):
return FNetPreprocessor
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
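# Illustrative sketch (not part of the original file): how the default
# compilation above chooses `from_logits` from the classifier's activation.
# Uses only the `keras` backend import already present in this module.
def _sketch_default_loss_selection(activation=None):
    # With `activation=None` the "logits" Dense layer stays linear, so the
    # loss is built with `from_logits=True`; with `activation="softmax"` the
    # model outputs probabilities and `from_logits` becomes `False`.
    logit_output = (
        keras.activations.get(activation) == keras.activations.linear
    )
    return keras.losses.SparseCategoricalCrossentropy(from_logits=logit_output)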
| keras-nlp/keras_nlp/models/f_net/f_net_classifier.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/f_net/f_net_classifier.py",
"repo_id": "keras-nlp",
"token_count": 2477
} | 134 |
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from absl import logging
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import ops
from keras_nlp.models.gemma.gemma_preprocessor import GemmaPreprocessor
from keras_nlp.utils.keras_utils import (
convert_inputs_to_list_of_tensor_segments,
)
from keras_nlp.utils.keras_utils import pack_x_y_sample_weight
@keras_nlp_export("keras_nlp.models.GemmaCausalLMPreprocessor")
class GemmaCausalLMPreprocessor(GemmaPreprocessor):
"""Gemma Causal LM preprocessor.
This preprocessing layer is meant for use with
`keras_nlp.models.GemmaCausalLM`. By default, it will take in batches of
strings, and return outputs in a `(x, y, sample_weight)` format, where the
`y` label is the next token id in the `x` sequence.
For use with generation, the layer also exposes two methods
`generate_preprocess()` and `generate_postprocess()`. When this preprocessor
is attached to a `keras_nlp.models.GemmaCausalLM` instance, these methods
will be called implicitly in `generate()`. They can also be called
standalone (e.g. to precompute preprocessing inputs for generation in a
separate process).
Args:
tokenizer: A `keras_nlp.models.GemmaTokenizer` instance.
sequence_length: The length of the packed inputs.
add_start_token: If `True`, the preprocessor will prepend the tokenizer
start token to each input sequence.
add_end_token: If `True`, the preprocessor will append the tokenizer
end token to each input sequence.
Call arguments:
x: A string, `tf.Tensor` or list of python strings.
y: Label data. Should always be `None` as the layer generates labels.
sample_weight: Label weights. Should always be `None` as the layer
generates label weights.
sequence_length: Pass to override the configured `sequence_length` of
the layer.
Examples:
```python
# Load the preprocessor from a preset.
preprocessor = keras_nlp.models.GemmaCausalLMPreprocessor.from_preset(
"gemma_2b_en"
)
# Tokenize and pack a single sentence.
preprocessor("The quick brown fox jumped.")
# Tokenize a batch of sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])
# Apply tokenization to a `tf.data.Dataset`.
features = tf.constant(["The quick brown fox.", "Call me Ishmael."])
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Prepare tokens for generation (no end token).
preprocessor.generate_preprocess(["The quick brown fox jumped."])
# Map generation outputs back to strings.
preprocessor.generate_postprocess({
'token_ids': np.array([[2, 714, 4320, 8426, 25341, 32292, 235265, 0]]),
'padding_mask': np.array([[ 1, 1, 1, 1, 1, 1, 1, 0]]),
})
```
"""
def call(
self,
x,
y=None,
sample_weight=None,
sequence_length=None,
):
if y is not None or sample_weight is not None:
logging.warning(
"`GemmaCausalLMPreprocessor` generates `y` and `sample_weight` "
"based on your input data, but your data already contains `y` "
"or `sample_weight`. Your `y` and `sample_weight` will be "
"ignored."
)
sequence_length = sequence_length or self.sequence_length
x = convert_inputs_to_list_of_tensor_segments(x)[0]
x = self.tokenizer(x)
# Pad with one extra token to account for the truncation below.
token_ids, padding_mask = self.packer(
x,
sequence_length=sequence_length + 1,
add_start_value=self.add_start_token,
add_end_value=self.add_end_token,
)
# The last token does not have a next token, so we truncate it out.
x = {
"token_ids": token_ids[..., :-1],
"padding_mask": padding_mask[..., :-1],
}
# Target `y` will be the next token.
y, sample_weight = token_ids[..., 1:], padding_mask[..., 1:]
return pack_x_y_sample_weight(x, y, sample_weight)
def generate_preprocess(
self,
x,
sequence_length=None,
):
"""Covert strings to integer token input for generation.
Similar to calling the layer for training, this method takes in strings
or tensor strings, tokenizes and packs the input, and computes a padding
mask masking all inputs not filled in with a padded value.
Unlike calling the layer for training, this method does not compute
labels and will never append a `tokenizer.end_token_id` to the end of
the sequence (as generation is expected to continue at the end of the
inputted prompt).
"""
if not self.built:
self.build(None)
x = convert_inputs_to_list_of_tensor_segments(x)[0]
x = self.tokenizer(x)
token_ids, padding_mask = self.packer(
x, sequence_length=sequence_length, add_end_value=False
)
return {
"token_ids": token_ids,
"padding_mask": padding_mask,
}
def generate_postprocess(
self,
x,
):
"""Covert integer token output to strings for generation.
This method reverses `generate_preprocess()`, by first removing all
padding and start/end tokens, and then converting the integer sequence
back to a string.
"""
if not self.built:
self.build(None)
token_ids, padding_mask = x["token_ids"], x["padding_mask"]
token_ids = ops.convert_to_numpy(token_ids)
mask = ops.convert_to_numpy(padding_mask)
# Also strip any special tokens during detokenization (e.g. the start
# and end markers). In the future we could make this configurable.
mask = mask & (token_ids != self.tokenizer.start_token_id)
mask = mask & (token_ids != self.tokenizer.pad_token_id)
mask = mask & (token_ids != self.tokenizer.end_token_id)
token_ids = tf.ragged.boolean_mask(token_ids, mask)
return self.tokenizer.detokenize(token_ids)
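# Illustrative sketch (not part of the original file): the next-token shift
# performed in `call()` above, shown on a toy packed sequence. The token ids
# below are made up; a real preprocessor obtains them from the tokenizer and
# packer before truncating the extra position.
def _sketch_next_token_shift():
    packed = ops.convert_to_tensor([[2, 651, 4320, 8426, 1, 0]])
    features = packed[..., :-1]  # model input, one token shorter than packed
    labels = packed[..., 1:]  # target `y` is simply the next token id
    return features, labels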
| keras-nlp/keras_nlp/models/gemma/gemma_causal_lm_preprocessor.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gemma/gemma_causal_lm_preprocessor.py",
"repo_id": "keras-nlp",
"token_count": 2695
} | 135 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.layers.modeling.transformer_layer_utils import (
compute_causal_mask,
)
from keras_nlp.layers.modeling.transformer_layer_utils import (
merge_padding_and_attention_mask,
)
from keras_nlp.models.gpt_neo_x.gpt_neo_x_attention import GPTNeoXAttention
from keras_nlp.utils.keras_utils import clone_initializer
class GPTNeoXDecoder(keras.layers.Layer):
"""GPTNeoX decoder.
This class follows the architecture of the GPT-NeoX decoder layer in the
paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745).
Users can instantiate multiple instances of this class to stack up a decoder.
This layer will always apply a causal mask to the decoder attention layer.
Args:
intermediate_dim: int, the hidden size of feedforward network.
num_heads: int, the number of heads for multi-head attention.
        dropout: float. The dropout value, shared by the multi-head attention
            and feedforward layers.
        activation: string or `keras.activations`. The activation function of
            the feedforward network.
layer_norm_epsilon: float. The epsilon value in layer
normalization components.
kernel_initializer: string or `keras.initializers` initializer. The
kernel initializer for the dense and multi-head attention layers.
bias_initializer: string or `keras.initializers` initializer. The bias
initializer for the dense and multi-head attention layers.
rotary_max_wavelength: int. The maximum angular wavelength of the
sine/cosine curves, for rotary embeddings.
rotary_percentage: float. The percentage by which query, key, value
matrices are to be rotated.
        max_sequence_length: int. The maximum sequence length that this decoder
can consume. If `None`, `max_sequence_length` uses the value from
sequence length. This determines the variable shape for positional
embeddings.
name: string. The name of the layer.
"""
def __init__(
self,
intermediate_dim,
num_heads,
dropout=0.0,
activation="relu",
layer_norm_epsilon=1e-5,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
rotary_percentage=0.25,
rotary_max_wavelength=10000,
max_sequence_length=512,
**kwargs,
):
super().__init__(**kwargs)
self.intermediate_dim = intermediate_dim
self.num_heads = num_heads
self.dropout = dropout
self.rotary_percentage = rotary_percentage
self.rotary_max_wavelength = rotary_max_wavelength
self.max_sequence_length = max_sequence_length
self.activation = keras.activations.get(activation)
self.layer_norm_epsilon = layer_norm_epsilon
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.supports_masking = True
self.rotary_percentage = rotary_percentage
self._decoder_sequence_shape = None
def build(self, decoder_sequence_shape):
self._decoder_sequence_shape = decoder_sequence_shape
hidden_dim = decoder_sequence_shape[-1]
# Self attention layers.
self._self_attention_layer = GPTNeoXAttention(
num_heads=self.num_heads,
hidden_dim=hidden_dim,
dropout=self.dropout,
rotary_percentage=self.rotary_percentage,
rotary_max_wavelength=self.rotary_max_wavelength,
max_sequence_length=self.max_sequence_length,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="self_attention",
)
self._self_attention_layer.build(decoder_sequence_shape)
self._self_attention_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="self_attention_layer_norm",
)
self._self_attention_layer_norm.build(decoder_sequence_shape)
self._self_attention_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="self_attention_dropout",
)
# Feedforward layers.
self._feedforward_intermediate_dense = keras.layers.Dense(
self.intermediate_dim,
activation=self.activation,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="feedforward_intermediate_dense",
)
self._feedforward_intermediate_dense.build(decoder_sequence_shape)
self._feedforward_output_dense = keras.layers.Dense(
hidden_dim,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="feedforward_output_dense",
)
intermediate_shape = list(decoder_sequence_shape)
intermediate_shape[-1] = self.intermediate_dim
self._feedforward_output_dense.build(tuple(intermediate_shape))
self._feedforward_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="feedforward_layer_norm",
)
self._feedforward_layer_norm.build(decoder_sequence_shape)
self._feedforward_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="feedforward_dropout",
)
self.built = True
def call(
self,
decoder_sequence,
decoder_padding_mask=None,
decoder_attention_mask=None,
self_attention_cache=None,
self_attention_cache_update_index=None,
):
self_attention_mask = self._compute_self_attention_mask(
decoder_sequence=decoder_sequence,
decoder_padding_mask=decoder_padding_mask,
decoder_attention_mask=decoder_attention_mask,
self_attention_cache=self_attention_cache,
self_attention_cache_update_index=self_attention_cache_update_index,
)
residual = decoder_sequence
x = self._self_attention_layer_norm(decoder_sequence)
# Self attention block.
x, self_attention_cache = self._self_attention_layer(
hidden_states=x,
attention_mask=self_attention_mask,
cache=self_attention_cache,
cache_update_index=self_attention_cache_update_index,
)
x = self._self_attention_dropout(x)
attention_output = x
x = self._feedforward_layer_norm(decoder_sequence)
x = self._feedforward_intermediate_dense(x)
x = self._feedforward_output_dense(x)
feedforward_output = x
x = feedforward_output + attention_output + residual
if self_attention_cache is not None:
return (x, self_attention_cache)
else:
return x
def _compute_self_attention_mask(
self,
decoder_sequence,
decoder_padding_mask,
decoder_attention_mask,
self_attention_cache=None,
self_attention_cache_update_index=None,
):
decoder_mask = merge_padding_and_attention_mask(
decoder_sequence, decoder_padding_mask, decoder_attention_mask
)
batch_size = ops.shape(decoder_sequence)[0]
input_length = output_length = ops.shape(decoder_sequence)[1]
# We need to handle a rectangular causal mask when doing cached
# decoding. For generative inference, `decoder_sequence` will
# generally be length 1, and `cache` will be the full generation length.
if self_attention_cache is not None:
input_length = ops.shape(self_attention_cache)[2]
causal_mask = compute_causal_mask(
batch_size,
input_length,
output_length,
(
0
if self_attention_cache_update_index is None
else self_attention_cache_update_index
),
)
return (
ops.minimum(decoder_mask, causal_mask)
if decoder_mask is not None
else causal_mask
)
def get_config(self):
config = super().get_config()
config.update(
{
"intermediate_dim": self.intermediate_dim,
"num_heads": self.num_heads,
"dropout": self.dropout,
"rotary_percentage": self.rotary_percentage,
"rotary_max_wavelength": self.rotary_max_wavelength,
"max_sequence_length": self.max_sequence_length,
"activation": keras.activations.serialize(self.activation),
"layer_norm_epsilon": self.layer_norm_epsilon,
"kernel_initializer": keras.initializers.serialize(
self.kernel_initializer
),
"bias_initializer": keras.initializers.serialize(
self.bias_initializer
),
"decoder_sequence_shape": self._decoder_sequence_shape,
}
)
return config
def compute_output_shape(self, decoder_sequence_shape):
return decoder_sequence_shape
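# Illustrative sketch (not part of the original file): the "rectangular"
# causal mask built in `_compute_self_attention_mask` during cached decoding.
# When generating a single new token, the query length is 1 while the key
# length is the full cache length, so the mask has shape (batch, 1, cache_len)
# and only unmasks key positions up to the current cache update index.
def _sketch_cached_causal_mask(batch_size=1, cache_length=8, cache_update_index=3):
    return compute_causal_mask(
        batch_size,
        cache_length,  # input (key/value) length comes from the cache
        1,  # output (query) length is the single token being decoded
        cache_update_index,
    )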
| keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_decoder.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_decoder.py",
"repo_id": "keras-nlp",
"token_count": 4534
} | 136 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.backend import ops
from keras_nlp.models.mistral.mistral_backbone import MistralBackbone
from keras_nlp.tests.test_case import TestCase
class MistralBackboneTest(TestCase):
def setUp(self):
self.init_kwargs = {
"vocabulary_size": 10,
"num_layers": 2,
"num_query_heads": 8,
"num_key_value_heads": 4,
"hidden_dim": 16,
"intermediate_dim": 8,
"sliding_window": 2,
}
self.input_data = {
"token_ids": ops.ones((2, 5), dtype="int32"),
"padding_mask": ops.ones((2, 5), dtype="int32"),
}
def test_backbone_basics(self):
self.run_backbone_test(
cls=MistralBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output_shape=(2, 5, 16),
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=MistralBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
def test_num_parameters(self):
model = MistralBackbone(**self.init_kwargs)
# Reference value calculated using the PyTorch model
self.assertEqual(model.count_params(), 2704)
@pytest.mark.extra_large
def test_smallest_preset(self):
self.run_preset_test(
cls=MistralBackbone,
preset="mistral_7b_en",
input_data={
"token_ids": ops.array([[1, 1824, 349, 524, 11234, 28804]]),
"padding_mask": ops.ones((1, 6), dtype="int32"),
},
expected_output_shape=(1, 6, 4096),
# The forward pass from a preset should be stable!
# Reference values computed using PyTorch HF model.
expected_partial_output=ops.array(
[-1.6875, 0.5117, -1.7188, 2.3125, -0.0996]
),
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in MistralBackbone.presets:
self.run_preset_test(
cls=MistralBackbone,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/mistral/mistral_backbone_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/mistral/mistral_backbone_test.py",
"repo_id": "keras-nlp",
"token_count": 1315
} | 137 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.models.roberta.roberta_masked_lm_preprocessor import (
RobertaMaskedLMPreprocessor,
)
from keras_nlp.models.roberta.roberta_tokenizer import RobertaTokenizer
from keras_nlp.tests.test_case import TestCase
class RobertaMaskedLMPreprocessorTest(TestCase):
def setUp(self):
self.vocab = ["<s>", "<pad>", "</s>", "air", "ฤ air", "plane", "ฤ at"]
self.vocab += ["port", "<mask>"]
self.vocab = dict([(token, i) for i, token in enumerate(self.vocab)])
        self.merges = ["Ġ a", "Ġ t", "Ġ i", "Ġ b", "a i", "p l", "n e"]
        self.merges += ["Ġa t", "p o", "r t", "Ġt h", "ai r", "pl a", "po rt"]
        self.merges += ["Ġai r", "Ġa i", "pla ne"]
self.tokenizer = RobertaTokenizer(
vocabulary=self.vocab, merges=self.merges
)
self.init_kwargs = {
"tokenizer": self.tokenizer,
# Simplify our testing by masking every available token.
"mask_selection_rate": 1.0,
"mask_token_rate": 1.0,
"random_token_rate": 0.0,
"mask_selection_length": 4,
"sequence_length": 12,
}
self.input_data = [" airplane airport"]
def test_preprocessor_basics(self):
self.run_preprocessor_test(
cls=RobertaMaskedLMPreprocessor,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=(
{
"token_ids": [[0, 8, 8, 8, 8, 2, 1, 1, 1, 1, 1, 1]],
"padding_mask": [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]],
"mask_positions": [[1, 2, 3, 4]],
},
[[4, 5, 4, 7]],
[[1.0, 1.0, 1.0, 1.0]],
),
)
def test_no_masking_zero_rate(self):
no_mask_preprocessor = RobertaMaskedLMPreprocessor(
self.tokenizer,
mask_selection_rate=0.0,
mask_selection_length=4,
sequence_length=12,
)
input_data = [" airplane airport"]
self.assertAllClose(
no_mask_preprocessor(input_data),
(
{
"token_ids": [[0, 4, 5, 4, 7, 2, 1, 1, 1, 1, 1, 1]],
"padding_mask": [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]],
"mask_positions": [[0, 0, 0, 0]],
},
[[0, 0, 0, 0]],
[[0.0, 0.0, 0.0, 0.0]],
),
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in RobertaMaskedLMPreprocessor.presets:
self.run_preset_test(
cls=RobertaMaskedLMPreprocessor,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/roberta/roberta_masked_lm_preprocessor_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/roberta/roberta_masked_lm_preprocessor_test.py",
"repo_id": "keras-nlp",
"token_count": 1710
} | 138 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from rich import console as rich_console
from rich import markup
from rich import table as rich_table
from keras_nlp.backend import config
from keras_nlp.backend import keras
from keras_nlp.utils.keras_utils import print_msg
from keras_nlp.utils.pipeline_model import PipelineModel
from keras_nlp.utils.preset_utils import check_preset_class
from keras_nlp.utils.preset_utils import load_from_preset
from keras_nlp.utils.python_utils import classproperty
from keras_nlp.utils.python_utils import format_docstring
@keras.saving.register_keras_serializable(package="keras_nlp")
class Task(PipelineModel):
"""Base class for Task models."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._functional_layer_ids = set(
id(layer) for layer in self._flatten_layers()
)
self._initialized = True
def __dir__(self):
if config.keras_3():
return super().__dir__()
        # Temporary fixes for Keras 2 saving. This mimics the following PR for
        # older versions of Keras: https://github.com/keras-team/keras/pull/18982
def filter_fn(attr):
if attr in [
"_layer_checkpoint_dependencies",
"transformer_layers",
"encoder_transformer_layers",
"decoder_transformer_layers",
]:
return False
return id(getattr(self, attr)) not in self._functional_layer_ids
return filter(filter_fn, super().__dir__())
def _check_for_loss_mismatch(self, loss):
"""Check for a softmax/from_logits mismatch after compile.
We cannot handle this in the general case, but we can handle this for
the extremely common case of a single `SparseCategoricalCrossentropy`
loss, and a `None` or `"softmax"` activation.
"""
# Only handle a single loss.
if isinstance(loss, (dict, list, tuple)):
return
# Only handle tasks with activation.
if not hasattr(self, "activation"):
return
loss = keras.losses.get(loss)
activation = keras.activations.get(self.activation)
if isinstance(loss, keras.losses.SparseCategoricalCrossentropy):
from_logits = loss.get_config()["from_logits"]
elif loss == keras.losses.sparse_categorical_crossentropy:
from_logits = False
else:
# Only handle sparse categorical crossentropy.
return
softmax_output = activation == keras.activations.softmax
logit_output = activation == keras.activations.linear
if softmax_output and from_logits:
raise ValueError(
"The `loss` passed to `compile()` expects logit output, but "
"the model is configured to output softmax probabilities "
"(`activation='softmax'`). This will not converge! Pass "
"`from_logits=False` to your loss, e.g. "
"`loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False)`. "
)
if logit_output and not from_logits:
raise ValueError(
"The `loss` passed to `compile()` expects softmax probability "
"output, but the model is configured to output logits "
"(`activation=None`). This will not converge! Pass "
"`from_logits=True` to your loss, e.g. "
"`loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True)`. "
)
def compile(self, optimizer="rmsprop", loss=None, **kwargs):
# Temporarily disable jit compilation on torch.
if config.backend() == "torch":
kwargs["jit_compile"] = False
self._check_for_loss_mismatch(loss)
super().compile(optimizer=optimizer, loss=loss, **kwargs)
def preprocess_samples(self, x, y=None, sample_weight=None):
if self.preprocessor is not None:
return self.preprocessor(x, y=y, sample_weight=sample_weight)
else:
return super().preprocess_samples(x, y, sample_weight)
def __setattr__(self, name, value):
# Work around setattr issues for Keras 2 and Keras 3 torch backend.
# Since all our state is covered by functional model we can route
# around custom setattr calls.
is_property = isinstance(getattr(type(self), name, None), property)
        is_uninitialized = not hasattr(self, "_initialized")
        is_torch = config.backend() == "torch"
        is_keras_2 = not config.keras_3()
        if is_torch and (is_property or is_uninitialized):
            return object.__setattr__(self, name, value)
        if is_keras_2 and is_uninitialized:
            return object.__setattr__(self, name, value)
return super().__setattr__(name, value)
@property
def backbone(self):
"""A `keras.Model` instance providing the backbone sub-model."""
return self._backbone
@backbone.setter
def backbone(self, value):
self._backbone = value
@property
def preprocessor(self):
"""A `keras.layers.Layer` instance used to preprocess inputs."""
return self._preprocessor
@preprocessor.setter
def preprocessor(self, value):
self._preprocessor = value
def get_config(self):
# Don't chain to super here. The default `get_config()` for functional
# models is nested and cannot be passed to our Task constructors.
return {
"backbone": keras.layers.serialize(self.backbone),
"preprocessor": keras.layers.serialize(self.preprocessor),
"name": self.name,
}
@classmethod
def from_config(cls, config):
# The default `from_config()` for functional models will return a
# vanilla `keras.Model`. We override it to get a subclass instance back.
if "backbone" in config and isinstance(config["backbone"], dict):
config["backbone"] = keras.layers.deserialize(config["backbone"])
if "preprocessor" in config and isinstance(
config["preprocessor"], dict
):
config["preprocessor"] = keras.layers.deserialize(
config["preprocessor"]
)
return cls(**config)
@classproperty
def backbone_cls(cls):
return None
@classproperty
def preprocessor_cls(cls):
return None
@classproperty
def presets(cls):
return {}
@classmethod
def from_preset(
cls,
preset,
load_weights=True,
**kwargs,
):
"""Instantiate {{model_task_name}} model from preset architecture and weights.
Args:
preset: string. Must be one of "{{preset_names}}".
load_weights: Whether to load pre-trained weights into model.
Defaults to `True`.
Examples:
```python
# Load architecture and weights from preset
model = {{model_task_name}}.from_preset("{{example_preset_name}}")
# Load randomly initialized model from preset architecture
model = {{model_task_name}}.from_preset(
"{{example_preset_name}}",
load_weights=False
)
```
"""
if "backbone" in kwargs:
raise ValueError(
"You cannot pass a `backbone` argument to the `from_preset` "
f"method. Instead, call the {cls.__name__} default "
"constructor with a `backbone` argument. "
f"Received: backbone={kwargs['backbone']}."
)
# We support short IDs for official presets, e.g. `"bert_base_en"`.
# Map these to a Kaggle Models handle.
if preset in cls.presets:
preset = cls.presets[preset]["kaggle_handle"]
preset_cls = check_preset_class(preset, (cls, cls.backbone_cls))
# Backbone case.
if preset_cls == cls.backbone_cls:
# Forward dtype to the backbone.
config_overrides = {}
if "dtype" in kwargs:
config_overrides["dtype"] = kwargs.pop("dtype")
backbone = load_from_preset(
preset,
load_weights=load_weights,
config_overrides=config_overrides,
)
if "preprocessor" in kwargs:
preprocessor = kwargs.pop("preprocessor")
else:
tokenizer = load_from_preset(
preset,
config_file="tokenizer.json",
)
preprocessor = cls.preprocessor_cls(tokenizer=tokenizer)
return cls(backbone=backbone, preprocessor=preprocessor, **kwargs)
# Task case.
return load_from_preset(
preset,
load_weights=load_weights,
config_overrides=kwargs,
)
def __init_subclass__(cls, **kwargs):
# Use __init_subclass__ to setup a correct docstring for from_preset.
super().__init_subclass__(**kwargs)
# If the subclass does not define `from_preset`, assign a wrapper so that
# each class can have a distinct docstring.
if "from_preset" not in cls.__dict__:
def from_preset(calling_cls, *args, **kwargs):
return super(cls, calling_cls).from_preset(*args, **kwargs)
cls.from_preset = classmethod(from_preset)
# Format and assign the docstring unless the subclass has overridden it.
if cls.from_preset.__doc__ is None:
cls.from_preset.__func__.__doc__ = Task.from_preset.__doc__
format_docstring(
model_task_name=cls.__name__,
example_preset_name=next(iter(cls.presets), ""),
preset_names='", "'.join(cls.presets),
)(cls.from_preset.__func__)
@property
def layers(self):
# Remove preprocessor from layers so it does not show up in the summary.
layers = super().layers
if self.preprocessor and self.preprocessor in layers:
layers.remove(self.preprocessor)
return layers
def summary(
self,
line_length=None,
positions=None,
print_fn=None,
**kwargs,
):
"""Override `model.summary()` to show a preprocessor if set."""
# Compat fixes for tf.keras.
if not hasattr(self, "compiled"):
self.compiled = getattr(self.optimizer, "_is_compiled", False)
if (
self.compiled
and self.optimizer
and not hasattr(self.optimizer, "built")
):
self.optimizer.built = getattr(self.optimizer, "_built", False)
# Below is copied from keras-core for now.
# We should consider an API contract.
line_length = line_length or 108
if not print_fn and not keras.utils.is_interactive_logging_enabled():
print_fn = print_msg
def highlight_number(x):
return f"[color(45)]{x}[/]" if x is None else f"[color(34)]{x}[/]"
def highlight_symbol(x):
return f"[color(33)]{x}[/]"
def bold_text(x):
return f"[bold]{x}[/]"
if self.preprocessor:
# Create a rich console for printing. Capture for non-interactive logging.
if print_fn:
console = rich_console.Console(
highlight=False, force_terminal=False, color_system=None
)
console.begin_capture()
else:
console = rich_console.Console(highlight=False)
column_1 = rich_table.Column(
"Tokenizer (type)",
justify="left",
width=int(0.5 * line_length),
)
column_2 = rich_table.Column(
"Vocab #",
justify="right",
width=int(0.5 * line_length),
)
table = rich_table.Table(
column_1, column_2, width=line_length, show_lines=True
)
tokenizer = self.preprocessor.tokenizer
tokenizer_name = markup.escape(tokenizer.name)
tokenizer_class = highlight_symbol(
markup.escape(tokenizer.__class__.__name__)
)
table.add_row(
f"{tokenizer_name} ({tokenizer_class})",
highlight_number(f"{tokenizer.vocabulary_size():,}"),
)
            # Print the table to the console.
preprocessor_name = markup.escape(self.preprocessor.name)
console.print(bold_text(f'Preprocessor: "{preprocessor_name}"'))
console.print(table)
# Output captured summary for non-interactive logging.
if print_fn:
print_fn(console.end_capture(), line_break=False)
# Avoid `tf.keras.Model.summary()`, so the above output matches.
if config.keras_3():
super().summary(
line_length=line_length,
positions=positions,
print_fn=print_fn,
**kwargs,
)
else:
import keras_core
keras_core.Model.summary(
self,
line_length=line_length,
positions=positions,
print_fn=print_fn,
**kwargs,
)
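# Illustrative sketch (not part of the original file): the comparison at the
# heart of `_check_for_loss_mismatch` above. Assumes `loss` is a
# `SparseCategoricalCrossentropy` instance and `activation` is anything
# accepted by `keras.activations.get`.
def _sketch_loss_matches_activation(activation, loss):
    from_logits = loss.get_config()["from_logits"]
    softmax_output = (
        keras.activations.get(activation) == keras.activations.softmax
    )
    logit_output = (
        keras.activations.get(activation) == keras.activations.linear
    )
    # A softmax head must pair with `from_logits=False`, and a linear head
    # with `from_logits=True`; the mismatched combinations will not converge.
    return not (softmax_output and from_logits) and not (
        logit_output and not from_logits
    )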
| keras-nlp/keras_nlp/models/task.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/task.py",
"repo_id": "keras-nlp",
"token_count": 6374
} | 139 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.models.roberta import roberta_backbone
from keras_nlp.models.xlm_roberta.xlm_roberta_presets import backbone_presets
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.XLMRobertaBackbone")
class XLMRobertaBackbone(roberta_backbone.RobertaBackbone):
"""An XLM-RoBERTa encoder network.
This class implements a bi-directional Transformer-based encoder as
described in ["Unsupervised Cross-lingual Representation Learning at Scale"](https://arxiv.org/abs/1911.02116).
It includes the embedding lookups and transformer layers, but it does not
include the masked language modeling head used during pretraining.
The default constructor gives a fully customizable, randomly initialized
RoBERTa encoder with any number of layers, heads, and embedding dimensions.
To load preset architectures and weights, use the `from_preset()`
constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The size of the transformer encoding layer.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
dropout: float. Dropout probability for the Transformer encoder.
max_sequence_length: int. The maximum sequence length this encoder can
consume. The sequence length of the input must be less than
`max_sequence_length` default value. This determines the variable
shape for positional embeddings.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Examples:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained XLM-R encoder.
model = keras_nlp.models.XLMRobertaBackbone.from_preset(
"xlm_roberta_base_multi",
)
model(input_data)
# Randomly initialized XLM-R model with custom config.
model = keras_nlp.models.XLMRobertaBackbone(
vocabulary_size=250002,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128
)
model(input_data)
```
"""
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/xlm_roberta/xlm_roberta_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/xlm_roberta/xlm_roberta_backbone.py",
"repo_id": "keras-nlp",
"token_count": 1283
} | 140 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import tensorflow as tf
from absl.testing import parameterized
from keras_nlp.backend import ops
from keras_nlp.samplers.top_k_sampler import TopKSampler
from keras_nlp.tests.test_case import TestCase
class TopKSamplerTest(TestCase):
def setUp(self):
super().setUp()
# Use a simple alphabet of lowercase characters to [0, 26).
self.int_lookup = {i: chr(i + ord("a")) for i in range(26)}
self.char_lookup = {v: k for k, v in self.int_lookup.items()}
self.batch_size = 1
self.length = 12
self.vocab_size = len(self.int_lookup)
def next(prompt, cache, index):
# Dummy hidden states.
hidden_states = ops.ones([self.batch_size, 5])
# Return a distribution favoring the next char in cache.
logits = ops.one_hot(cache[:, index], self.vocab_size) * 1e9
return logits, hidden_states, cache
self.next = next
self.sampler = TopKSampler(k=5, temperature=1.0)
def join_as_string(self, x):
x = ops.convert_to_numpy(x)
return ["".join([self.int_lookup[i] for i in s]) for s in x]
def test_stateless_call(self):
def next(prompt, cache, index):
# Dummy hidden states.
hidden_states = ops.ones([self.batch_size, 5])
# Return a distribution favoring the first token in the vocab.
logits = (
ops.one_hot(
ops.zeros(self.batch_size, dtype="int32"),
self.vocab_size,
)
* 1e9
)
return logits, hidden_states, cache
prompt = ops.full((self.batch_size, self.length), self.char_lookup["z"])
output = self.sampler(
next=next,
prompt=prompt,
index=5,
)
self.assertEqual(self.join_as_string(output), ["zzzzzaaaaaaa"])
def test_stateful_call(self):
cache_chars = list("sequentially")
cache = ops.array([[self.char_lookup[c] for c in cache_chars]])
prompt = ops.full((self.batch_size, self.length), self.char_lookup["z"])
output = self.sampler(
next=self.next,
prompt=prompt,
cache=cache,
)
self.assertEqual(self.join_as_string(output), ["sequentially"])
def test_early_stopping(self):
cache_chars = list("sequentially")
cache = ops.array([[self.char_lookup[c] for c in cache_chars]])
prompt = ops.full((self.batch_size, self.length), self.char_lookup["z"])
output = self.sampler(
next=self.next,
prompt=prompt,
cache=cache,
end_token_id=self.char_lookup["t"],
)
self.assertEqual(self.join_as_string(output), ["sequentzzzzz"])
def test_outputs_in_top_k(self):
def next(prompt, cache, index):
# Dummy hidden states.
hidden_states = ops.ones([self.batch_size, 5])
# Return a distribution where each id is progressively less likely.
logits = ops.arange(self.vocab_size, 0, -1, dtype="float32")
logits = ops.repeat(logits[None, :], self.batch_size, axis=0)
return logits, hidden_states, cache
prompt = ops.full((self.batch_size, self.length), self.char_lookup["z"])
output = self.sampler(
next=next,
prompt=prompt,
)
output_ids = set(ops.convert_to_numpy(output[0]))
self.assertContainsSubset(output_ids, range(5))
@parameterized.named_parameters(
("jit_compile_false", False), ("jit_compile_true", True)
)
@pytest.mark.tf_only
def test_compilation(self, jit_compile):
cache_chars = list("sequentially")
cache = ops.array([[self.char_lookup[c] for c in cache_chars]])
prompt = ops.full((self.batch_size, self.length), self.char_lookup["z"])
@tf.function(jit_compile=jit_compile)
def generate(prompt, cache):
return self.sampler(self.next, prompt=prompt, cache=cache)
output = generate(prompt, cache)
self.assertEqual(self.join_as_string(output), ["sequentially"])
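# Illustrative sketch (not part of the original file): the `next` contract the
# tests above rely on. A sampler is invoked with a callable mapping
# (prompt, cache, index) -> (logits, hidden_states, cache); `prompt` already
# holds any user-provided tokens and is filled in from `index` onwards.
def _sketch_sampler_call(next_fn, prompt, cache=None, index=0):
    sampler = TopKSampler(k=5, temperature=1.0)
    return sampler(next=next_fn, prompt=prompt, cache=cache, index=index)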
| keras-nlp/keras_nlp/samplers/top_k_sampler_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/samplers/top_k_sampler_test.py",
"repo_id": "keras-nlp",
"token_count": 2133
} | 141 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import Iterable
from typing import List
import tensorflow as tf
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.tokenizers import tokenizer
from keras_nlp.utils.preset_utils import check_preset_class
from keras_nlp.utils.preset_utils import load_from_preset
from keras_nlp.utils.python_utils import classproperty
from keras_nlp.utils.python_utils import format_docstring
from keras_nlp.utils.tensor_utils import assert_tf_text_installed
from keras_nlp.utils.tensor_utils import convert_to_ragged_batch
from keras_nlp.utils.tensor_utils import is_int_dtype
from keras_nlp.utils.tensor_utils import is_string_dtype
try:
import tensorflow_text as tf_text
except ImportError:
tf_text = None
VOCAB_FILENAME = "vocabulary.txt"
# Matches whitespace and control characters.
WHITESPACE_REGEX = r"|".join(
[
r"\s",
# Invisible control characters
r"\p{Cc}",
r"\p{Cf}",
]
)
# Matches punctuation compatible with the original bert implementation.
PUNCTUATION_REGEX = r"|".join(
[
# Treat all non-letter/number ASCII as punctuation.
# Characters such as "^", "$", and "`" are not in the Unicode
# Punctuation class but we treat them as punctuation anyways.
r"[!-/]",
r"[:-@]",
r"[\[-`]",
r"[{-~]",
# Unicode punctuation class.
r"[\p{P}]",
]
)
# Matches CJK characters. Obtained from
# https://github.com/google-research/bert/blob/master/tokenization.py#L251.
CJK_REGEX = r"|".join(
[
r"[\x{4E00}-\x{9FFF}]",
r"[\x{3400}-\x{4DBF}]",
r"[\x{20000}-\x{2A6DF}]",
r"[\x{2A700}-\x{2B73F}]",
r"[\x{2B740}-\x{2B81F}]",
r"[\x{2B820}-\x{2CEAF}]",
r"[\x{F900}-\x{FAFF}]",
r"[\x{2F800}-\x{2FA1F}]",
]
)
# Matches both whitespace and punctuation.
WHITESPACE_AND_PUNCTUATION_REGEX = r"|".join(
[
WHITESPACE_REGEX,
PUNCTUATION_REGEX,
]
)
# Matches punctuation and CJK characters.
PUNCTUATION_AND_CJK_REGEX = r"|".join(
[
PUNCTUATION_REGEX,
CJK_REGEX,
]
)
# Matches whitespace, punctuation, and CJK characters.
WHITESPACE_PUNCTUATION_AND_CJK_REGEX = r"|".join(
[
WHITESPACE_AND_PUNCTUATION_REGEX,
CJK_REGEX,
]
)
def pretokenize(
text,
lowercase=False,
strip_accents=True,
split=True,
split_on_cjk=True,
):
"""Helper function that takes in a dataset element and pretokenizes it.
Args:
text: `tf.Tensor` or `tf.RaggedTensor`. Input to be pretokenized.
lowercase: bool. If True, the input text will be
            lowercased before tokenization. Defaults to `False`.
strip_accents: bool. If `True`, all accent marks will
be removed from text before tokenization. Defaults to `True`.
split: bool. If `True`, input will be split on
whitespace and punctuation marks, and all punctuation marks will be
kept as tokens. If `False`, input should be split ("pre-tokenized")
before calling the tokenizer, and passed as a dense or ragged tensor
of whole words. Defaults to `True`.
split_on_cjk: bool. If `True`, input will be split
on CJK characters, i.e., Chinese, Japanese, Korean and Vietnamese
characters (https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)).
Note that this is applicable only when `split` is `True`. Defaults to `True`.
Returns:
A tensor containing the pre-processed and pre-tokenized `text`.
"""
# Check for correct types.
if not is_string_dtype(text.dtype):
raise ValueError(
"The dataset elements in `data` must have string dtype. "
f"Received: {text.dtype}."
)
# Preprocess, lowercase, strip and split input data.
if text.shape.rank == 0:
text = tf.expand_dims(text, 0)
if split_on_cjk and split:
text = tf.strings.regex_replace(text, CJK_REGEX, r" \0 ")
if lowercase:
text = tf_text.case_fold_utf8(text)
if strip_accents:
# Normalize unicode to NFD, which splits out accent mark characters.
text = tf_text.normalize_utf8(text, "NFD")
# Remove the accent marks.
text = tf.strings.regex_replace(text, r"\p{Mn}", "")
if split:
if split_on_cjk:
split_pattern = WHITESPACE_PUNCTUATION_AND_CJK_REGEX
keep_split_pattern = PUNCTUATION_AND_CJK_REGEX
else:
split_pattern = WHITESPACE_AND_PUNCTUATION_REGEX
keep_split_pattern = PUNCTUATION_REGEX
text = tf_text.regex_split(
text,
delim_regex_pattern=split_pattern,
keep_delim_regex_pattern=keep_split_pattern,
)
return text
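# Illustrative sketch (not part of the original file): what `pretokenize`
# produces for a small input before WordPiece itself runs. Requires
# `tensorflow_text`; the result is a ragged batch of pre-tokenized words.
def _sketch_pretokenize():
    text = tf.constant(["Crème Brûlée, please!"])
    # With these flags the text is case-folded, accent marks are stripped, and
    # the string is split on whitespace/punctuation while keeping punctuation
    # as its own tokens, e.g. [["creme", "brulee", ",", "please", "!"]].
    return pretokenize(text, lowercase=True, strip_accents=True, split=True)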
@keras_nlp_export("keras_nlp.tokenizers.WordPieceTokenizer")
class WordPieceTokenizer(tokenizer.Tokenizer):
"""A WordPiece tokenizer layer.
This layer provides an efficient, in graph, implementation of the WordPiece
algorithm used by BERT and other models.
To make this layer more useful out of the box, the layer will pre-tokenize
the input, which will optionally lower-case, strip accents, and split the
input on whitespace and punctuation. Each of these pre-tokenization steps is
not reversible. The `detokenize` method will join words with a space, and
will not invert `tokenize` exactly.
If a more custom pre-tokenization step is desired, the layer can be
configured to apply only the strict WordPiece algorithm by passing
`lowercase=False`, `strip_accents=False` and `split=False`. In
this case, inputs should be pre-split string tensors or ragged tensors.
Tokenizer outputs can either be padded and truncated with a
`sequence_length` argument, or left un-truncated. The exact output will
depend on the rank of the input tensors.
If input is a batch of strings (rank > 0):
By default, the layer will output a `tf.RaggedTensor` where the last
dimension of the output is ragged. If `sequence_length` is set, the layer
will output a dense `tf.Tensor` where all inputs have been padded or
truncated to `sequence_length`.
If input is a scalar string (rank == 0):
By default, the layer will output a dense `tf.Tensor` with static shape
`[None]`. If `sequence_length` is set, the output will be
a dense `tf.Tensor` of shape `[sequence_length]`.
The output dtype can be controlled via the `dtype` argument, which should
be either an integer or string type.
Args:
vocabulary: A list of strings or a string filename path. If
passing a list, each element of the list should be a single
WordPiece token string. If passing a filename, the file should be a
plain text file containing a single WordPiece token per line.
sequence_length: int. If set, the output will be converted to a dense
tensor and padded/trimmed so all outputs are of sequence_length.
lowercase: bool. If `True`, the input text will be
lowercased before tokenization. Defaults to `False`.
strip_accents: bool. If `True`, all accent marks will
be removed from text before tokenization. Defaults to `False`.
split: bool. If `True`, input will be split on
whitespace and punctuation marks, and all punctuation marks will be
kept as tokens. If `False`, input should be split ("pre-tokenized")
before calling the tokenizer, and passed as a dense or ragged tensor
of whole words. Defaults to `True`.
split_on_cjk: bool. If True, input will be split
on CJK characters, i.e., Chinese, Japanese, Korean and Vietnamese
characters (https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)).
Note that this is applicable only when `split` is True.
Defaults to `True`.
suffix_indicator: str. The characters prepended to a
WordPiece to indicate that it is a suffix to another subword.
E.g. "##ing". Defaults to `"##"`.
oov_token: str. The string value to substitute for
an unknown token. It must be included in the vocab.
Defaults to `"[UNK]"`.
References:
- [Schuster and Nakajima, 2012](https://research.google/pubs/pub37842/)
- [Song et al., 2020](https://arxiv.org/abs/2012.15524)
Examples:
Ragged outputs.
>>> vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
>>> inputs = "The quick brown fox."
>>> tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
... vocabulary=vocab,
... lowercase=True,
... )
>>> outputs = tokenizer(inputs)
>>> np.array(outputs)
array([1, 2, 3, 4, 5, 6, 7], dtype=int32)
Dense outputs.
>>> vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
>>> inputs = ["The quick brown fox."]
>>> tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
... vocabulary=vocab,
... sequence_length=10,
... lowercase=True,
... )
>>> outputs = tokenizer(inputs)
>>> np.array(outputs)
array([[1, 2, 3, 4, 5, 6, 7, 0, 0, 0]], dtype=int32)
String output.
>>> vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
>>> inputs = "The quick brown fox."
>>> tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
... vocabulary=vocab,
... lowercase=True,
... dtype="string",
... )
>>> outputs = tokenizer(inputs)
>>> np.array(outputs).astype("U")
array(['the', 'qu', '##ick', 'br', '##own', 'fox', '.'], dtype='<U5')
Detokenization.
>>> vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
>>> inputs = "The quick brown fox."
>>> tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
... vocabulary=vocab,
... lowercase=True,
... )
>>> outputs = tokenizer.detokenize(tokenizer.tokenize(inputs))
>>> np.array(outputs).astype("U")
array('the quick brown fox .', dtype='<U21')
Custom splitting.
>>> vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
>>> inputs = "The$quick$brown$fox"
>>> tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
... vocabulary=vocab,
... split=False,
... lowercase=True,
... dtype='string',
... )
>>> split_inputs = tf.strings.split(inputs, sep="$")
>>> outputs = tokenizer(split_inputs)
>>> np.array(outputs).astype("U")
array(['the', 'qu', '##ick', 'br', '##own', 'fox'], dtype='<U5')
"""
def __init__(
self,
vocabulary=None,
sequence_length: int = None,
lowercase: bool = False,
strip_accents: bool = False,
split: bool = True,
split_on_cjk: bool = True,
suffix_indicator: str = "##",
oov_token: str = "[UNK]",
dtype="int32",
**kwargs,
) -> None:
assert_tf_text_installed(self.__class__.__name__)
if not is_int_dtype(dtype) and not is_string_dtype(dtype):
raise ValueError(
"Output dtype must be an integer type or a string. "
f"Received: dtype={dtype}"
)
super().__init__(dtype=dtype, **kwargs)
if oov_token is None:
raise ValueError("`oov_token` cannot be None.")
self.sequence_length = sequence_length
self.lowercase = lowercase
self.strip_accents = strip_accents
self.split = split
self.split_on_cjk = split_on_cjk
self.suffix_indicator = suffix_indicator
self.oov_token = oov_token
self.set_vocabulary(vocabulary)
def save_assets(self, dir_path):
path = os.path.join(dir_path, VOCAB_FILENAME)
with open(path, "w", encoding="utf-8") as file:
for token in self.vocabulary:
file.write(f"{token}\n")
def load_assets(self, dir_path):
path = os.path.join(dir_path, VOCAB_FILENAME)
self.set_vocabulary(path)
def set_vocabulary(self, vocabulary):
"""Set the tokenizer vocabulary to a file or list of strings."""
if vocabulary is None:
self.vocabulary = None
self._fast_word_piece = None
return
if isinstance(vocabulary, str):
with open(vocabulary, "r", encoding="utf-8") as file:
self.vocabulary = [line.rstrip() for line in file]
elif isinstance(vocabulary, Iterable):
# Make a defensive copy.
self.vocabulary = list(vocabulary)
else:
raise ValueError(
"Vocabulary must be an file path or list of terms. "
f"Received: vocabulary={vocabulary}"
)
if self.oov_token not in self.vocabulary:
raise ValueError(
f'Cannot find `oov_token="{self.oov_token}"` in the '
"vocabulary.\n"
"You can either update the vocabulary to include "
f'`"{self.oov_token}"`, or pass a different value for '
"the `oov_token` argument when creating the tokenizer."
)
self._fast_word_piece = tf_text.FastWordpieceTokenizer(
vocab=self.vocabulary,
token_out_type=self.compute_dtype,
suffix_indicator=self.suffix_indicator,
unknown_token=self.oov_token,
no_pretokenization=True,
support_detokenization=True,
)
def get_vocabulary(self) -> List[str]:
"""Get the tokenizer vocabulary as a list of strings tokens."""
self._check_vocabulary()
return self.vocabulary
def vocabulary_size(self) -> int:
"""Get the size of the tokenizer vocabulary."""
self._check_vocabulary()
return len(self.vocabulary)
def id_to_token(self, id: int) -> str:
"""Convert an integer id to a string token."""
self._check_vocabulary()
if id >= self.vocabulary_size() or id < 0:
raise ValueError(
f"`id` must be in range [0, {self.vocabulary_size() - 1}]. "
f"Received: {id}"
)
return self.vocabulary[id]
def token_to_id(self, token: str) -> int:
"""Convert a string token to an integer id."""
        # This will be slow, but keep memory usage down compared to building a
        # dict. Assuming the main use case is looking up a few special tokens
# early in the vocab, this should be fine.
self._check_vocabulary()
return self.vocabulary.index(token)
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary": None, # Save vocabulary via an asset!
"sequence_length": self.sequence_length,
"lowercase": self.lowercase,
"strip_accents": self.strip_accents,
"split": self.split,
"suffix_indicator": self.suffix_indicator,
"oov_token": self.oov_token,
}
)
return config
def _check_vocabulary(self):
if self.vocabulary is None:
raise ValueError(
"No vocabulary has been set for WordPieceTokenizer. Make sure "
"to pass a `vocabulary` argument when creating the layer."
)
def tokenize(self, inputs):
self._check_vocabulary()
if not isinstance(inputs, (tf.Tensor, tf.RaggedTensor)):
inputs = tf.convert_to_tensor(inputs)
scalar_input = inputs.shape.rank == 0
inputs = pretokenize(
inputs,
self.lowercase,
self.strip_accents,
self.split,
self.split_on_cjk,
)
# Apply WordPiece and coerce shape for outputs.
tokens = self._fast_word_piece.tokenize(inputs)
# By default tf.text tokenizes text with two ragged dimensions (one for
# split words and one for split subwords). We will collapse to a single
# ragged dimension which is a better out of box default.
tokens = tokens.merge_dims(-2, -1)
# Convert to a dense output if `sequence_length` is set.
if self.sequence_length:
output_shape = tokens.shape.as_list()
output_shape[-1] = self.sequence_length
tokens = tokens.to_tensor(shape=output_shape)
        # Convert to a dense output if input is scalar
if scalar_input:
tokens = tf.squeeze(tokens, 0)
tf.ensure_shape(tokens, shape=[self.sequence_length])
return tokens
def detokenize(self, inputs):
self._check_vocabulary()
inputs, unbatched, _ = convert_to_ragged_batch(inputs)
outputs = self._fast_word_piece.detokenize(inputs)
if unbatched:
outputs = tf.squeeze(outputs, 0)
return outputs
@classproperty
def presets(cls):
return {}
@classmethod
def from_preset(
cls,
preset,
**kwargs,
):
"""Instantiate {{model_name}} tokenizer from preset vocabulary.
Args:
preset: string. Must be one of "{{preset_names}}".
Examples:
```python
# Load a preset tokenizer.
tokenizer = {{model_name}}.from_preset("{{example_preset_name}}")
# Tokenize some input.
tokenizer("The quick brown fox tripped.")
# Detokenize some input.
tokenizer.detokenize([5, 6, 7, 8, 9])
```
"""
# We support short IDs for official presets, e.g. `"bert_base_en"`.
# Map these to a Kaggle Models handle.
if preset in cls.presets:
preset = cls.presets[preset]["kaggle_handle"]
config_file = "tokenizer.json"
check_preset_class(preset, cls, config_file=config_file)
return load_from_preset(
preset,
config_file=config_file,
config_overrides=kwargs,
)
def __init_subclass__(cls, **kwargs):
# Use __init_subclass__ to setup a correct docstring for from_preset.
super().__init_subclass__(**kwargs)
# If the subclass does not define from_preset, assign a wrapper so that
# each class can have a distinct docstring.
if "from_preset" not in cls.__dict__:
def from_preset(calling_cls, *args, **kwargs):
return super(cls, calling_cls).from_preset(*args, **kwargs)
cls.from_preset = classmethod(from_preset)
# Format and assign the docstring unless the subclass has overridden it.
if cls.from_preset.__doc__ is None:
cls.from_preset.__func__.__doc__ = (
WordPieceTokenizer.from_preset.__doc__
)
format_docstring(
model_name=cls.__name__,
example_preset_name=next(iter(cls.presets), ""),
preset_names='", "'.join(cls.presets),
)(cls.from_preset.__func__)
| keras-nlp/keras_nlp/tokenizers/word_piece_tokenizer.py/0 | {
"file_path": "keras-nlp/keras_nlp/tokenizers/word_piece_tokenizer.py",
"repo_id": "keras-nlp",
"token_count": 8559
} | 142 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Script to create (and optionally install) a `.whl` archive for KerasNLP.
Usage:
1. Create a `.whl` file in `dist/`:
```
python3 pip_build.py
```
2. Also install the new package immediately after:
```
python3 pip_build.py --install
```
"""
import argparse
import datetime
import glob
import os
import pathlib
import shutil
import namex
package = "keras_nlp"
build_directory = "tmp_build_dir"
dist_directory = "dist"
to_copy = [
"setup.py",
"setup.cfg",
"README.md",
]
def ignore_files(_, filenames):
return [f for f in filenames if "_test" in f]
def export_version_string(version, is_nightly=False):
"""Export Version and Package Name."""
if is_nightly:
date = datetime.datetime.now()
version += f".dev{date.strftime('%Y%m%d%H')}"
# Replaces `name="keras-nlp"` in `setup.py` with `keras-nlp-nightly`
with open("setup.py") as f:
setup_contents = f.read()
with open("setup.py", "w") as f:
setup_contents = setup_contents.replace(
'name="keras-nlp"', 'name="keras-nlp-nightly"'
)
setup_contents = setup_contents.replace(
'"tensorflow-text', '"tf-nightly", "tensorflow-text-nightly'
)
f.write(setup_contents)
# Overwrite the version string with our package version.
with open(os.path.join(package, "src", "version_utils.py")) as f:
version_contents = f.readlines()
with open(os.path.join(package, "src", "version_utils.py"), "w") as f:
for line in version_contents:
if line.startswith("__version__"):
f.write(f'__version__ = "{version}"\n')
else:
f.write(line)
# Make sure to export the __version__ string.
with open(os.path.join(package, "__init__.py")) as f:
init_contents = f.read()
with open(os.path.join(package, "__init__.py"), "w") as f:
f.write(init_contents)
f.write("from keras_nlp.src.version_utils import __version__\n")
def copy_source_to_build_directory(root_path):
# Copy sources (`keras_nlp/` directory and setup files) to build dir
os.chdir(root_path)
os.mkdir(build_directory)
shutil.copytree(
package, os.path.join(build_directory, package), ignore=ignore_files
)
for fname in to_copy:
shutil.copy(fname, os.path.join(f"{build_directory}", fname))
os.chdir(build_directory)
def run_namex_conversion():
# Restructure the codebase so that source files live in `keras_nlp/src`
namex.convert_codebase(package, code_directory="src")
# Generate API __init__.py files in `keras_nlp/`
namex.generate_api_files(package, code_directory="src", verbose=True)
def build_and_save_output(root_path, __version__):
"""Build the package."""
os.system("python3 -m build")
# Save the dist files generated by the build process
os.chdir(root_path)
if not os.path.exists(dist_directory):
os.mkdir(dist_directory)
for fpath in glob.glob(
os.path.join(build_directory, dist_directory, "*.*")
):
shutil.copy(fpath, dist_directory)
# Find the .whl file path
whl_path = None
for fname in os.listdir(dist_directory):
if __version__ in fname and fname.endswith(".whl"):
whl_path = os.path.abspath(os.path.join(dist_directory, fname))
if whl_path:
print(f"Build successful. Wheel file available at {whl_path}")
else:
print("Build failed.")
return whl_path
def build(root_path, is_nightly=False):
if os.path.exists(build_directory):
raise ValueError(f"Directory already exists: {build_directory}")
try:
copy_source_to_build_directory(root_path)
run_namex_conversion()
# Make sure to export the __version__ string
from keras_nlp.src import __version__ # noqa: E402
export_version_string(__version__, is_nightly)
return build_and_save_output(root_path, __version__)
finally:
# Clean up: remove the build directory (no longer needed)
shutil.rmtree(build_directory)
def install_whl(whl_fpath):
print("Installing wheel file.")
os.system(f"pip3 install {whl_fpath} --force-reinstall --no-dependencies")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--install",
action="store_true",
help="Whether to install the generated wheel file.",
)
parser.add_argument(
"--nightly",
action="store_true",
help="Whether to generate nightly wheel file.",
)
args = parser.parse_args()
root_path = pathlib.Path(__file__).parent.resolve()
whl_path = build(root_path, args.nightly)
if whl_path and args.install:
install_whl(whl_path)
| keras-nlp/pip_build.py/0 | {
"file_path": "keras-nlp/pip_build.py",
"repo_id": "keras-nlp",
"token_count": 2198
} | 143 |
<jupyter_start><jupyter_text>Install deps<jupyter_code>!pip install git+https://github.com/jbischof/keras-nlp.git@bert_ckpt tensorflow tf-models-official --upgrade --quiet
import json
import keras_nlp
import tensorflow as tf
import tensorflow_models as tfm
from tensorflow import keras
TOKEN_TYPE = "uncased"
MODEL_TYPE = "bert_base"
MODEL_NAME = MODEL_TYPE + "_" + TOKEN_TYPE
VOCAB_SIZE = 30522<jupyter_output><empty_output><jupyter_text>Load the model garden checkpoints and weights<jupyter_code># Model garden BERT paths.
zip_path = f"""https://storage.googleapis.com/tf_model_garden/nlp/bert/v3/{TOKEN_TYPE}_L-12_H-768_A-12.tar.gz"""
zip_file = keras.utils.get_file(
f"""/content/{MODEL_NAME}""",
zip_path,
extract=True,
archive_format="tar",
)
!tar -xvf """{MODEL_NAME}"""
# Model garden BERT paths.
extract_dir = "/content/tmp/temp_dir/raw/"
vocab_path = extract_dir + "vocab.txt"
checkpoint_path = extract_dir + "bert_model.ckpt"
config_path = extract_dir + "bert_config.json"
vars = tf.train.list_variables(checkpoint_path)
weights = {}
for name, shape in vars:
print(name, shape)
weight = tf.train.load_variable(checkpoint_path, name)
weights[name] = weight<jupyter_output>_CHECKPOINTABLE_OBJECT_GRAPH []
encoder/layer_with_weights-0/embeddings/.ATTRIBUTES/VARIABLE_VALUE [30522, 768]
encoder/layer_with_weights-1/embeddings/.ATTRIBUTES/VARIABLE_VALUE [512, 768]
encoder/layer_with_weights-10/_attention_layer/_key_dense/bias/.ATTRIBUTES/VARIABLE_VALUE [12, 64]
encoder/layer_with_weights-10/_attention_layer/_key_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE [768, 12, 64]
encoder/layer_with_weights-10/_attention_layer/_query_dense/bias/.ATTRIBUTES/VARIABLE_VALUE [12, 64]
encoder/layer_with_weights-10/_attention_layer/_query_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE [768, 12, 64]
encoder/layer_with_weights-10/_attention_layer/_value_dense/bias/.ATTRIBUTES/VARIABLE_VALUE [12, 64]
encoder/layer_with_weights-10/_attention_layer/_value_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE [768, 12, 64]
encoder/layer_with_weights-10/_attention_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE [768]
encoder/layer_with_weights-10/_attention_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE [768]
encode[...]<jupyter_text>Load BertBase model with KerasNLP.<jupyter_code>model = keras_nlp.models.BertBase(vocabulary_size=VOCAB_SIZE)
model.summary()<jupyter_output>Model: "bert"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
token_ids (InputLayer) [(None, None)] 0 []
token_embedding (Embedding) (None, None, 768) 23440896 ['token_ids[0][0]']
segment_ids (InputLayer) [(None, None)] 0 []
position_embedding (PositionEm (None, None, 768) 393216 ['token_embedding[0][0]'] [...]<jupyter_text>Convert Weights<jupyter_code>model.get_layer("token_embedding").embeddings.assign(
weights[
"encoder/layer_with_weights-0/embeddings/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer("position_embedding").position_embeddings.assign(
weights[
"encoder/layer_with_weights-1/embeddings/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer("segment_embedding").embeddings.assign(
weights[
"encoder/layer_with_weights-2/embeddings/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer("embeddings_layer_norm").gamma.assign(
weights["encoder/layer_with_weights-3/gamma/.ATTRIBUTES/VARIABLE_VALUE"]
)
model.get_layer("embeddings_layer_norm").beta.assign(
weights["encoder/layer_with_weights-3/beta/.ATTRIBUTES/VARIABLE_VALUE"]
)
for i in range(model.num_layers):
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._key_dense.kernel.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer/_key_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._key_dense.bias.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer/_key_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._query_dense.kernel.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer/_query_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._query_dense.bias.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer/_query_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._value_dense.kernel.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer/_value_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._value_dense.bias.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer/_value_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._output_dense.kernel.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._output_dense.bias.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer_norm.gamma.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer_norm.beta.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_attention_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._feedforward_intermediate_dense.kernel.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_intermediate_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._feedforward_intermediate_dense.bias.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_intermediate_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._feedforward_output_dense.kernel.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._feedforward_output_dense.bias.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._feedforward_layer_norm.gamma.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_output_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer(
f"transformer_layer_{i}"
)._feedforward_layer_norm.beta.assign(
weights[
f"encoder/layer_with_weights-{i + 4}/_output_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE"
]
)
model.get_layer("pooled_dense").kernel.assign(
weights["next_sentence..pooler_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE"]
)
model.get_layer("pooled_dense").bias.assign(
weights["next_sentence..pooler_dense/bias/.ATTRIBUTES/VARIABLE_VALUE"]
)
pass<jupyter_output><empty_output><jupyter_text>Compare Output<jupyter_code>def preprocess(x):
tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
vocabulary=vocab_path,
)
packer = keras_nlp.layers.MultiSegmentPacker(
sequence_length=model.max_sequence_length,
start_value=tokenizer.token_to_id("[CLS]"),
end_value=tokenizer.token_to_id("[SEP]"),
)
return packer(tokenizer(x))
token_ids, segment_ids = preprocess(["the quick brown fox."])
encoder_config = tfm.nlp.encoders.EncoderConfig(
type="bert",
bert=json.load(tf.io.gfile.GFile(config_path)),
)
mg_model = tfm.nlp.encoders.build_encoder(encoder_config)
checkpoint = tf.train.Checkpoint(encoder=mg_model)
checkpoint.read(checkpoint_path).assert_consumed()
keras_nlp_output = model(
{
"token_ids": token_ids,
"segment_ids": segment_ids,
"padding_mask": token_ids != 0,
}
)["pooled_output"]
mg_output = mg_model(
{
"input_word_ids": token_ids,
"input_type_ids": segment_ids,
"padding_mask": token_ids != 0,
}
)["pooled_output"]
keras_nlp_output[0, 0:10]
mg_output[0, 0:10]
# Very close! Though not 100% exact.
tf.reduce_mean(keras_nlp_output - mg_output)
# Save BertBase checkpoint
model.save_weights(f"""{MODEL_NAME}.h5""")
model2 = keras_nlp.models.BertBase(vocabulary_size=VOCAB_SIZE)
model2.load_weights(f"""{MODEL_NAME}.h5""")
# Same output from loaded checkpoint
keras_nlp_output2 = model2(
{
"token_ids": token_ids,
"segment_ids": segment_ids,
"padding_mask": token_ids != 0,
}
)["pooled_output"]
tf.reduce_mean(keras_nlp_output - keras_nlp_output2)
# Save vocab file as well
vocab_info = tf.io.gfile.GFile(vocab_path).read()
f = open("vocab.txt", "w")
f.write(vocab_info)
# Get MD5 of model
!md5sum """{MODEL_NAME}.h5"""
# Upload model to drive
# from google.colab import drive
# drive.mount('/content/drive')
# Check uploaded model once added to repo
model_cloud = keras_nlp.models.BertBase(weights=MODEL_NAME)
# Same output from cloud model
keras_nlp_output_cloud = model_cloud(
{
"token_ids": token_ids,
"segment_ids": segment_ids,
"padding_mask": token_ids != 0,
}
)["pooled_output"]
tf.reduce_mean(keras_nlp_output - keras_nlp_output_cloud)
keras_nlp_output_cloud[0, 0:10]<jupyter_output><empty_output> | keras-nlp/tools/checkpoint_conversion/bert_base_uncased.ipynb/0 | {
"file_path": "keras-nlp/tools/checkpoint_conversion/bert_base_uncased.ipynb",
"repo_id": "keras-nlp",
"token_count": 5039
} | 144 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
# Set the backend before importing any Keras-based libraries so it takes effect.
os.environ["KERAS_BACKEND"] = "torch"
import torch
from transformers import AutoModel
from keras_nlp.models.llama.llama_backbone import LlamaBackbone
# from huggingface_hub import login
# Llama weights currently require requesting access on Hugging Face.
# login(token='<your_huggingface_token>')
PRESET_NAME = "Llama-2-7b-hf"
PRESET = "meta-llama/Llama-2-7b-hf"
EXTRACT_DIR = "./{}"
extract_dir = EXTRACT_DIR.format(PRESET_NAME)
if not os.path.exists(extract_dir):
os.makedirs(extract_dir)
hf_model = AutoModel.from_pretrained(PRESET, use_auth_token=True)
hf_config = hf_model.config.to_dict()
hf_model.eval()
hf_wts = hf_model.state_dict()
cfg = {}
cfg["vocabulary_size"] = hf_config["vocab_size"]
cfg["num_layers"] = hf_config["num_hidden_layers"]
cfg["num_heads"] = hf_config["num_attention_heads"]
cfg["hidden_dim"] = hf_config["hidden_size"]
cfg["intermediate_dim"] = hf_config["intermediate_size"]
cfg["max_sequence_length"] = hf_config["max_position_embeddings"]
cfg["rope_scaling_type"] = hf_config["rope_scaling"]
cfg["layer_norm_epsilon"] = hf_config["rms_norm_eps"]
cfg["num_key_value_heads"] = hf_config["num_key_value_heads"]
keras_model = LlamaBackbone(**cfg)
keras_model.get_layer("token_embedding").embeddings.assign(
hf_wts["embed_tokens.weight"]
)
for ilayer in range(cfg["num_layers"]):
# attention layer
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._self_attention_layer._query_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.self_attn.q_proj.weight"]
.numpy()
.T.reshape((cfg["hidden_dim"], cfg["num_heads"], -1))
)
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._self_attention_layer._key_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.self_attn.k_proj.weight"]
.numpy()
.T.reshape((cfg["hidden_dim"], cfg["num_key_value_heads"], -1))
)
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._self_attention_layer._value_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.self_attn.v_proj.weight"]
.numpy()
.T.reshape((cfg["hidden_dim"], cfg["num_key_value_heads"], -1))
)
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._self_attention_layer._output_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.self_attn.o_proj.weight"].numpy().T
)
# MLP
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._feedforward_intermediate_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.mlp.up_proj.weight"].numpy().T
)
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._feedforward_gate_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.mlp.gate_proj.weight"].numpy().T
)
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._feedforward_output_dense.kernel.assign(
hf_wts[f"layers.{ilayer}.mlp.down_proj.weight"].numpy().T
)
# LAYERNORM
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._self_attention_layernorm.weight.assign(
hf_wts[f"layers.{ilayer}.input_layernorm.weight"]
)
keras_model.get_layer(
f"transformer_layer_{ilayer}"
)._feedforward_layernorm.weight.assign(
hf_wts[f"layers.{ilayer}.post_attention_layernorm.weight"]
)
keras_model.get_layer("layer_norm").gamma.assign(hf_wts["norm.weight"])
token_ids = [1, 2181, 8522, 338]
padding_mask = [1, 1, 1, 1]
keras_inputs = {
"token_ids": torch.tensor([token_ids]),
"padding_mask": torch.tensor([padding_mask]),
}
with torch.no_grad():
keras_outputs = keras_model(keras_inputs)
print("Keras output = ", keras_outputs.numpy())
| keras-nlp/tools/checkpoint_conversion/convert_llama_checkpoints.py/0 | {
"file_path": "keras-nlp/tools/checkpoint_conversion/convert_llama_checkpoints.py",
"repo_id": "keras-nlp",
"token_count": 1841
} | 145 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
import keras_nlp
# List directories of parsed Wikipedia articles and vocab sizes
directories = [
"eswiki_parsed",
"frwiki_parsed",
"hiwiki_parsed",
"arwiki_parsed",
"ruwiki_parsed",
"bnwiki_parsed",
"idwiki_parsed",
"ptwiki_parsed",
]
vocab_sizes = [20000, 50000]
identifier = "v1"
# Runs the computation
for directory in directories:
for vocab_size in vocab_sizes:
print(f"Running directory {directory} with vocab size {vocab_size}")
files = []
for folder in os.listdir(directory):
path = os.path.join(directory, folder)
for file in os.listdir(path):
if file[0] != ".":
files.append(os.path.join(path, file))
if os.path.exists(f"{directory}_{vocab_size}_{identifier}.txt"):
raise ValueError("already done.")
start = time.time()
keras_nlp.tokenizers.compute_word_piece_vocabulary(
files,
vocabulary_size=vocab_size,
lowercase=False,
strip_accents=False,
vocabulary_output_file=f"{directory}_{vocab_size}_{identifier}.txt",
)
end = time.time()
print("Time taken:", end - start)
| keras-nlp/tools/pretrained_tokenizers/word_piece_training_script.py/0 | {
"file_path": "keras-nlp/tools/pretrained_tokenizers/word_piece_training_script.py",
"repo_id": "keras-nlp",
"token_count": 737
} | 146 |
# Keras Preprocessing
โ ๏ธ This GitHub repository is now deprecated -- all Keras Preprocessing symbols have
moved into the core Keras [repository](https://github.com/keras-team/keras)
and the TensorFlow [`pip` package](https://www.tensorflow.org/install). All code
changes and discussion should move to the Keras repository.
For users looking for a place to start preprocessing data, consult the
[preprocessing layers guide](https://keras.io/guides/preprocessing_layers/)
and refer to the [data loading utilities API](https://keras.io/api/data_loading/).
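For reference, here is a minimal sketch of the replacement workflow using the
built-in Keras utilities and preprocessing layers (the directory path and
parameter values below are illustrative):
```python
import tensorflow as tf
# Replaces `ImageDataGenerator.flow_from_directory`.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "path/to/images",  # illustrative path
    image_size=(180, 180),
    batch_size=32,
)
# Rescaling and augmentation are now ordinary layers.
augmenter = tf.keras.Sequential(
    [
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
    ]
)
train_ds = train_ds.map(lambda x, y: (augmenter(x, training=True), y))
```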
| keras-preprocessing/README.md/0 | {
"file_path": "keras-preprocessing/README.md",
"repo_id": "keras-preprocessing",
"token_count": 156
} | 147 |
import os
import shutil
import tempfile
import numpy as np
import pytest
from PIL import Image
from keras_preprocessing.image import image_data_generator
@pytest.fixture(scope='module')
def all_test_images():
img_w = img_h = 20
rgb_images = []
rgba_images = []
gray_images = []
gray_images_16bit = []
gray_images_32bit = []
for n in range(8):
bias = np.random.rand(img_w, img_h, 1) * 64
variance = np.random.rand(img_w, img_h, 1) * (255 - 64)
# RGB
imarray = np.random.rand(img_w, img_h, 3) * variance + bias
im = Image.fromarray(imarray.astype('uint8')).convert('RGB')
rgb_images.append(im)
# RGBA
imarray = np.random.rand(img_w, img_h, 4) * variance + bias
im = Image.fromarray(imarray.astype('uint8')).convert('RGBA')
rgba_images.append(im)
# 8-bit grayscale
imarray = np.random.rand(img_w, img_h, 1) * variance + bias
im = Image.fromarray(imarray.astype('uint8').squeeze()).convert('L')
gray_images.append(im)
# 16-bit grayscale
imarray = np.array(
np.random.randint(-2147483648, 2147483647, (img_w, img_h))
)
im = Image.fromarray(imarray.astype('uint16'))
gray_images_16bit.append(im)
# 32-bit grayscale
im = Image.fromarray(imarray.astype('uint32'))
gray_images_32bit.append(im)
return [rgb_images, rgba_images,
gray_images, gray_images_16bit, gray_images_32bit]
def test_directory_iterator(all_test_images, tmpdir):
num_classes = 2
# create folders and subfolders
paths = []
for cl in range(num_classes):
class_directory = 'class-{}'.format(cl)
classpaths = [
class_directory,
os.path.join(class_directory, 'subfolder-1'),
os.path.join(class_directory, 'subfolder-2'),
os.path.join(class_directory, 'subfolder-1', 'sub-subfolder')
]
for path in classpaths:
tmpdir.join(path).mkdir()
paths.append(classpaths)
# save the images in the paths
count = 0
filenames = []
for test_images in all_test_images:
for im in test_images:
# rotate image class
im_class = count % num_classes
# rotate subfolders
classpaths = paths[im_class]
filename = os.path.join(
classpaths[count % len(classpaths)],
'image-{}.png'.format(count))
filenames.append(filename)
im.save(str(tmpdir / filename))
count += 1
# create iterator
generator = image_data_generator.ImageDataGenerator()
dir_iterator = generator.flow_from_directory(str(tmpdir))
# check number of classes and images
assert len(dir_iterator.class_indices) == num_classes
assert len(dir_iterator.classes) == count
assert set(dir_iterator.filenames) == set(filenames)
# Test invalid use cases
with pytest.raises(ValueError):
generator.flow_from_directory(str(tmpdir), color_mode='cmyk')
with pytest.raises(ValueError):
generator.flow_from_directory(str(tmpdir), class_mode='output')
def preprocessing_function(x):
"""This will fail if not provided by a Numpy array.
Note: This is made to enforce backward compatibility.
"""
assert x.shape == (26, 26, 3)
assert type(x) is np.ndarray
return np.zeros_like(x)
# Test usage as Sequence
generator = image_data_generator.ImageDataGenerator(
preprocessing_function=preprocessing_function)
dir_seq = generator.flow_from_directory(str(tmpdir),
target_size=(26, 26),
color_mode='rgb',
batch_size=3,
class_mode='categorical')
assert len(dir_seq) == np.ceil(count / 3.)
x1, y1 = dir_seq[1]
assert x1.shape == (3, 26, 26, 3)
assert y1.shape == (3, num_classes)
x1, y1 = dir_seq[5]
assert (x1 == 0).all()
with pytest.raises(ValueError):
x1, y1 = dir_seq[14] # there are 40 images and batch size is 3
def test_directory_iterator_class_mode_input(all_test_images, tmpdir):
tmpdir.join('class-1').mkdir()
# save the images in the paths
count = 0
for test_images in all_test_images:
for im in test_images:
filename = str(
tmpdir / 'class-1' / 'image-{}.png'.format(count))
im.save(filename)
count += 1
# create iterator
generator = image_data_generator.ImageDataGenerator()
dir_iterator = generator.flow_from_directory(str(tmpdir),
class_mode='input')
batch = next(dir_iterator)
# check if input and output have the same shape
assert(batch[0].shape == batch[1].shape)
# check if the input and output images are not the same numpy array
input_img = batch[0][0]
output_img = batch[1][0]
output_img[0][0][0] += 1
assert(input_img[0][0][0] != output_img[0][0][0])
@pytest.mark.parametrize('validation_split,num_training', [
(0.25, 30),
(0.50, 20),
(0.75, 10),
])
def test_directory_iterator_with_validation_split(all_test_images,
validation_split,
num_training):
num_classes = 2
tmp_folder = tempfile.mkdtemp(prefix='test_images')
# create folders and subfolders
paths = []
for cl in range(num_classes):
class_directory = 'class-{}'.format(cl)
classpaths = [
class_directory,
os.path.join(class_directory, 'subfolder-1'),
os.path.join(class_directory, 'subfolder-2'),
os.path.join(class_directory, 'subfolder-1', 'sub-subfolder')
]
for path in classpaths:
os.mkdir(os.path.join(tmp_folder, path))
paths.append(classpaths)
# save the images in the paths
count = 0
filenames = []
for test_images in all_test_images:
for im in test_images:
# rotate image class
im_class = count % num_classes
# rotate subfolders
classpaths = paths[im_class]
filename = os.path.join(
classpaths[count % len(classpaths)],
'image-{}.png'.format(count))
filenames.append(filename)
im.save(os.path.join(tmp_folder, filename))
count += 1
# create iterator
generator = image_data_generator.ImageDataGenerator(
validation_split=validation_split
)
with pytest.raises(ValueError):
generator.flow_from_directory(tmp_folder, subset='foo')
train_iterator = generator.flow_from_directory(tmp_folder,
subset='training')
assert train_iterator.samples == num_training
valid_iterator = generator.flow_from_directory(tmp_folder,
subset='validation')
assert valid_iterator.samples == count - num_training
# check number of classes and images
assert len(train_iterator.class_indices) == num_classes
assert len(train_iterator.classes) == num_training
assert len(set(train_iterator.filenames) &
set(filenames)) == num_training
shutil.rmtree(tmp_folder)
if __name__ == '__main__':
pytest.main([__file__])
| keras-preprocessing/tests/image/directory_iterator_test.py/0 | {
"file_path": "keras-preprocessing/tests/image/directory_iterator_test.py",
"repo_id": "keras-preprocessing",
"token_count": 3508
} | 148 |
# Release v1.4.7
## Bug fixes
## New features
# Release v1.4.6
## Bug fixes
* When running in parallel, the chief may exit before some client asks for
another trial, which informs the client to exit. Now, it is fixed.
## New features
* Updated the dependency from `keras-core` to `keras` version 3 and above. It
also supports `keras` version 2 for backward compatibility.
# Release v1.4.5
## Bug fixes
* When running in parallel, the client oracle used to wait forever when the
chief oracle was not responding. Now, it is fixed.
* When running in parallel, the client would call the chief after calling
`oracle.end_trial()`, when the chief had already ended. Now, it is fixed.
* When running in parallel, the chief used to start to block in
`tuner.__init__()`. However, it makes more sense to block when calling
`tuner.search()`. Now, it is fixed.
* Could not do `from keras_tuner.engine.hypermodel import HyperModel`. It is now
fixed.
* Could not do `from keras_tuner.engine.hyperparameters import HyperParameters`.
It is now fixed.
* Could not do `from keras_tuner.engine.metrics_tracking import
infer_metric_direction`. It is now fixed.
* Could not do `from keras_tuner.engine.oracle import Objective`. It is now
fixed.
* Could not do `from keras_tuner.engine.oracle import Oracle`. It is now fixed.
# Release v1.4.4
## Bug fixes
* Could not do `from keras_tuner.engine.hyperparameters import serialize`. It is
now fixed.
* Could not do `from keras_tuner.engine.hyperparameters import deserialize`. It
is now fixed.
* Could not do `from keras_tuner.engine.tuner import maybe_distribute`. It is
now fixed.
# Release v1.4.3
## Bug fixes
* Could not do `from keras_tuner.engine.tuner import Tuner`. It is now fixed.
* When the TensorFlow version is low, it would error out saying Keras models
have no attribute called `get_build_config`. It is now fixed.
# Release v1.4.2
## Bug fixes
* Could not do `from keras_tuner.engine import trial`. It is now fixed.
# Release v1.4.1
## Bug fixes
* Could not do `from keras_tuner.engine import base_tuner`. It is now fixed.
# Release v1.4.0
## Breaking changes
* All private APIs are hidden under `keras_tuner.src.*`. For example, if you use
`keras_tuner.some_private_api`, it will now be
`keras_tuner.src.some_private_api`.
## New features
* Support Keras Core with multi-backend.
# Release v1.3.5
## Breaking changes
* Removed TensorFlow from the required dependencies of KerasTuner. The user
needs to install TensorFlow either separately with KerasTuner or with
`pip install keras_tuner[tensorflow]`. This change is because some people may
want to use KerasTuner with `tensorflow-cpu` instead of `tensorflow`.
## Bug fixes
* KerasTuner used to require the protobuf version to be under 3.20. The limit
is removed. Now, it supports both protobuf 3 and 4.
# Release v1.3.4
## Bug fixes
* If you have a protobuf version > 3.20, it would throw an error when importing
KerasTuner. It is now fixed.
# Release v1.3.3
## Bug fixes
* KerasTuner would install protobuf 3.19 with `protobuf<=3.20`. We want to
install `3.20.3`, so we changed it to `protobuf<=3.20.3`. It is now fixed.
# Release v1.3.2
## Bug fixes
* It used to install protobuf 4.22.1 if installed with TensorFlow 2.12, which is
not compatible with KerasTuner. We limited the version to <=3.20. Now it is
fixed.
# Release v1.3.1
## Bug fixes
* The `Tuner.results_summary()` did not print error messages for failed trials
and did not display `Objective` information correctly. It is now fixed.
* The `BayesianOptimization` would break when not specifying the
`num_initial_points` and overriding `.run_trial()`. It is now fixed.
* TensorFlow 2.12 would break because of the different protobuf version. It is now
fixed.
# Release v1.3.0
## Breaking changes
* Removed `Logger` and `CloudLogger` and the related arguments in
`BaseTuner.__init__(logger=...)`.
* Removed `keras_tuner.oracles.BayesianOptimization`,
`keras_tuner.oracles.Hyperband`, `keras_tuner.oracles.RandomSearch`, which
were actually `Oracle`s instead of `Tuner`s. Please
use`keras_tuner.oracles.BayesianOptimizationOracle`,
`keras_tuner.oracles.HyperbandOracle`,
`keras_tuner.oracles.RandomSearchOracle` instead.
* Removed `keras_tuner.Sklearn`. Please use `keras_tuner.SklearnTuner` instead.
## New features
* `keras_tuner.oracles.GridSearchOracle` is now available as a standalone
`Oracle` to be used with custom tuners.
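A minimal sketch of instantiating the standalone oracle (the constructor
arguments are assumed to mirror the other built-in oracles):
```python
import keras_tuner
oracle = keras_tuner.oracles.GridSearchOracle(
    objective="val_loss",
    max_trials=20,
)
```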
# Release v1.2.1
## Bug fixes
* The resume feature (`overwrite=False`) would crash in 1.2.0. This is now fixed.
# Release v1.2.0
## Breaking changes
* If you implemented your own `Tuner`, the old use case of reporting results
with `Oracle.update_trial()` in `Tuner.run_trial()` is deprecated. Please
return the metrics in `Tuner.run_trial()` instead.
* If you implemented your own `Oracle` and overrode `Oracle.end_trial()`, you
need to change the signature of the function from
`Oracle.end_trial(trial.trial_id, trial.status)` to `Oracle.end_trial(trial)`.
* The default value of the `step` argument in
`keras_tuner.HyperParameters.Int()` is changed to `None`, which was `1` before.
No change in default behavior.
* The default value of the `sampling` argument in
`keras_tuner.HyperParameters.Int()` is changed to `"linear"`, which was `None`
before. No change in default behavior.
* The default value of the `sampling` argument in
`keras_tuner.HyperParameters.Float()` is changed to `"linear"`, which was
`None` before. No change in default behavior.
* If you explicitly rely on protobuf values, the new protobuf bug fix may affect
you.
* Changed the mechanism of how a random sample is drawn for a hyperparameter.
They now all start from a random value between 0 and 1, and convert the value
to a random sample.
## New features
* A new tuner is added, `keras_tuner.GridSearch`, which can exhaust all the
possible hyperparameter combinations.
* Better fault tolerance during the search. Added two new arguments to `Tuner`
and `Oracle` initializers, `max_retries_per_trial` and
`max_consecutive_failed_trials`.
* You can now mark a `Trial` as failed by
`raise keras_tuner.FailedTrialError("error message.")` in
`HyperModel.build()`, `HyperModel.fit()`, or your model build function (see the
sketch after this list).
* Provides better error messages for invalid configs for `Int` and `Float` type
hyperparameters.
* A decorator `@keras_tuner.synchronized` is added to decorate the methods in
`Oracle` and its subclasses to synchronize the concurrent calls to ensure
thread safety in parallel tuning.
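A minimal sketch of marking a trial as failed (the model and the skip condition
below are illustrative):
```python
import keras
import keras_tuner
class MyHyperModel(keras_tuner.HyperModel):
    def build(self, hp):
        units = hp.Int("units", 32, 512)
        if units > 256:  # hypothetical constraint, for illustration only
            raise keras_tuner.FailedTrialError("Skipping oversized model.")
        model = keras.Sequential(
            [keras.layers.Dense(units, activation="relu"), keras.layers.Dense(1)]
        )
        model.compile(optimizer="adam", loss="mse")
        return model
```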
## Bug fixes
* Protobuf was not converting Boolean type hyperparameter correctly. This is now
fixed.
* Hyperband was not loading the weights correctly for half-trained models. This
is now fixed.
* `KeyError` may occur if using `hp.conditional_scope()`, or the `parent`
argument for hyperparameters. This is now fixed.
* `num_initial_points` of the `BayesianOptimization` should defaults to `3 *
dimension`, but it defaults to 2. This is now fixed.
* It would throw an error when using a concrete Keras optimizer object to
override the `HyperModel` compile arg. This is now fixed.
* Workers might crash due to `Oracle` reloading when running in parallel. This
is now fixed.
| keras-tuner/RELEASE.md/0 | {
"file_path": "keras-tuner/RELEASE.md",
"repo_id": "keras-tuner",
"token_count": 2231
} | 149 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.backend import keras
from keras_tuner.backend import ops
from keras_tuner.backend import random
from keras_tuner.backend.keras import layers
from keras_tuner.engine import hypermodel
# dict of functions that create layers for transforms.
# Each function takes a factor (0 to 1) for the strength
# of the transform.
TRANSFORMS = {
"translate_x": lambda x: layers.RandomTranslation(x, 0),
"translate_y": lambda y: layers.RandomTranslation(0, y),
"rotate": layers.RandomRotation,
"contrast": layers.RandomContrast,
}
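# For example, TRANSFORMS["rotate"](0.2) builds a `RandomRotation(0.2)` layer.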
@keras.saving.register_keras_serializable(package="keras_tuner")
class RandAugment(keras.layers.Layer):
"""A single RandAugment layer."""
def __init__(self, layers, factors):
super().__init__()
self.layers = layers
self.factors = factors
def call(self, inputs):
x = inputs
batch_size = ops.shape(x)[0]
# selection tensor determines operation for each sample.
selection = ops.cast(
random.uniform((batch_size, 1, 1, 1), maxval=len(self.layers)),
dtype="int32",
)
for i, layer in enumerate(self.layers):
factor = self.factors[i]
if factor == 0:
continue
transform_layer = TRANSFORMS[layer](factor)
x_trans = transform_layer(x)
# For each sample, apply the transform if and only if
# selection matches the transform index `i`
x = ops.where(ops.equal(i, selection), x_trans, x)
return x
def compute_output_shape(self, input_shape):
return input_shape
@keras_tuner_export("keras_tuner.applications.HyperImageAugment")
class HyperImageAugment(hypermodel.HyperModel):
"""A image augmentation hypermodel.
The `HyperImageAugment` class searches for the best combination of image
augmentation operations in Keras preprocessing layers. The input shape of
the model should be (height, width, channels). The output of the model is
of the same shape as the input.
Args:
input_shape: Optional shape tuple, e.g. `(256, 256, 3)`.
input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
rotate: A number between [0, 1], a list of two numbers between [0, 1]
or None. Configures the search space of the factor of random
rotation transform in the augmentation. A factor is chosen for each
trial. It sets maximum of clockwise and counterclockwise rotation
in terms of fraction of pi, among all samples in the trial.
Default is 0.5. When `rotate` is a single number, the search range
is [0, `rotate`].
The transform is off when set to None.
translate_x: A number between [0, 1], a list of two numbers between
[0, 1] or None. Configures the search space of the factor of random
horizontal translation transform in the augmentation. A factor is
chosen for each trial. It sets maximum of horizontal translation in
terms of ratio over the width among all samples in the trial.
Default is 0.4. When `translate_x` is a single number, the search
range is [0, `translate_x`].
The transform is off when set to None.
translate_y: A number between [0, 1], a list of two numbers between
[0, 1] or None. Configures the search space of the factor of random
vertical translation transform in the augmentation. A factor is
chosen for each trial. It sets maximum of vertical translation in
terms of ratio over the height among all samples in the trial.
            Default is 0.4. When `translate_y` is a single number, the search
range is [0, `translate_y`]. The transform is off when set to None.
contrast: A number between [0, 1], a list of two numbers between [0, 1]
or None. Configures the search space of the factor of random
contrast transform in the augmentation. A factor is chosen for each
trial. It sets maximum ratio of contrast change among all samples in
the trial. Default is 0.3. When `contrast` is a single number, the
search rnage is [0, `contrast`].
The transform is off when set to None.
augment_layers: None, int or list of two ints, controlling the number
of augment applied. Default is 3.
When `augment_layers` is 0, all transform are applied sequentially.
When `augment_layers` is nonzero, or a list of two ints, a simple
version of RandAugment(https://arxiv.org/abs/1909.13719) is used.
A search space for 'augment_layers' is created to search [0,
`augment_layers`], or between the two ints if a `augment_layers` is
a list. For each trial, the hyperparameter 'augment_layers'
            determines how many layers of augment transforms are applied,
each randomly picked from all available transform types with equal
probability on each sample.
**kwargs: Additional keyword arguments that apply to all hypermodels.
See `keras_tuner.HyperModel`.
Example:
```python
hm_aug = HyperImageAugment(input_shape=(32, 32, 3),
augment_layers=0,
rotate=[0.2, 0.3],
translate_x=0.1,
translate_y=None,
contrast=None)
```
Then the hypermodel `hm_aug` will search 'factor_rotate' between [0.2, 0.3]
and 'factor_translate_x' between [0, 0.1]. These two augments are applied
    on all samples with the factor picked per trial.
```python
hm_aug = HyperImageAugment(input_shape=(32, 32, 3),
translate_x=0.5,
                               translate_y=[0.2, 0.4],
contrast=None)
```
    Then the hypermodel `hm_aug` will search 'factor_rotate' between [0, 0.5],
'factor_translate_x' between [0, 0.5], 'factor_translate_y' between
[0.2, 0.4]. It will use RandAugment, searching 'augment_layers'
between [0, 3]. Each layer on each sample will be chosen from rotate,
translate_x and translate_y.
"""
def __init__(
self,
input_shape=None,
input_tensor=None,
rotate=0.5,
translate_x=0.4,
translate_y=0.4,
contrast=0.3,
augment_layers=3,
**kwargs,
):
if input_shape is None and input_tensor is None:
raise ValueError(
"You must specify either `input_shape` or `input_tensor`."
)
self.transforms = []
self._register_transform("rotate", rotate)
self._register_transform("translate_x", translate_x)
self._register_transform("translate_y", translate_y)
self._register_transform("contrast", contrast)
self.input_shape = input_shape
self.input_tensor = input_tensor
if augment_layers:
self.model_name = "image_rand_augment"
try:
augment_layers_min = augment_layers[0]
augment_layers_max = augment_layers[1]
except TypeError:
augment_layers_min = 0
augment_layers_max = augment_layers
if not (
isinstance(augment_layers_min, int)
and isinstance(augment_layers_max, int)
):
raise ValueError(
"Keyword argument `augment_layers` must be int, "
f"but received {augment_layers}."
)
self.augment_layers_min = augment_layers_min
self.augment_layers_max = augment_layers_max
else:
            # Separately tune and apply all augment transforms if
            # `augment_layers` is set to 0.
self.model_name = "image_augment"
super().__init__(**kwargs)
def build(self, hp):
if self.input_tensor is not None:
inputs = keras.utils.get_source_inputs(self.input_tensor)
x = self.input_tensor
else:
inputs = layers.Input(shape=self.input_shape)
x = inputs
if self.model_name == "image_rand_augment":
x = self._build_randaug_layers(x, hp)
else:
x = self._build_fixedaug_layers(x, hp)
model = keras.Model(inputs, x, name=self.model_name)
return model
def _build_randaug_layers(self, inputs, hp):
augment_layers = hp.Int(
"augment_layers",
self.augment_layers_min,
self.augment_layers_max,
default=self.augment_layers_min,
)
x = inputs
for _ in range(augment_layers):
factors = []
for i, (transform, (f_min, f_max)) in enumerate(self.transforms):
# Factor for each transform is determined per each trial.
factor = hp.Float(
f"factor_{transform}", f_min, f_max, default=f_min
)
factors.append(factor)
x = RandAugment([layer for layer, _ in self.transforms], factors)(x)
return x
def _build_fixedaug_layers(self, inputs, hp):
x = inputs
for transform, (factor_min, factor_max) in self.transforms:
transform_factor = hp.Float(
f"factor_{transform}",
factor_min,
factor_max,
step=0.05,
default=factor_min,
)
if transform_factor == 0:
continue
transform_layer = TRANSFORMS[transform](transform_factor)
x = transform_layer(x)
return x
def _register_transform(self, transform_name, transform_params):
"""Register a transform and format parameters for tuning the transform.
Args:
transform_name: A string, the name of the transform.
            transform_params: A number between [0, 1], a list of two numbers
between [0, 1] or None. If set to a single number x, the
corresponding transform factor will be between [0, x].
If set to a list of 2 numbers [x, y], the factor will be
between [x, y]. If set to None, the transform will be excluded.
"""
if not transform_params:
return
try:
transform_factor_min = transform_params[0]
transform_factor_max = transform_params[1]
if len(transform_params) > 2:
raise ValueError(
"Length of keyword argument "
f"{transform_name} must not exceed 2."
)
except TypeError:
transform_factor_min = 0
transform_factor_max = transform_params
if not (
isinstance(transform_factor_max, (int, float))
and isinstance(transform_factor_min, (int, float))
):
raise ValueError(
f"Keyword argument {transform_name} must be int "
f"or float, but received {transform_params}."
)
self.transforms.append(
(transform_name, (transform_factor_min, transform_factor_max))
)
| keras-tuner/keras_tuner/applications/augment.py/0 | {
"file_path": "keras-tuner/keras_tuner/applications/augment.py",
"repo_id": "keras-tuner",
"token_count": 5296
} | 150 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_tuner.engine import hyperparameters as hp_module
def test_boolean():
# Test default default
boolean = hp_module.Boolean("bool")
assert boolean.default is False
# Test default setting
boolean = hp_module.Boolean("bool", default=True)
assert boolean.default is True
# Wrong default type
with pytest.raises(ValueError, match="must be a Python boolean"):
hp_module.Boolean("bool", default=None)
# Test serialization
boolean = hp_module.Boolean("bool", default=True)
boolean = hp_module.Boolean.from_config(boolean.get_config())
assert boolean.default is True
assert boolean.name == "bool"
# Test random_sample
assert boolean.random_sample() in {True, False}
assert boolean.random_sample(123) == boolean.random_sample(123)
assert {boolean.value_to_prob(True), boolean.value_to_prob(False)} == {
0.25,
0.75,
}
def test_boolean_repr():
assert repr(hp_module.Boolean("bool")) == repr(hp_module.Boolean("bool"))
def test_boolean_values_property():
assert list(hp_module.Boolean("bool").values) == [True, False]
| keras-tuner/keras_tuner/engine/hyperparameters/hp_types/boolean_hp_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/hyperparameters/hp_types/boolean_hp_test.py",
"repo_id": "keras-tuner",
"token_count": 561
} | 151 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import numpy as np
import six
from keras_tuner import protos
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.backend import keras
class MetricObservation:
"""Metric value at a given step of training across multiple executions.
If the model is trained multiple
times (multiple executions), KerasTuner records the value of each
metric at each training step. These values are aggregated
over multiple executions into a list where each value corresponds
to one execution.
Args:
value: Float or a list of floats. The evaluated metric values.
step: Int. The step of the evaluation, for example, the epoch number.
"""
def __init__(self, value, step):
if not isinstance(value, list):
value = [value]
self.value = value
self.step = step
def append(self, value):
if not isinstance(value, list):
value = [value]
self.value += value
def mean(self):
return np.mean(self.value)
def get_config(self):
return {"value": self.value, "step": self.step}
@classmethod
def from_config(cls, config):
return cls(**config)
def __eq__(self, other):
return (
other.value == self.value and other.step == self.step
if isinstance(other, MetricObservation)
else False
)
def __repr__(self):
return f"MetricObservation(value={self.value}, step={self.step})"
def to_proto(self):
return protos.get_proto().MetricObservation(
value=self.value, step=self.step
)
@classmethod
def from_proto(cls, proto):
return cls(value=list(proto.value), step=proto.step)
class MetricHistory:
"""Record of multiple executions of a single metric.
It contains a collection of `MetricObservation` instances.
Args:
direction: String. The direction of the metric to optimize. The value
should be "min" or "max".
"""
def __init__(self, direction="min"):
if direction not in {"min", "max"}:
raise ValueError(
"`direction` should be one of "
'{"min", "max"}, but got: %s' % (direction,)
)
self.direction = direction
# Mapping step to `MetricObservation`.
self._observations = {}
def update(self, value, step):
if step in self._observations:
self._observations[step].append(value)
else:
self._observations[step] = MetricObservation(value, step=step)
def get_best_value(self):
values = [obs.mean() for obs in self._observations.values()]
if not values:
return None
return (
np.nanmin(values) if self.direction == "min" else np.nanmax(values)
)
def get_best_step(self):
best_value = self.get_best_value()
if best_value is None:
return None
for obs in self._observations.values():
if obs.mean() == best_value:
return obs.step
def get_history(self):
return sorted(self._observations.values(), key=lambda obs: obs.step)
def set_history(self, observations):
for obs in observations:
self.update(obs.value, step=obs.step)
def get_statistics(self):
history = self.get_history()
history_values = [obs.mean() for obs in history]
return (
{
"min": float(np.nanmin(history_values)),
"max": float(np.nanmax(history_values)),
"mean": float(np.nanmean(history_values)),
"median": float(np.nanmedian(history_values)),
"var": float(np.nanvar(history_values)),
"std": float(np.nanstd(history_values)),
}
if len(history_values)
else {}
)
def get_last_value(self):
history = self.get_history()
if history:
last_obs = history[-1]
return last_obs.mean()
else:
return None
def get_config(self):
config = {
"direction": self.direction,
"observations": [obs.get_config() for obs in self.get_history()],
}
return config
@classmethod
def from_config(cls, config):
instance = cls(config["direction"])
instance.set_history(
[
MetricObservation.from_config(obs)
for obs in config["observations"]
]
)
return instance
def to_proto(self):
return protos.get_proto().MetricHistory(
observations=[obs.to_proto() for obs in self.get_history()],
maximize=self.direction == "max",
)
@classmethod
def from_proto(cls, proto):
direction = "max" if proto.maximize else "min"
instance = cls(direction)
instance.set_history(
[MetricObservation.from_proto(p) for p in proto.observations]
)
return instance
class MetricsTracker:
"""Record of the values of multiple executions of all metrics.
It contains `MetricHistory` instances for the metrics.
Args:
metrics: List of strings of the names of the metrics.
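    Example:
    ```python
    # A minimal sketch with illustrative values.
    tracker = MetricsTracker()
    tracker.update("val_loss", 0.5, step=0)
    tracker.update("val_loss", 0.3, step=1)
    assert tracker.get_best_value("val_loss") == 0.3
    assert tracker.get_direction("val_loss") == "min"
    ```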
"""
def __init__(self, metrics=None):
# str -> MetricHistory
self.metrics = {}
self.register_metrics(metrics)
def exists(self, name):
return name in self.metrics
def register_metrics(self, metrics=None):
metrics = metrics or []
for metric in metrics:
self.register(metric.name)
def register(self, name, direction=None):
if self.exists(name):
raise ValueError(f"Metric already exists: {name}")
if direction is None:
direction = infer_metric_direction(name)
if direction is None:
# Objective direction is handled separately, but
# non-objective direction defaults to min.
direction = "min"
self.metrics[name] = MetricHistory(direction)
def update(self, name, value, step=0):
value = float(value)
if not self.exists(name):
self.register(name)
prev_best = self.metrics[name].get_best_value()
self.metrics[name].update(value, step=step)
new_best = self.metrics[name].get_best_value()
improved = new_best != prev_best
return improved
def get_history(self, name):
self._assert_exists(name)
return self.metrics[name].get_history()
def set_history(self, name, observations):
if not self.exists(name):
self.register(name)
self.metrics[name].set_history(observations)
def get_best_value(self, name):
self._assert_exists(name)
return self.metrics[name].get_best_value()
def get_best_step(self, name):
self._assert_exists(name)
return self.metrics[name].get_best_step()
def get_statistics(self, name):
self._assert_exists(name)
return self.metrics[name].get_statistics()
def get_last_value(self, name):
self._assert_exists(name)
return self.metrics[name].get_last_value()
def get_direction(self, name):
self._assert_exists(name)
return self.metrics[name].direction
def get_config(self):
return {
"metrics": {
name: metric_history.get_config()
for name, metric_history in self.metrics.items()
}
}
@classmethod
def from_config(cls, config):
instance = cls()
instance.metrics = {
name: MetricHistory.from_config(metric_history)
for name, metric_history in config["metrics"].items()
}
return instance
def to_proto(self):
return protos.get_proto().MetricsTracker(
metrics={
name: metric_history.to_proto()
for name, metric_history in self.metrics.items()
}
)
@classmethod
def from_proto(cls, proto):
instance = cls()
instance.metrics = {
name: MetricHistory.from_proto(metric_history)
for name, metric_history in proto.metrics.items()
}
return instance
def _assert_exists(self, name):
if name not in self.metrics:
raise ValueError(f"Unknown metric: {name}")
_MAX_METRICS = (
"Accuracy",
"BinaryAccuracy",
"CategoricalAccuracy",
"SparseCategoricalAccuracy",
"TopKCategoricalAccuracy",
"SparseTopKCategoricalAccuracy",
"TruePositives",
"TrueNegatives",
"Precision",
"Recall",
"AUC",
"SensitivityAtSpecificity",
"SpecificityAtSensitivity",
)
_MAX_METRIC_FNS = (
"accuracy",
"categorical_accuracy",
"binary_accuracy",
"sparse_categorical_accuracy",
)
@keras_tuner_export(
"keras_tuner.engine.metrics_tracking.infer_metric_direction",
)
def infer_metric_direction(metric):
# Handle str input and get canonical object.
if isinstance(metric, six.string_types):
metric_name = metric
if metric_name.startswith("val_"):
metric_name = metric_name.replace("val_", "", 1)
if metric_name.startswith("weighted_"):
metric_name = metric_name.replace("weighted_", "", 1)
# Special-cases (from `keras/engine/training_utils.py`)
if metric_name in {"loss", "crossentropy", "ce"}:
return "min"
elif metric_name == "acc":
return "max"
try:
if (
"use_legacy_format"
in inspect.getfullargspec(keras.metrics.deserialize).args
):
metric = keras.metrics.deserialize( # pragma: no cover
metric_name, use_legacy_format=True
)
else:
metric = keras.metrics.deserialize( # pragma: no cover
metric_name
)
except ValueError:
try:
if (
"use_legacy_format"
in inspect.getfullargspec(keras.losses.deserialize).args
):
metric = keras.losses.deserialize( # pragma: no cover
metric_name, use_legacy_format=True
)
else:
metric = keras.losses.deserialize( # pragma: no cover
metric_name
)
except Exception:
# Direction can't be inferred.
return None
# Metric class, Loss class, or function.
if isinstance(metric, (keras.metrics.Metric, keras.losses.Loss)):
name = metric.__class__.__name__
if name == "MeanMetricWrapper":
name = metric._fn.__name__ # pragma: no cover
elif isinstance(metric, str):
name = metric
else:
name = metric.__name__
if name in _MAX_METRICS or name in _MAX_METRIC_FNS:
return "max"
elif hasattr(keras.metrics, name) or hasattr(keras.losses, name):
return "min"
# Direction can't be inferred.
return None
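# --- Illustrative usage sketch (not part of the original module) ---
# A minimal, hedged example of how the tracking utilities above fit
# together; the metric names are arbitrary assumptions chosen to hit the
# special-cased directions ("loss" -> "min", "acc" -> "max").
def _example_metrics_tracking():  # pragma: no cover
    tracker = MetricsTracker()
    tracker.register("val_loss")  # direction inferred as "min"
    tracker.update("val_loss", 0.5, step=0)
    improved = tracker.update("val_loss", 0.3, step=1)  # new best -> True
    assert improved and tracker.get_best_value("val_loss") == 0.3
    assert infer_metric_direction("val_acc") == "max"
    return tracker.get_statistics("val_loss")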
| keras-tuner/keras_tuner/engine/metrics_tracking.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/metrics_tracking.py",
"repo_id": "keras-tuner",
"token_count": 5320
} | 152 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
def test_kerastuner_same_as_keras_tuner():
with pytest.deprecated_call():
import kerastuner
from kerastuner.tuners import RandomSearch # noqa: F401
from kerastuner.tuners import BayesianOptimization # noqa: F401
from kerastuner.tuners import Hyperband # noqa: F401
from kerastuner.engine.base_tuner import BaseTuner # noqa: F401
from kerastuner.engine.conditions import Condition # noqa: F401
from kerastuner.engine.hypermodel import HyperModel # noqa: F401
from kerastuner.engine.hyperparameters import ( # noqa: F401
HyperParameter,
)
from kerastuner.engine.hyperparameters import ( # noqa: F401
HyperParameters,
)
from kerastuner.engine.metrics_tracking import ( # noqa: F401
MetricObservation,
)
from kerastuner import Objective # noqa: F401
from kerastuner.engine.oracle import Oracle # noqa: F401
from kerastuner.engine.tuner import Tuner # noqa: F401
from kerastuner.engine.stateful import Stateful # noqa: F401
from kerastuner.engine.trial import Trial # noqa: F401
from kerastuner.engine.multi_execution_tuner import ( # noqa: F401
MultiExecutionTuner,
)
from kerastuner.applications import HyperResNet # noqa: F401
from kerastuner.applications import HyperXception # noqa: F401
import keras_tuner
attr1 = [attr for attr in dir(kerastuner) if not attr.startswith("__")]
attr2 = [attr for attr in dir(keras_tuner) if not attr.startswith("__")]
assert len(attr1) > 20
assert set(attr1) >= set(attr2)
| keras-tuner/keras_tuner/integration_tests/legacy_import_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/integration_tests/legacy_import_test.py",
"repo_id": "keras-tuner",
"token_count": 860
} | 153 |
# Keras 3: Deep Learning for Humans
Keras 3 is a multi-backend deep learning framework, with support for TensorFlow, JAX, and PyTorch.
## Installation
### Install with pip
Keras 3 is available on PyPI as `keras`. Note that Keras 2 remains available as the `tf-keras` package.
1. Install `keras`:
```
pip install keras --upgrade
```
2. Install backend package(s).
To use `keras`, you should also install the backend of choice: `tensorflow`, `jax`, or `torch`.
Note that `tensorflow` is required for using certain Keras 3 features: certain preprocessing layers
as well as `tf.data` pipelines.
### Local installation
#### Minimal installation
Keras 3 is compatible with Linux and macOS systems. For Windows users, we recommend using WSL2 to run Keras.
To install a local development version:
1. Install dependencies:
```
pip install -r requirements.txt
```
2. Run installation command from the root directory.
```
python pip_build.py --install
```
#### Adding GPU support
The `requirements.txt` file will install a CPU-only version of TensorFlow, JAX, and PyTorch. For GPU support, we also
provide a separate `requirements-{backend}-cuda.txt` for TensorFlow, JAX, and PyTorch. These install all CUDA
dependencies via `pip` and expect an NVIDIA driver to be pre-installed. We recommend a clean Python environment for each
backend to avoid CUDA version mismatches. As an example, here is how to create a JAX GPU environment with `conda`:
```shell
conda create -y -n keras-jax python=3.10
conda activate keras-jax
pip install -r requirements-jax-cuda.txt
python pip_build.py --install
```
## Configuring your backend
You can export the environment variable `KERAS_BACKEND` or you can edit your local config file at `~/.keras/keras.json`
to configure your backend. Available backend options are: `"tensorflow"`, `"jax"`, `"torch"`. Example:
```
export KERAS_BACKEND="jax"
```
In Colab, you can do:
```python
import os
os.environ["KERAS_BACKEND"] = "jax"
import keras
```
**Note:** The backend must be configured before importing `keras`, and the backend cannot be changed after
the package has been imported.
## Backwards compatibility
Keras 3 is intended to work as a drop-in replacement for `tf.keras` (when using the TensorFlow backend). Just take your
existing `tf.keras` code, make sure that your calls to `model.save()` are using the up-to-date `.keras` format, and you're
done.
If your `tf.keras` model does not include custom components, you can start running it on top of JAX or PyTorch immediately.
If it does include custom components (e.g. custom layers or a custom `train_step()`), it is usually possible to convert it
to a backend-agnostic implementation in just a few minutes.
In addition, Keras models can consume datasets in any format, regardless of the backend you're using:
you can train your models with your existing `tf.data.Dataset` pipelines or PyTorch `DataLoaders`.
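For example, re-saving an existing model in the native `.keras` format and reloading it is a one-liner on either side (the model and file name below are just placeholders):
```python
import keras
# A stand-in for your existing tf.keras / Keras model.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.save("my_model.keras")  # up-to-date native format
restored = keras.models.load_model("my_model.keras")
```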
## Why use Keras 3?
- Run your high-level Keras workflows on top of any framework -- benefiting at will from the advantages of each framework,
e.g. the scalability and performance of JAX or the production ecosystem options of TensorFlow.
- Write custom components (e.g. layers, models, metrics) that you can use in low-level workflows in any framework.
- You can take a Keras model and train it in a training loop written from scratch in native TF, JAX, or PyTorch.
- You can take a Keras model and use it as part of a PyTorch-native `Module` or as part of a JAX-native model function.
- Make your ML code future-proof by avoiding framework lock-in.
- As a PyTorch user: get access to the power and usability of Keras, at last!
- As a JAX user: get access to a fully-featured, battle-tested, well-documented modeling and training library.
Read more in the [Keras 3 release announcement](https://keras.io/keras_3/).
| keras/README.md/0 | {
"file_path": "keras/README.md",
"repo_id": "keras",
"token_count": 1119
} | 154 |
import numpy as np
from keras import backend
from keras import ops
from keras import testing
from keras.backend.common.stateless_scope import StatelessScope
class TestStatelessScope(testing.TestCase):
def test_basic_flow(self):
var1 = backend.Variable(np.zeros((2,)))
var2 = backend.Variable(np.zeros((2,)))
var_out = backend.Variable(np.zeros((2,)))
value1 = ops.ones(shape=(2,))
value2 = ops.ones(shape=(2,))
with StatelessScope(
state_mapping=[(var1, value1), (var2, value2)]
) as scope:
out = var1 + var2
var_out.assign(out)
var_out_value = var_out + 0.0
# Inside scope: new value is used.
self.assertAllClose(var_out_value, 2 * np.ones((2,)))
# Out of scope: old value is used.
var_out_value = var_out + 0.0
self.assertAllClose(var_out_value, np.zeros((2,)))
# Updates are tracked.
var_out_value = scope.get_current_value(var_out)
self.assertAllClose(var_out_value, 2 * np.ones((2,)))
# Updates can be reapplied.
var_out.assign(scope.get_current_value(var_out))
self.assertAllClose(var_out_value, 2 * np.ones((2,)))
def test_invalid_key_in_state_mapping(self):
# var1 = backend.Variable(np.zeros((2,)))
invalid_key = "not_a_keras_variable"
value1 = ops.ones(shape=(2,))
with self.assertRaisesRegex(
ValueError, "all keys in argument `mapping` must be KerasVariable"
):
StatelessScope(state_mapping=[(invalid_key, value1)])
def test_invalid_value_shape_in_state_mapping(self):
var1 = backend.Variable(np.zeros((2,)))
invalid_value = ops.ones(shape=(3,)) # Incorrect shape
with self.assertRaisesRegex(
ValueError, "all values in argument `mapping` must be tensors with"
):
StatelessScope(state_mapping=[(var1, invalid_value)])
| keras/keras/backend/common/stateless_scope_test.py/0 | {
"file_path": "keras/keras/backend/common/stateless_scope_test.py",
"repo_id": "keras",
"token_count": 892
} | 155 |
import jax
from keras.backend.config import floatx
from keras.random.seed_generator import SeedGenerator
from keras.random.seed_generator import draw_seed
from keras.random.seed_generator import make_default_seed
def jax_draw_seed(seed):
if isinstance(seed, jax.Array):
return seed
else:
return draw_seed(seed)
def normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = jax_draw_seed(seed)
sample = jax.random.normal(seed, shape=shape, dtype=dtype)
return sample * stddev + mean
def uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = jax_draw_seed(seed)
return jax.random.uniform(
seed, shape=shape, dtype=dtype, minval=minval, maxval=maxval
)
def categorical(logits, num_samples, dtype="int32", seed=None):
seed = jax_draw_seed(seed)
output_shape = list(logits.shape)
output_shape[1] = num_samples
output_shape = tuple(output_shape)
output = jax.random.categorical(
seed, logits[..., None], shape=output_shape, axis=1
)
return output.astype(dtype)
def randint(shape, minval, maxval, dtype="int32", seed=None):
seed = jax_draw_seed(seed)
return jax.random.randint(
seed, shape=shape, dtype=dtype, minval=minval, maxval=maxval
)
def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = jax_draw_seed(seed)
sample = jax.random.truncated_normal(
seed, shape=shape, lower=-2.0, upper=2.0, dtype=dtype
)
return sample * stddev + mean
def _get_concrete_noise_shape(inputs, noise_shape):
if noise_shape is None:
return inputs.shape
concrete_inputs_shape = inputs.shape
concrete_noise_shape = []
for i, value in enumerate(noise_shape):
concrete_noise_shape.append(
concrete_inputs_shape[i] if value is None else value
)
return concrete_noise_shape
def dropout(inputs, rate, noise_shape=None, seed=None):
seed = jax_draw_seed(seed)
keep_prob = 1.0 - rate
# The `noise_shape` may contain `None` so we need to convert it
# into a concrete shape before passing it on to jax.
noise_shape = _get_concrete_noise_shape(inputs, noise_shape)
mask = jax.random.bernoulli(seed, p=keep_prob, shape=noise_shape)
mask = jax.numpy.broadcast_to(mask, inputs.shape)
return jax.lax.select(
mask, inputs / keep_prob, jax.numpy.zeros_like(inputs)
)
def shuffle(x, axis=0, seed=None):
seed = jax_draw_seed(seed)
return jax.random.shuffle(seed, x, axis)
def gamma(shape, alpha, dtype=None, seed=None):
seed = jax_draw_seed(seed)
dtype = dtype or floatx()
return jax.random.gamma(seed, alpha, shape=shape, dtype=dtype)
def binomial(shape, counts, probabilities, dtype=None, seed=None):
dtype = dtype or floatx()
seed = jax_draw_seed(seed)
# jax doesn't accept python lists as arguments
counts = jax.numpy.array(counts)
probabilities = jax.numpy.array(probabilities)
sample = jax.random.binomial(
key=seed, n=counts, p=probabilities, shape=shape, dtype=dtype
)
return sample
def beta(shape, alpha, beta, dtype=None, seed=None):
dtype = dtype or floatx()
seed = jax_draw_seed(seed)
# jax doesn't accept python lists as arguments
alpha = jax.numpy.array(alpha)
beta = jax.numpy.array(beta)
sample = jax.random.beta(
key=seed, a=alpha, b=beta, shape=shape, dtype=dtype
)
return sample
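# --- Illustrative usage sketch (not part of the original module) ---
# Hedged example of the `noise_shape` handling above: a `None` entry is
# filled in from the concrete input shape before being passed to JAX, so
# the dropout mask below is shared along axis 1.
def _example_dropout_noise_shape():  # pragma: no cover
    x = jax.numpy.ones((2, 3, 4))
    y = dropout(x, rate=0.5, noise_shape=(None, 1, None), seed=0)
    return y.shape  # (2, 3, 4)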
| keras/keras/backend/jax/random.py/0 | {
"file_path": "keras/keras/backend/jax/random.py",
"repo_id": "keras",
"token_count": 1474
} | 156 |
import types
import numpy as np
import tensorflow as tf
from tensorflow.compiler.tf2xla.python.xla import dynamic_update_slice
from keras.backend.common import KerasVariable
from keras.backend.common import global_state
from keras.backend.common import standardize_dtype
from keras.backend.common.keras_tensor import KerasTensor
from keras.backend.common.name_scope import name_scope as base_name_scope
from keras.backend.common.stateless_scope import StatelessScope
from keras.backend.common.stateless_scope import in_stateless_scope
from keras.utils.naming import auto_name
SUPPORTS_SPARSE_TENSORS = True
class Variable(
KerasVariable,
tf.__internal__.types.Tensor,
tf.__internal__.tracking.Trackable,
):
_should_act_as_resource_variable = True
@property
def handle(self):
return self.value.handle
def _initialize(self, value):
self._value = tf.Variable(
value, dtype=self._dtype, trainable=self.trainable, name=self.name
)
def _deferred_initialize(self):
if self._value is not None:
raise ValueError(f"Variable {self.path} is already initialized.")
if in_stateless_scope():
raise ValueError(
"You are attempting to initialize a variable "
"while in a stateless scope. This is disallowed. "
"Make sure that all variables are initialized "
"before you start using your layer/model objects."
)
with tf.init_scope():
value = self._initializer(self._shape, dtype=self._dtype)
self._initialize(value)
def _direct_assign(self, value):
self._value.assign(tf.cast(value, self._value.dtype))
def _convert_to_tensor(self, value, dtype=None):
return convert_to_tensor(value, dtype=dtype)
def numpy(self): # noqa: F811
return self.value.numpy()
@property
def shape(self):
return tf.TensorShape(super().shape)
# Overload native accessor.
def __tf_tensor__(self, dtype=None, name=None):
return tf.convert_to_tensor(self.value, dtype=dtype, name=name)
# Methods below are for SavedModel support
@property
def _shared_name(self):
return self.value._shared_name
def _serialize_to_tensors(self):
try:
return self.value._serialize_to_tensors()
except NotImplementedError:
return {"VARIABLE_VALUE": self.value}
def _restore_from_tensors(self, restored_tensors):
try:
return self.value._restore_from_tensors(restored_tensors)
except NotImplementedError:
self.assign(restored_tensors["VARIABLE_VALUE"])
return self.value
def _copy_trackable_to_cpu(self, object_map):
self.value._copy_trackable_to_cpu(object_map)
object_map[self] = tf.Variable(object_map[self.value])
def _export_to_saved_model_graph(
self, object_map, tensor_map, options, **kwargs
):
resource_list = self.value._export_to_saved_model_graph(
object_map, tensor_map, options, **kwargs
)
object_map[self] = tf.Variable(object_map[self.value])
return resource_list
def _write_object_proto(self, proto, options):
return self.value._write_object_proto(proto, options)
def convert_to_tensor(x, dtype=None, sparse=None):
if isinstance(x, tf.SparseTensor) and sparse is not None and not sparse:
x_shape = x.shape
x = tf.sparse.to_dense(x)
x.set_shape(x_shape)
if dtype is not None:
dtype = standardize_dtype(dtype)
if not tf.is_tensor(x):
if dtype == "bool":
# TensorFlow boolean conversion is stricter than other backends.
# It does not allow ints. We convert without dtype and cast instead.
x = tf.convert_to_tensor(x)
return tf.cast(x, dtype)
return tf.convert_to_tensor(x, dtype=dtype)
elif dtype is not None:
return tf.cast(x, dtype=dtype)
else:
return x
def convert_to_numpy(x):
if isinstance(x, tf.SparseTensor):
x_shape = x.shape
x = tf.sparse.to_dense(x)
x.set_shape(x_shape)
elif isinstance(x, tf.IndexedSlices):
x = tf.convert_to_tensor(x)
return np.asarray(x)
def is_tensor(x):
return tf.is_tensor(x)
def shape(x):
"""Always return a tuple shape.
`tf.shape` will return a `tf.Tensor`, which differs from the tuple return
type on the torch and jax backends. We write our own method instead which
always returns a tuple, with integer values when the shape is known, and
tensor values when the shape is unknown (this is tf specific, as dynamic
shapes do not apply in other backends).
"""
if isinstance(x, KerasTensor):
return x.shape
if not tf.is_tensor(x):
x = tf.convert_to_tensor(x)
dynamic = tf.shape(x)
if x.shape == tf.TensorShape(None):
raise ValueError(
"All tensors passed to `ops.shape` must have a statically known "
f"rank. Received: x={x} with unknown rank."
)
static = x.shape.as_list()
return tuple(dynamic[i] if s is None else s for i, s in enumerate(static))
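# --- Illustrative usage sketch (not part of the original module) ---
# Hedged example of the tuple-returning `shape` helper above: fully static
# dimensions come back as plain Python ints rather than a `tf.Tensor`.
def _example_static_shape():  # pragma: no cover
    x = tf.zeros((2, 3))
    assert shape(x) == (2, 3)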
def cast(x, dtype):
dtype = standardize_dtype(dtype)
return tf.cast(x, dtype=dtype)
def compute_output_spec(fn, *args, **kwargs):
with StatelessScope():
graph_name = auto_name("scratch_graph")
with tf.__internal__.FuncGraph(graph_name).as_default():
def convert_keras_tensor_to_tf(x):
if isinstance(x, KerasTensor):
if x.sparse:
return tf.compat.v1.sparse_placeholder(
shape=x.shape, dtype=x.dtype
)
else:
return tf.compat.v1.placeholder(
shape=x.shape, dtype=x.dtype
)
if isinstance(x, types.FunctionType):
def _fn(*x_args, **x_kwargs):
out = x(*x_args, **x_kwargs)
out = convert_keras_tensor_to_tf(out)
return out
return _fn
return x
args, kwargs = tf.nest.map_structure(
convert_keras_tensor_to_tf, (args, kwargs)
)
tf_out = fn(*args, **kwargs)
def convert_tf_to_keras_tensor(x):
if tf.is_tensor(x):
return KerasTensor(
x.shape, x.dtype, sparse=isinstance(x, tf.SparseTensor)
)
return x
output_spec = tf.nest.map_structure(
convert_tf_to_keras_tensor, tf_out
)
return output_spec
def cond(pred, true_fn, false_fn):
return tf.cond(pred, true_fn=true_fn, false_fn=false_fn)
def vectorized_map(function, elements):
return tf.vectorized_map(function, elements)
def scatter(indices, values, shape):
return tf.scatter_nd(indices, values, shape)
def scatter_update(inputs, indices, updates):
return tf.tensor_scatter_nd_update(inputs, indices, updates)
def slice(inputs, start_indices, shape):
return tf.slice(inputs, start_indices, shape)
def slice_update(inputs, start_indices, updates):
return dynamic_update_slice(inputs, updates, start_indices)
def while_loop(
cond,
body,
loop_vars,
maximum_iterations=None,
):
is_tuple = isinstance(loop_vars, (tuple, list))
loop_vars = tuple(loop_vars) if is_tuple else (loop_vars,)
def _body(*args):
outputs = body(*args)
return tuple(outputs) if is_tuple else (outputs,)
outputs = tf.while_loop(
cond,
_body,
loop_vars,
maximum_iterations=maximum_iterations,
)
return outputs if is_tuple else outputs[0]
def fori_loop(lower, upper, body_fun, init_val):
return tf.while_loop(
lambda i, val: i < upper,
lambda i, val: (i + 1, body_fun(i, val)),
(lower, init_val),
)[1]
def stop_gradient(variable):
return tf.stop_gradient(variable)
def unstack(x, num=None, axis=0):
return tf.unstack(x, num=num, axis=axis)
class name_scope(base_name_scope):
def __init__(self, name, **kwargs):
super().__init__(name, **kwargs)
self._tf_name_scope = tf.name_scope(name)
def __enter__(self):
name_scope_stack = global_state.get_global_attribute(
"name_scope_stack", default=[], set_to_default=True
)
if self.deduplicate and name_scope_stack:
parent_caller = name_scope_stack[-1].caller
parent_name = name_scope_stack[-1].name
if (
self.caller is not None
and self.caller is parent_caller
and self.name == parent_name
):
return self
name_scope_stack.append(self)
self._pop_on_exit = True
self._tf_name_scope.__enter__()
return self
def __exit__(self, *args, **kwargs):
super().__exit__(*args, **kwargs)
if self._pop_on_exit:
self._tf_name_scope.__exit__(*args, **kwargs)
def device_scope(device_name):
return tf.device(device_name)
| keras/keras/backend/tensorflow/core.py/0 | {
"file_path": "keras/keras/backend/tensorflow/core.py",
"repo_id": "keras",
"token_count": 4304
} | 157 |
import tensorflow as tf
def start_trace(logdir):
tf.profiler.experimental.start(logdir=logdir)
def stop_trace(save):
tf.profiler.experimental.stop(save=save)
| keras/keras/backend/tensorflow/tensorboard.py/0 | {
"file_path": "keras/keras/backend/tensorflow/tensorboard.py",
"repo_id": "keras",
"token_count": 64
} | 158 |
import torch
from keras import ops
from keras import optimizers
from keras.backend.torch.optimizers import torch_parallel_optimizer
class Adam(torch_parallel_optimizer.TorchParallelOptimizer, optimizers.Adam):
def _parallel_update_step(
self,
grads,
variables,
learning_rate,
):
keras_variables = variables
variables = [v.value for v in variables]
dtype = variables[0].dtype
lr = ops.cast(learning_rate, dtype)
local_step = ops.cast(self.iterations + 1, dtype)
beta_1_power = ops.power(ops.cast(self.beta_1, dtype), local_step)
beta_2_power = ops.power(ops.cast(self.beta_2, dtype), local_step)
alpha = lr * ops.sqrt(1 - beta_2_power) / (1 - beta_1_power)
m_list = [
self._momentums[self._get_variable_index(variable)].value
for variable in keras_variables
]
v_list = [
self._velocities[self._get_variable_index(variable)].value
for variable in keras_variables
]
torch._foreach_mul_(m_list, self.beta_1)
torch._foreach_add_(m_list, grads, alpha=1 - self.beta_1)
torch._foreach_mul_(v_list, self.beta_2)
torch._foreach_add_(
v_list, torch._foreach_mul(grads, grads), alpha=1 - self.beta_2
)
if self.amsgrad:
v_hat_list = [
self._velocity_hats[self._get_variable_index(variable)].value
for variable in keras_variables
]
torch._foreach_maximum_(v_hat_list, v_list)
v_list = v_hat_list
torch._foreach_add_(
variables,
torch._foreach_div(
torch._foreach_mul(m_list, alpha),
torch._foreach_add(torch._foreach_sqrt(v_list), self.epsilon),
),
alpha=-1,
)
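# --- Illustrative sketch (not part of the original module) ---
# Hedged, single-tensor reading of the fused `_foreach_*` update above
# (amsgrad omitted); `alpha` is the bias-corrected learning rate.
def _example_single_tensor_adam_step(
    var, grad, m, v, alpha, beta_1, beta_2, epsilon
):  # pragma: no cover
    m.mul_(beta_1).add_(grad, alpha=1 - beta_1)
    v.mul_(beta_2).add_(grad * grad, alpha=1 - beta_2)
    var.add_(m * alpha / (v.sqrt() + epsilon), alpha=-1)
    return var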
| keras/keras/backend/torch/optimizers/torch_adam.py/0 | {
"file_path": "keras/keras/backend/torch/optimizers/torch_adam.py",
"repo_id": "keras",
"token_count": 933
} | 159 |
import tree
from keras.api_export import keras_export
from keras.callbacks.callback import Callback
from keras.callbacks.history import History
from keras.callbacks.progbar_logger import ProgbarLogger
@keras_export("keras.callbacks.CallbackList")
class CallbackList(Callback):
"""Container abstracting a list of callbacks."""
def __init__(
self,
callbacks=None,
add_history=False,
add_progbar=False,
model=None,
**params,
):
"""Container for `Callback` instances.
This object wraps a list of `Callback` instances, making it possible
to call them all at once via a single endpoint
(e.g. `callback_list.on_epoch_end(...)`).
Args:
callbacks: List of `Callback` instances.
add_history: Whether a `History` callback should be added, if one
does not already exist in the `callbacks` list.
add_progbar: Whether a `ProgbarLogger` callback should be added, if
one does not already exist in the `callbacks` list.
model: The `Model` these callbacks are used with.
**params: If provided, parameters will be passed to each `Callback`
via `Callback.set_params`.
"""
self.callbacks = tree.flatten(callbacks) if callbacks else []
self._add_default_callbacks(add_history, add_progbar)
if model:
self.set_model(model)
if params:
self.set_params(params)
def _add_default_callbacks(self, add_history, add_progbar):
"""Adds `Callback`s that are always present."""
self._progbar = None
self._history = None
for cb in self.callbacks:
if isinstance(cb, ProgbarLogger):
self._progbar = cb
elif isinstance(cb, History):
self._history = cb
if self._history is None and add_history:
self._history = History()
self.callbacks.append(self._history)
if self._progbar is None and add_progbar:
self._progbar = ProgbarLogger()
self.callbacks.append(self._progbar)
def append(self, callback):
self.callbacks.append(callback)
def set_params(self, params):
self.params = params
for callback in self.callbacks:
callback.set_params(params)
def set_model(self, model):
super().set_model(model)
if self._history:
model.history = self._history
for callback in self.callbacks:
callback.set_model(model)
def on_batch_begin(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_batch_begin(batch, logs=logs)
def on_batch_end(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_batch_end(batch, logs=logs)
def on_epoch_begin(self, epoch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_epoch_begin(epoch, logs)
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_epoch_end(epoch, logs)
def on_train_batch_begin(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_train_batch_begin(batch, logs=logs)
def on_train_batch_end(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_train_batch_end(batch, logs=logs)
def on_test_batch_begin(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_test_batch_begin(batch, logs=logs)
def on_test_batch_end(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_test_batch_end(batch, logs=logs)
def on_predict_batch_begin(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_predict_batch_begin(batch, logs=logs)
def on_predict_batch_end(self, batch, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_predict_batch_end(batch, logs=logs)
def on_train_begin(self, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_train_begin(logs)
def on_train_end(self, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_train_end(logs)
def on_test_begin(self, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_test_begin(logs)
def on_test_end(self, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_test_end(logs)
def on_predict_begin(self, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_predict_begin(logs)
def on_predict_end(self, logs=None):
logs = logs or {}
for callback in self.callbacks:
callback.on_predict_end(logs)
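# --- Illustrative usage sketch (not part of the original module) ---
# Hedged example of dispatching to several callbacks through a single
# `CallbackList` endpoint, as described in the class docstring; the
# callback below is a throwaway example, not a real Keras callback.
def _example_callback_list():  # pragma: no cover
    class _PrintEpoch(Callback):
        def on_epoch_end(self, epoch, logs=None):
            print(epoch, logs)
    callbacks = CallbackList(callbacks=[_PrintEpoch()])
    callbacks.on_epoch_begin(0)
    callbacks.on_epoch_end(0, logs={"loss": 0.1})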
| keras/keras/callbacks/callback_list.py/0 | {
"file_path": "keras/keras/callbacks/callback_list.py",
"repo_id": "keras",
"token_count": 2321
} | 160 |
import json
import warnings
import numpy as np
from keras.api_export import keras_export
from keras.callbacks.callback import Callback
try:
import requests
except ImportError:
requests = None
@keras_export("keras.callbacks.RemoteMonitor")
class RemoteMonitor(Callback):
"""Callback used to stream events to a server.
Requires the `requests` library.
Events are sent to `root + '/publish/epoch/end/'` by default. Calls are
HTTP POST, with a `data` argument which is a
JSON-encoded dictionary of event data.
If `send_as_json=True`, the content type of the request will be
`"application/json"`.
Otherwise the serialized JSON will be sent within a form.
Args:
root: String; root url of the target server.
path: String; path relative to `root` to which the events will be sent.
field: String; JSON field under which the data will be stored.
The field is used only if the payload is sent within a form
(i.e. send_as_json is set to False).
headers: Dictionary; optional custom HTTP headers.
send_as_json: Boolean; whether the request should be
sent as `"application/json"`.
"""
def __init__(
self,
root="http://localhost:9000",
path="/publish/epoch/end/",
field="data",
headers=None,
send_as_json=False,
):
super().__init__()
self.root = root
self.path = path
self.field = field
self.headers = headers
self.send_as_json = send_as_json
def on_epoch_end(self, epoch, logs=None):
if requests is None:
raise ImportError("RemoteMonitor requires the `requests` library.")
logs = logs or {}
send = {}
send["epoch"] = epoch
for k, v in logs.items():
            # np.ndarray and np.generic are not JSON serializable,
            # so we unwrap their scalar values before adding them
            # to the JSON-serializable dict 'send'.
if isinstance(v, (np.ndarray, np.generic)):
send[k] = v.item()
else:
send[k] = v
try:
if self.send_as_json:
requests.post(
self.root + self.path, json=send, headers=self.headers
)
else:
requests.post(
self.root + self.path,
{self.field: json.dumps(send)},
headers=self.headers,
)
except requests.exceptions.RequestException:
warnings.warn(
f"Could not reach RemoteMonitor root server at {self.root}",
stacklevel=2,
)
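# --- Illustrative usage sketch (not part of the original module) ---
# Hedged example: the URL below is an arbitrary placeholder. Requires the
# `requests` library; if no server is listening, the call only emits a
# warning (see the exception handler above).
def _example_remote_monitor():  # pragma: no cover
    monitor = RemoteMonitor(root="http://localhost:9000", send_as_json=True)
    monitor.on_epoch_end(0, logs={"loss": 0.25})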
| keras/keras/callbacks/remote_monitor.py/0 | {
"file_path": "keras/keras/callbacks/remote_monitor.py",
"repo_id": "keras",
"token_count": 1213
} | 161 |
import numpy as np
from keras import backend
from keras import initializers
from keras import testing
class ConstantInitializersTest(testing.TestCase):
def test_zeros_initializer(self):
shape = (3, 3)
initializer = initializers.Zeros()
values = initializer(shape=shape)
self.assertEqual(values.shape, shape)
np_values = backend.convert_to_numpy(values)
self.assertAllClose(np_values, np.zeros(shape=shape))
self.run_class_serialization_test(initializer)
def test_ones_initializer(self):
shape = (3, 3)
initializer = initializers.Ones()
values = initializer(shape=shape)
self.assertEqual(values.shape, shape)
np_values = backend.convert_to_numpy(values)
self.assertAllClose(np_values, np.ones(shape=shape))
self.run_class_serialization_test(initializer)
def test_constant_initializer(self):
shape = (3, 3)
constant_value = 6.0
initializer = initializers.Constant(value=constant_value)
values = initializer(shape=shape)
self.assertEqual(values.shape, shape)
np_values = backend.convert_to_numpy(values)
self.assertAllClose(
np_values, np.full(shape=shape, fill_value=constant_value)
)
self.run_class_serialization_test(initializer)
def test_constant_initializer_array_value(self):
shape = (3, 3)
constant_value = np.random.random((3, 3))
initializer = initializers.Constant(value=constant_value)
values = initializer(shape=shape)
self.assertEqual(values.shape, shape)
np_values = backend.convert_to_numpy(values)
self.assertAllClose(
np_values, np.full(shape=shape, fill_value=constant_value)
)
self.run_class_serialization_test(initializer)
def test_identity_initializer(self):
shape = (3, 3)
gain = 2
initializer = initializers.Identity(gain=gain)
values = initializer(shape=shape)
self.assertEqual(values.shape, shape)
np_values = backend.convert_to_numpy(values)
self.assertAllClose(np_values, np.eye(*shape) * gain)
self.run_class_serialization_test(initializer)
| keras/keras/initializers/constant_initializers_test.py/0 | {
"file_path": "keras/keras/initializers/constant_initializers_test.py",
"repo_id": "keras",
"token_count": 950
} | 162 |
import os
import numpy as np
import pytest
from keras import backend
from keras import constraints
from keras import layers
from keras import models
from keras import ops
from keras import saving
from keras import testing
from keras.backend.common import keras_tensor
class DenseTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_dense_basics(self):
# 2D case, no bias.
self.run_layer_test(
layers.Dense,
init_kwargs={
"units": 4,
"activation": "relu",
"kernel_initializer": "random_uniform",
"bias_initializer": "ones",
"use_bias": False,
},
input_shape=(2, 3),
expected_output_shape=(2, 4),
expected_num_trainable_weights=1,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
# 3D case, some regularizers.
self.run_layer_test(
layers.Dense,
init_kwargs={
"units": 5,
"activation": "sigmoid",
"kernel_regularizer": "l2",
"bias_regularizer": "l2",
},
input_shape=(2, 3, 4),
expected_output_shape=(2, 3, 5),
expected_num_trainable_weights=2,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=2, # we have 2 regularizers.
supports_masking=True,
)
def test_dense_correctness(self):
# With bias and activation.
layer = layers.Dense(units=2, activation="relu")
layer.build((1, 2))
layer.set_weights(
[
np.array([[1.0, -2.0], [3.0, -4.0]]),
np.array([5.0, -6.0]),
]
)
inputs = np.array(
[[-1.0, 2.0]],
)
self.assertAllClose(layer(inputs), [[10.0, 0.0]])
# Just a kernel matmul.
layer = layers.Dense(units=2, use_bias=False)
layer.build((1, 2))
layer.set_weights(
[
np.array([[1.0, -2.0], [3.0, -4.0]]),
]
)
inputs = np.array(
[[-1.0, 2.0]],
)
self.assertEqual(layer.bias, None)
self.assertAllClose(layer(inputs), [[5.0, -6.0]])
def test_dense_errors(self):
with self.assertRaisesRegex(ValueError, "incompatible with the layer"):
layer = layers.Dense(units=2, activation="relu")
layer(keras_tensor.KerasTensor((1, 2)))
layer(keras_tensor.KerasTensor((1, 3)))
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_dense_sparse(self):
import tensorflow as tf
self.run_layer_test(
layers.Dense,
init_kwargs={
"units": 4,
},
input_shape=(2, 3),
input_sparse=True,
expected_output_shape=(2, 4),
expected_num_trainable_weights=2,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
)
        # Create a random input with about 20% non-zero entries so it can
        # be converted to a sparse tensor in a backend-agnostic way below.
        inputs = np.random.random((10, 10)).astype("float32")
        inputs = np.multiply(inputs, inputs >= 0.8)
if backend.backend() == "tensorflow":
import tensorflow as tf
inputs = tf.sparse.from_dense(inputs)
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
inputs = jax_sparse.BCOO.fromdense(inputs)
else:
self.fail(f"Sparse is unsupported with backend {backend.backend()}")
layer = layers.Dense(units=10)
outputs = layer(inputs)
# Verify the computation is the same as if it had been a dense tensor
expected_outputs = ops.add(
ops.matmul(
backend.convert_to_tensor(inputs, sparse=False), layer.kernel
),
layer.bias,
)
self.assertAllClose(outputs, expected_outputs)
# Verify the gradient is sparse
if backend.backend() == "tensorflow":
import tensorflow as tf
with tf.GradientTape() as g:
outputs = layer(inputs)
self.assertIsInstance(
g.gradient(outputs, layer.kernel), tf.IndexedSlices
)
def test_dense_no_activation(self):
layer = layers.Dense(units=2, use_bias=False, activation=None)
layer.build((1, 2))
layer.set_weights(
[
np.array([[1.0, -2.0], [3.0, -4.0]]),
]
)
inputs = np.array(
[[-1.0, 2.0]],
)
self.assertEqual(layer.bias, None)
self.assertAllClose(layer(inputs), [[5.0, -6.0]])
def test_dense_without_activation_set(self):
layer = layers.Dense(units=2, use_bias=False)
layer.build((1, 2))
layer.set_weights(
[
np.array([[1.0, -2.0], [3.0, -4.0]]),
]
)
layer.activation = None
inputs = np.array(
[[-1.0, 2.0]],
)
self.assertEqual(layer.bias, None)
self.assertAllClose(layer(inputs), [[5.0, -6.0]])
def test_dense_with_activation(self):
layer = layers.Dense(units=2, use_bias=False, activation="relu")
layer.build((1, 2))
layer.set_weights(
[
np.array([[1.0, -2.0], [3.0, -4.0]]),
]
)
inputs = np.array(
[[-1.0, 2.0]],
)
output = layer(inputs)
expected_output = np.array([[5.0, 0.0]])
self.assertAllClose(output, expected_output)
def test_dense_constraints(self):
layer = layers.Dense(units=2, kernel_constraint="non_neg")
layer.build((None, 2))
self.assertIsInstance(layer.kernel.constraint, constraints.NonNeg)
layer = layers.Dense(units=2, bias_constraint="non_neg")
layer.build((None, 2))
self.assertIsInstance(layer.bias.constraint, constraints.NonNeg)
@pytest.mark.requires_trainable_backend
def test_enable_lora(self):
layer = layers.Dense(units=16)
layer.build((None, 8))
layer.enable_lora(4)
self.assertLen(layer.trainable_weights, 3)
self.assertLen(layer.non_trainable_weights, 1)
# Try eager call
x = np.random.random((64, 8))
y = np.random.random((64, 16))
_ = layer(x[:2])
init_lora_a_kernel_value = layer.lora_kernel_a.numpy()
init_lora_b_kernel_value = layer.lora_kernel_b.numpy()
# Try calling fit()
model = models.Sequential(
[
layer,
]
)
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y)
final_lora_a_kernel_value = layer.lora_kernel_a.numpy()
final_lora_b_kernel_value = layer.lora_kernel_b.numpy()
diff_a = np.max(
np.abs(init_lora_a_kernel_value - final_lora_a_kernel_value)
)
diff_b = np.max(
np.abs(init_lora_b_kernel_value - final_lora_b_kernel_value)
)
self.assertGreater(diff_a, 0.0)
self.assertGreater(diff_b, 0.0)
# Try saving and reloading the model
temp_filepath = os.path.join(self.get_temp_dir(), "lora_model.keras")
model.save(temp_filepath)
new_model = saving.load_model(temp_filepath)
self.assertFalse(new_model.layers[0].lora_enabled)
self.assertAllClose(model.predict(x), new_model.predict(x))
# Try saving and reloading the model's weights only
temp_filepath = os.path.join(
self.get_temp_dir(), "lora_model.weights.h5"
)
model.save_weights(temp_filepath)
# Load the file into a fresh, non-lora model
new_model = models.Sequential(
[
layers.Dense(units=16),
]
)
new_model.build((None, 8))
new_model.load_weights(temp_filepath)
self.assertAllClose(model.predict(x), new_model.predict(x))
# Try loading a normal checkpoint into a lora model
new_model.save_weights(temp_filepath)
model.load_weights(temp_filepath)
self.assertAllClose(model.predict(x), new_model.predict(x))
@pytest.mark.requires_trainable_backend
def test_lora_rank_argument(self):
self.run_layer_test(
layers.Dense,
init_kwargs={
"units": 5,
"activation": "sigmoid",
"kernel_regularizer": "l2",
"lora_rank": 2,
},
input_shape=(2, 3, 4),
expected_output_shape=(2, 3, 5),
expected_num_trainable_weights=3,
expected_num_non_trainable_weights=1,
expected_num_seed_generators=0,
expected_num_losses=2, # we have 2 regularizers.
supports_masking=True,
)
def test_enable_lora_with_kernel_constraint(self):
layer = layers.Dense(units=2, kernel_constraint="max_norm")
with self.assertRaisesRegex(
ValueError, "incompatible with kernel constraints"
):
layer.enable_lora(rank=2)
def test_enable_lora_on_unbuilt_layer(self):
layer = layers.Dense(units=2)
with self.assertRaisesRegex(
ValueError, "Cannot enable lora on a layer that isn't yet built"
):
layer.enable_lora(rank=2)
def test_enable_lora_when_already_enabled(self):
layer = layers.Dense(units=2)
layer.build((None, 2))
layer.enable_lora(rank=2)
with self.assertRaisesRegex(ValueError, "lora is already enabled"):
layer.enable_lora(rank=2)
| keras/keras/layers/core/dense_test.py/0 | {
"file_path": "keras/keras/layers/core/dense_test.py",
"repo_id": "keras",
"token_count": 5227
} | 163 |
from keras import backend
from keras import ops
from keras.api_export import keras_export
from keras.layers.input_spec import InputSpec
from keras.layers.regularization.dropout import Dropout
class BaseSpatialDropout(Dropout):
def __init__(self, rate, seed=None, name=None, dtype=None):
super().__init__(rate, seed=seed, name=name, dtype=dtype)
def call(self, inputs, training=False):
if training and self.rate > 0:
return backend.random.dropout(
inputs,
self.rate,
noise_shape=self._get_noise_shape(inputs),
seed=self.seed_generator,
)
return inputs
def get_config(self):
return {
"rate": self.rate,
"seed": self.seed,
"name": self.name,
"dtype": self.dtype,
}
@keras_export("keras.layers.SpatialDropout1D")
class SpatialDropout1D(BaseSpatialDropout):
"""Spatial 1D version of Dropout.
This layer performs the same function as Dropout, however, it drops
entire 1D feature maps instead of individual elements. If adjacent frames
within feature maps are strongly correlated (as is normally the case in
early convolution layers) then regular dropout will not regularize the
activations and will otherwise just result in an effective learning rate
decrease. In this case, `SpatialDropout1D` will help promote independence
between feature maps and should be used instead.
Args:
rate: Float between 0 and 1. Fraction of the input units to drop.
Call arguments:
inputs: A 3D tensor.
training: Python boolean indicating whether the layer
should behave in training mode (applying dropout)
or in inference mode (pass-through).
Input shape:
3D tensor with shape: `(samples, timesteps, channels)`
Output shape: Same as input.
Reference:
- [Tompson et al., 2014](https://arxiv.org/abs/1411.4280)
"""
def __init__(self, rate, seed=None, name=None, dtype=None):
super().__init__(rate, seed=seed, name=name, dtype=dtype)
self.input_spec = InputSpec(ndim=3)
def _get_noise_shape(self, inputs):
input_shape = ops.shape(inputs)
return (input_shape[0], 1, input_shape[2])
@keras_export("keras.layers.SpatialDropout2D")
class SpatialDropout2D(BaseSpatialDropout):
"""Spatial 2D version of Dropout.
This version performs the same function as Dropout, however, it drops
entire 2D feature maps instead of individual elements. If adjacent pixels
within feature maps are strongly correlated (as is normally the case in
early convolution layers) then regular dropout will not regularize the
activations and will otherwise just result in an effective learning rate
decrease. In this case, `SpatialDropout2D` will help promote independence
between feature maps and should be used instead.
Args:
rate: Float between 0 and 1. Fraction of the input units to drop.
data_format: `"channels_first"` or `"channels_last"`.
            In `"channels_first"` mode, the channels dimension (the depth)
            is at index 1; in `"channels_last"` mode it is at index 3.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be `"channels_last"`.
Call arguments:
inputs: A 4D tensor.
training: Python boolean indicating whether the layer
should behave in training mode (applying dropout)
or in inference mode (pass-through).
Input shape:
4D tensor with shape: `(samples, channels, rows, cols)` if
data_format='channels_first'
or 4D tensor with shape: `(samples, rows, cols, channels)` if
data_format='channels_last'.
Output shape: Same as input.
Reference:
- [Tompson et al., 2014](https://arxiv.org/abs/1411.4280)
"""
def __init__(
self, rate, data_format=None, seed=None, name=None, dtype=None
):
super().__init__(rate, seed=seed, name=name, dtype=dtype)
self.data_format = backend.standardize_data_format(data_format)
self.input_spec = InputSpec(ndim=4)
def _get_noise_shape(self, inputs):
input_shape = ops.shape(inputs)
if self.data_format == "channels_first":
return (input_shape[0], input_shape[1], 1, 1)
elif self.data_format == "channels_last":
return (input_shape[0], 1, 1, input_shape[3])
def get_config(self):
base_config = super().get_config()
config = {
"data_format": self.data_format,
}
return {**base_config, **config}
@keras_export("keras.layers.SpatialDropout3D")
class SpatialDropout3D(BaseSpatialDropout):
"""Spatial 3D version of Dropout.
This version performs the same function as Dropout, however, it drops
entire 3D feature maps instead of individual elements. If adjacent voxels
within feature maps are strongly correlated (as is normally the case in
early convolution layers) then regular dropout will not regularize the
activations and will otherwise just result in an effective learning rate
decrease. In this case, SpatialDropout3D will help promote independence
between feature maps and should be used instead.
Args:
rate: Float between 0 and 1. Fraction of the input units to drop.
data_format: `"channels_first"` or `"channels_last"`.
            In `"channels_first"` mode, the channels dimension (the depth)
            is at index 1; in `"channels_last"` mode it is at index 4.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be `"channels_last"`.
Call arguments:
inputs: A 5D tensor.
training: Python boolean indicating whether the layer
should behave in training mode (applying dropout)
or in inference mode (pass-through).
Input shape:
5D tensor with shape: `(samples, channels, dim1, dim2, dim3)` if
data_format='channels_first'
or 5D tensor with shape: `(samples, dim1, dim2, dim3, channels)` if
data_format='channels_last'.
Output shape: Same as input.
Reference:
- [Tompson et al., 2014](https://arxiv.org/abs/1411.4280)
"""
def __init__(
self, rate, data_format=None, seed=None, name=None, dtype=None
):
super().__init__(rate, seed=seed, name=name, dtype=dtype)
self.data_format = backend.standardize_data_format(data_format)
self.input_spec = InputSpec(ndim=5)
def _get_noise_shape(self, inputs):
input_shape = ops.shape(inputs)
if self.data_format == "channels_first":
return (input_shape[0], input_shape[1], 1, 1, 1)
elif self.data_format == "channels_last":
return (input_shape[0], 1, 1, 1, input_shape[4])
def get_config(self):
base_config = super().get_config()
config = {
"data_format": self.data_format,
}
return {**base_config, **config}
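# --- Illustrative usage sketch (not part of the original module) ---
# Hedged example of SpatialDropout1D: for a (batch, timesteps, channels)
# input, entire channels are zeroed out across every timestep instead of
# individual elements.
def _example_spatial_dropout():  # pragma: no cover
    layer = SpatialDropout1D(rate=0.5, seed=0)
    x = ops.ones((2, 4, 8))
    y = layer(x, training=True)
    return y.shape  # (2, 4, 8)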
| keras/keras/layers/regularization/spatial_dropout.py/0 | {
"file_path": "keras/keras/layers/regularization/spatial_dropout.py",
"repo_id": "keras",
"token_count": 2878
} | 164 |
import pytest
from absl.testing import parameterized
from keras import backend
from keras import layers
from keras import testing
from keras.backend.common.keras_tensor import KerasTensor
class ReshapeTest(testing.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
[
{"testcase_name": "dense", "sparse": False},
{"testcase_name": "sparse", "sparse": True},
]
)
@pytest.mark.requires_trainable_backend
def test_reshape(self, sparse):
if sparse and not backend.SUPPORTS_SPARSE_TENSORS:
pytest.skip("Backend does not support sparse tensors.")
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (8, 1)},
input_shape=(3, 2, 4),
input_sparse=sparse,
expected_output_shape=(3, 8, 1),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (8,)},
input_shape=(3, 2, 4),
input_sparse=sparse,
expected_output_shape=(3, 8),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (2, 4)},
input_shape=(3, 8),
input_sparse=sparse,
expected_output_shape=(3, 2, 4),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (-1, 1)},
input_shape=(3, 2, 4),
input_sparse=sparse,
expected_output_shape=(3, 8, 1),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (1, -1)},
input_shape=(3, 2, 4),
input_sparse=sparse,
expected_output_shape=(3, 1, 8),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (-1,)},
input_shape=(3, 2, 4),
input_sparse=sparse,
expected_output_shape=(3, 8),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
self.run_layer_test(
layers.Reshape,
init_kwargs={"target_shape": (2, -1)},
input_shape=(3, 2, 4),
input_sparse=sparse,
expected_output_shape=(3, 2, 4),
expected_output_sparse=sparse,
run_training_check=not sparse,
)
def test_reshape_with_dynamic_batch_size(self):
input_layer = layers.Input(shape=(2, 4))
reshaped = layers.Reshape((8,))(input_layer)
self.assertEqual(reshaped.shape, (None, 8))
def test_reshape_with_dynamic_batch_size_and_minus_one(self):
input = KerasTensor((None, 6, 4))
layer = layers.Reshape((-1, 8))
layer.build(input.shape)
reshaped = backend.compute_output_spec(layer.__call__, input)
self.assertEqual(reshaped.shape, (None, 3, 8))
def test_reshape_with_dynamic_dim_and_minus_one(self):
input = KerasTensor((4, 6, None, 3))
layer = layers.Reshape((-1, 3))
layer.build(input.shape)
reshaped = backend.compute_output_spec(layer.__call__, input)
self.assertEqual(reshaped.shape, (4, None, 3))
def test_reshape_sets_static_shape(self):
input_layer = layers.Input(batch_shape=(2, None))
reshaped = layers.Reshape((3, 5))(input_layer)
# Also make sure the batch dim is not lost after reshape.
self.assertEqual(reshaped.shape, (2, 3, 5))
| keras/keras/layers/reshaping/reshape_test.py/0 | {
"file_path": "keras/keras/layers/reshaping/reshape_test.py",
"repo_id": "keras",
"token_count": 1985
} | 165 |
import tree
from keras import activations
from keras import backend
from keras import constraints
from keras import initializers
from keras import ops
from keras import regularizers
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
from keras.layers.rnn.dropout_rnn_cell import DropoutRNNCell
from keras.layers.rnn.rnn import RNN
from keras.ops import operation_utils
from keras.utils import argument_validation
class ConvLSTMCell(Layer, DropoutRNNCell):
"""Cell class for the ConvLSTM layer.
Args:
rank: Integer, rank of the convolution, e.g. "2" for 2D convolutions.
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of n integers, specifying the
dimensions of the convolution window.
strides: An integer or tuple/list of n integers, specifying the strides
of the convolution. Specifying any stride value != 1
is incompatible with specifying any `dilation_rate` value != 1.
padding: One of `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding evenly
to the left/right or up/down of the input such that output
has the same height/width dimension as the input.
data_format: A string, one of `channels_last` (default) or
`channels_first`. When unspecified, uses
`image_data_format` value found in your Keras config file at
`~/.keras/keras.json` (if exists) else 'channels_last'.
Defaults to `'channels_last'`.
dilation_rate: An integer or tuple/list of n integers, specifying the
dilation rate to use for dilated convolution.
Currently, specifying any `dilation_rate` value != 1 is
incompatible with specifying any `strides` value != 1.
activation: Activation function. If `None`, no activation is applied.
recurrent_activation: Activation function to use for the recurrent step.
use_bias: Boolean, (default `True`), whether the layer
should use a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs. Default:
`"glorot_uniform"`.
recurrent_initializer: Initializer for the `recurrent_kernel`
weights matrix, used for the linear transformation of the recurrent
state. Default: `"orthogonal"`.
bias_initializer: Initializer for the bias vector. Default: `"zeros"`.
unit_forget_bias: Boolean (default `True`). If `True`,
add 1 to the bias of the forget gate at initialization.
Setting it to `True` will also force `bias_initializer="zeros"`.
This is recommended in [Jozefowicz et al.](
https://github.com/mlresearch/v37/blob/gh-pages/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_regularizer: Regularizer function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_regularizer: Regularizer function applied to the bias vector.
Default: `None`.
activity_regularizer: Regularizer function applied to the output of the
layer (its "activation"). Default: `None`.
kernel_constraint: Constraint function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_constraint: Constraint function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_constraint: Constraint function applied to the bias vector.
Default: `None`.
dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs. Default: 0.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop
for the linear transformation of the recurrent state. Default: 0.
seed: Random seed for dropout.
Call arguments:
inputs: A (2+ `rank`)D tensor.
states: List of state tensors corresponding to the previous timestep.
training: Python boolean indicating whether the layer should behave in
training mode or in inference mode. Only relevant when `dropout` or
`recurrent_dropout` is used.
"""
def __init__(
self,
rank,
filters,
kernel_size,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
activation="tanh",
recurrent_activation="sigmoid",
use_bias=True,
kernel_initializer="glorot_uniform",
recurrent_initializer="orthogonal",
bias_initializer="zeros",
unit_forget_bias=True,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
seed=None,
**kwargs,
):
super().__init__(**kwargs)
self.seed = seed
self.seed_generator = backend.random.SeedGenerator(seed=seed)
self.rank = rank
if self.rank > 3:
raise ValueError(
f"Rank {rank} convolutions are not currently "
f"implemented. Received: rank={rank}"
)
self.filters = filters
self.kernel_size = argument_validation.standardize_tuple(
kernel_size, self.rank, "kernel_size"
)
self.strides = argument_validation.standardize_tuple(
strides, self.rank, "strides", allow_zero=True
)
self.padding = argument_validation.standardize_padding(padding)
self.data_format = backend.standardize_data_format(data_format)
self.dilation_rate = argument_validation.standardize_tuple(
dilation_rate, self.rank, "dilation_rate"
)
self.activation = activations.get(activation)
self.recurrent_activation = activations.get(recurrent_activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.recurrent_initializer = initializers.get(recurrent_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.unit_forget_bias = unit_forget_bias
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.recurrent_regularizer = regularizers.get(recurrent_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.recurrent_constraint = constraints.get(recurrent_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.dropout = min(1.0, max(0.0, dropout))
self.recurrent_dropout = min(1.0, max(0.0, recurrent_dropout))
self.input_spec = InputSpec(ndim=rank + 2)
self.state_size = -1 # Custom, defined in methods
def build(self, inputs_shape, states_shape=None):
if self.data_format == "channels_first":
channel_axis = 1
self.spatial_dims = inputs_shape[2:]
else:
channel_axis = -1
self.spatial_dims = inputs_shape[1:-1]
if None in self.spatial_dims:
raise ValueError(
"ConvLSTM layers only support static "
"input shapes for the spatial dimension. "
f"Received invalid input shape: input_shape={inputs_shape}"
)
if inputs_shape[channel_axis] is None:
raise ValueError(
"The channel dimension of the inputs (last axis) should be "
"defined. Found None. Full input shape received: "
f"input_shape={inputs_shape}"
)
self.input_spec = InputSpec(
ndim=self.rank + 3, shape=(None,) + inputs_shape[1:]
)
input_dim = inputs_shape[channel_axis]
self.input_dim = input_dim
self.kernel_shape = self.kernel_size + (input_dim, self.filters * 4)
recurrent_kernel_shape = self.kernel_size + (
self.filters,
self.filters * 4,
)
self.kernel = self.add_weight(
shape=self.kernel_shape,
initializer=self.kernel_initializer,
name="kernel",
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
)
self.recurrent_kernel = self.add_weight(
shape=recurrent_kernel_shape,
initializer=self.recurrent_initializer,
name="recurrent_kernel",
regularizer=self.recurrent_regularizer,
constraint=self.recurrent_constraint,
)
if self.use_bias:
if self.unit_forget_bias:
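# With `unit_forget_bias`, the forget-gate slice of the fused bias is
# initialized to ones while the remaining gate slices use
# `bias_initializer`; the layout is [input, forget, cell, output].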
def bias_initializer(_, *args, **kwargs):
return ops.concatenate(
[
self.bias_initializer(
(self.filters,), *args, **kwargs
),
initializers.get("ones")(
(self.filters,), *args, **kwargs
),
self.bias_initializer(
(self.filters * 2,), *args, **kwargs
),
]
)
else:
bias_initializer = self.bias_initializer
self.bias = self.add_weight(
shape=(self.filters * 4,),
name="bias",
initializer=bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
)
else:
self.bias = None
self.built = True
def call(self, inputs, states, training=False):
h_tm1 = states[0] # previous memory state
c_tm1 = states[1] # previous carry state
# dp_mask = self.get_dropout_mask(inputs)
# rec_dp_mask = self.get_recurrent_dropout_mask(h_tm1)
# if training and 0.0 < self.dropout < 1.0:
# inputs *= dp_mask
# if training and 0.0 < self.recurrent_dropout < 1.0:
# h_tm1 *= rec_dp_mask
inputs_i = inputs
inputs_f = inputs
inputs_c = inputs
inputs_o = inputs
h_tm1_i = h_tm1
h_tm1_f = h_tm1
h_tm1_c = h_tm1
h_tm1_o = h_tm1
(kernel_i, kernel_f, kernel_c, kernel_o) = ops.split(
self.kernel, 4, axis=self.rank + 1
)
(
recurrent_kernel_i,
recurrent_kernel_f,
recurrent_kernel_c,
recurrent_kernel_o,
) = ops.split(self.recurrent_kernel, 4, axis=self.rank + 1)
if self.use_bias:
bias_i, bias_f, bias_c, bias_o = ops.split(self.bias, 4)
else:
bias_i, bias_f, bias_c, bias_o = None, None, None, None
x_i = self.input_conv(inputs_i, kernel_i, bias_i, padding=self.padding)
x_f = self.input_conv(inputs_f, kernel_f, bias_f, padding=self.padding)
x_c = self.input_conv(inputs_c, kernel_c, bias_c, padding=self.padding)
x_o = self.input_conv(inputs_o, kernel_o, bias_o, padding=self.padding)
h_i = self.recurrent_conv(h_tm1_i, recurrent_kernel_i)
h_f = self.recurrent_conv(h_tm1_f, recurrent_kernel_f)
h_c = self.recurrent_conv(h_tm1_c, recurrent_kernel_c)
h_o = self.recurrent_conv(h_tm1_o, recurrent_kernel_o)
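# Standard LSTM gate equations with the dense matmuls replaced by
# convolutions: i/f/o are the input/forget/output gates, c is the new
# cell state, and h is the new hidden state.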
i = self.recurrent_activation(x_i + h_i)
f = self.recurrent_activation(x_f + h_f)
c = f * c_tm1 + i * self.activation(x_c + h_c)
o = self.recurrent_activation(x_o + h_o)
h = o * self.activation(c)
return h, [h, c]
def compute_output_shape(self, inputs_shape, states_shape=None):
conv_output_shape = operation_utils.compute_conv_output_shape(
inputs_shape,
self.filters,
self.kernel_size,
strides=self.strides,
padding=self.padding,
data_format=self.data_format,
dilation_rate=self.dilation_rate,
)
return conv_output_shape, [conv_output_shape, conv_output_shape]
def get_initial_state(self, batch_size=None):
if self.data_format == "channels_last":
input_shape = (batch_size,) + self.spatial_dims + (self.input_dim,)
else:
input_shape = (batch_size, self.input_dim) + self.spatial_dims
state_shape = self.compute_output_shape(input_shape)[0]
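# Two zero states are returned: the hidden state `h` and the cell state
# `c`, each shaped like a single-step convolution output.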
return [
ops.zeros(state_shape, dtype=self.compute_dtype),
ops.zeros(state_shape, dtype=self.compute_dtype),
]
def input_conv(self, x, w, b=None, padding="valid"):
conv_out = ops.conv(
x,
w,
strides=self.strides,
padding=padding,
data_format=self.data_format,
dilation_rate=self.dilation_rate,
)
if b is not None:
if self.data_format == "channels_last":
bias_shape = (1,) * (self.rank + 1) + (self.filters,)
else:
bias_shape = (1, self.filters) + (1,) * self.rank
bias = ops.reshape(b, bias_shape)
conv_out += bias
return conv_out
def recurrent_conv(self, x, w):
strides = argument_validation.standardize_tuple(
1, self.rank, "strides", allow_zero=True
)
conv_out = ops.conv(
x, w, strides=strides, padding="same", data_format=self.data_format
)
return conv_out
def get_config(self):
config = {
"filters": self.filters,
"kernel_size": self.kernel_size,
"strides": self.strides,
"padding": self.padding,
"data_format": self.data_format,
"dilation_rate": self.dilation_rate,
"activation": activations.serialize(self.activation),
"recurrent_activation": activations.serialize(
self.recurrent_activation
),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"recurrent_initializer": initializers.serialize(
self.recurrent_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"unit_forget_bias": self.unit_forget_bias,
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"recurrent_regularizer": regularizers.serialize(
self.recurrent_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"kernel_constraint": constraints.serialize(self.kernel_constraint),
"recurrent_constraint": constraints.serialize(
self.recurrent_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
"dropout": self.dropout,
"recurrent_dropout": self.recurrent_dropout,
"seed": self.seed,
}
base_config = super().get_config()
return {**base_config, **config}
class ConvLSTM(RNN):
"""Abstract N-D Convolutional LSTM layer (used as implementation base).
Similar to an LSTM layer, but the input transformations
and recurrent transformations are both convolutional.
Args:
rank: Integer, rank of the convolution, e.g. "2" for 2D convolutions.
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of n integers, specifying the
dimensions of the convolution window.
strides: An integer or tuple/list of n integers,
specifying the strides of the convolution.
Specifying any stride value != 1 is incompatible with specifying
any `dilation_rate` value != 1.
padding: One of `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding evenly to
the left/right or up/down of the input such that output has the same
height/width dimension as the input.
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch, time, ..., channels)`
while `channels_first` corresponds to
inputs with shape `(batch, time, channels, ...)`.
When unspecified, uses the
`image_data_format` value found in your Keras config file at
`~/.keras/keras.json` (if it exists), otherwise `'channels_last'`.
Defaults to `'channels_last'`.
dilation_rate: An integer or tuple/list of n integers, specifying
the dilation rate to use for dilated convolution.
Currently, specifying any `dilation_rate` value != 1 is
incompatible with specifying any `strides` value != 1.
activation: Activation function to use.
By default, the hyperbolic tangent activation function is applied
(`tanh(x)`).
recurrent_activation: Activation function to use
for the recurrent step.
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs.
recurrent_initializer: Initializer for the `recurrent_kernel`
weights matrix,
used for the linear transformation of the recurrent state.
bias_initializer: Initializer for the bias vector.
unit_forget_bias: Boolean.
If True, add 1 to the bias of the forget gate at initialization.
Use in combination with `bias_initializer="zeros"`.
This is recommended in [Jozefowicz et al., 2015](
http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
recurrent_regularizer: Regularizer function applied to
the `recurrent_kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to
the `kernel` weights matrix.
recurrent_constraint: Constraint function applied to
the `recurrent_kernel` weights matrix.
bias_constraint: Constraint function applied to the bias vector.
dropout: Float between 0 and 1.
Fraction of the units to drop for
the linear transformation of the inputs.
recurrent_dropout: Float between 0 and 1.
Fraction of the units to drop for
the linear transformation of the recurrent state.
seed: Random seed for dropout.
return_sequences: Boolean. Whether to return the last output
in the output sequence, or the full sequence. (default False)
return_state: Boolean. Whether to return the last state
in addition to the output. (default False)
go_backwards: Boolean (default False).
If True, process the input sequence backwards.
stateful: Boolean (default False). If True, the last state
for each sample at index i in a batch will be used as initial
state for the sample of index i in the following batch.
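Example:

This abstract base is normally used through a concrete subclass such as
`keras.layers.ConvLSTM2D`. A minimal sketch (shapes assume the default
`channels_last` data format and `padding="valid"`):

```python
# (batch, time, rows, cols, channels)
inputs = np.random.random((32, 10, 8, 8, 3)).astype("float32")
layer = keras.layers.ConvLSTM2D(filters=4, kernel_size=(3, 3))
output = layer(inputs)  # `output` has shape `(32, 6, 6, 4)`
```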
"""
def __init__(
self,
rank,
filters,
kernel_size,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
activation="tanh",
recurrent_activation="sigmoid",
use_bias=True,
kernel_initializer="glorot_uniform",
recurrent_initializer="orthogonal",
bias_initializer="zeros",
unit_forget_bias=True,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
seed=None,
return_sequences=False,
return_state=False,
go_backwards=False,
stateful=False,
**kwargs,
):
cell = ConvLSTMCell(
rank=rank,
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
recurrent_activation=recurrent_activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
recurrent_initializer=recurrent_initializer,
bias_initializer=bias_initializer,
unit_forget_bias=unit_forget_bias,
kernel_regularizer=kernel_regularizer,
recurrent_regularizer=recurrent_regularizer,
bias_regularizer=bias_regularizer,
kernel_constraint=kernel_constraint,
recurrent_constraint=recurrent_constraint,
bias_constraint=bias_constraint,
dropout=dropout,
recurrent_dropout=recurrent_dropout,
seed=seed,
name="conv_lstm_cell",
dtype=kwargs.get("dtype"),
)
super().__init__(
cell,
return_sequences=return_sequences,
return_state=return_state,
go_backwards=go_backwards,
stateful=stateful,
**kwargs,
)
self.input_spec = InputSpec(ndim=rank + 3)
def call(self, sequences, initial_state=None, mask=None, training=False):
return super().call(
sequences, initial_state=initial_state, mask=mask, training=training
)
def compute_output_shape(self, sequences_shape, initial_state_shape=None):
batch_size = sequences_shape[0]
steps = sequences_shape[1]
step_shape = (batch_size,) + sequences_shape[2:]
state_shape = self.cell.compute_output_shape(step_shape)[0][1:]
if self.return_sequences:
output_shape = (
batch_size,
steps,
) + state_shape
else:
output_shape = (batch_size,) + state_shape
if self.return_state:
batched_state_shape = (batch_size,) + state_shape
return output_shape, batched_state_shape, batched_state_shape
return output_shape
def compute_mask(self, _, mask):
mask = tree.flatten(mask)[0]
output_mask = mask if self.return_sequences else None
if self.return_state:
state_mask = [None, None]
return [output_mask] + state_mask
else:
return output_mask
@property
def filters(self):
return self.cell.filters
@property
def kernel_size(self):
return self.cell.kernel_size
@property
def strides(self):
return self.cell.strides
@property
def padding(self):
return self.cell.padding
@property
def data_format(self):
return self.cell.data_format
@property
def dilation_rate(self):
return self.cell.dilation_rate
@property
def activation(self):
return self.cell.activation
@property
def recurrent_activation(self):
return self.cell.recurrent_activation
@property
def use_bias(self):
return self.cell.use_bias
@property
def kernel_initializer(self):
return self.cell.kernel_initializer
@property
def recurrent_initializer(self):
return self.cell.recurrent_initializer
@property
def bias_initializer(self):
return self.cell.bias_initializer
@property
def unit_forget_bias(self):
return self.cell.unit_forget_bias
@property
def kernel_regularizer(self):
return self.cell.kernel_regularizer
@property
def recurrent_regularizer(self):
return self.cell.recurrent_regularizer
@property
def bias_regularizer(self):
return self.cell.bias_regularizer
@property
def kernel_constraint(self):
return self.cell.kernel_constraint
@property
def recurrent_constraint(self):
return self.cell.recurrent_constraint
@property
def bias_constraint(self):
return self.cell.bias_constraint
@property
def dropout(self):
return self.cell.dropout
@property
def recurrent_dropout(self):
return self.cell.recurrent_dropout
def get_config(self):
config = {
"filters": self.filters,
"kernel_size": self.kernel_size,
"strides": self.strides,
"padding": self.padding,
"data_format": self.data_format,
"dilation_rate": self.dilation_rate,
"activation": activations.serialize(self.activation),
"recurrent_activation": activations.serialize(
self.recurrent_activation
),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"recurrent_initializer": initializers.serialize(
self.recurrent_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"unit_forget_bias": self.unit_forget_bias,
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"recurrent_regularizer": regularizers.serialize(
self.recurrent_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"kernel_constraint": constraints.serialize(self.kernel_constraint),
"recurrent_constraint": constraints.serialize(
self.recurrent_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
"dropout": self.dropout,
"recurrent_dropout": self.recurrent_dropout,
"seed": self.cell.seed,
}
base_config = super().get_config()
del base_config["cell"]
return {**base_config, **config}
@classmethod
def from_config(cls, config):
return cls(**config)
| keras/keras/layers/rnn/conv_lstm.py/0 | {
"file_path": "keras/keras/layers/rnn/conv_lstm.py",
"repo_id": "keras",
"token_count": 12328
} | 166 |
from keras import activations
from keras import backend
from keras import constraints
from keras import initializers
from keras import ops
from keras import regularizers
from keras.api_export import keras_export
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
from keras.layers.rnn.dropout_rnn_cell import DropoutRNNCell
from keras.layers.rnn.rnn import RNN
@keras_export("keras.layers.SimpleRNNCell")
class SimpleRNNCell(Layer, DropoutRNNCell):
"""Cell class for SimpleRNN.
This class processes one step within the whole time sequence input, whereas
`keras.layers.SimpleRNN` processes the whole sequence.
Args:
units: Positive integer, dimensionality of the output space.
activation: Activation function to use.
Default: hyperbolic tangent (`tanh`).
If you pass `None`, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean (default `True`), whether the layer
should use a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs. Default:
`"glorot_uniform"`.
recurrent_initializer: Initializer for the `recurrent_kernel`
weights matrix, used for the linear transformation
of the recurrent state. Default: `"orthogonal"`.
bias_initializer: Initializer for the bias vector. Default: `"zeros"`.
kernel_regularizer: Regularizer function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_regularizer: Regularizer function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_regularizer: Regularizer function applied to the bias vector.
Default: `None`.
kernel_constraint: Constraint function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_constraint: Constraint function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_constraint: Constraint function applied to the bias vector.
Default: `None`.
dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs. Default: 0.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop
for the linear transformation of the recurrent state. Default: 0.
seed: Random seed for dropout.
Call arguments:
sequence: A 2D tensor, with shape `(batch, features)`.
states: A 2D tensor with shape `(batch, units)`, which is the state
from the previous time step.
training: Python boolean indicating whether the layer should behave in
training mode or in inference mode. Only relevant when `dropout` or
`recurrent_dropout` is used.
Example:
```python
inputs = np.random.random([32, 10, 8]).astype(np.float32)
rnn = keras.layers.RNN(keras.layers.SimpleRNNCell(4))
output = rnn(inputs) # The output has shape `(32, 4)`.
rnn = keras.layers.RNN(
keras.layers.SimpleRNNCell(4),
return_sequences=True,
return_state=True
)
# whole_sequence_output has shape `(32, 10, 4)`.
# final_state has shape `(32, 4)`.
whole_sequence_output, final_state = rnn(inputs)
```
"""
def __init__(
self,
units,
activation="tanh",
use_bias=True,
kernel_initializer="glorot_uniform",
recurrent_initializer="orthogonal",
bias_initializer="zeros",
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
seed=None,
**kwargs,
):
if units <= 0:
raise ValueError(
"Received an invalid value for argument `units`, "
f"expected a positive integer, got {units}."
)
super().__init__(**kwargs)
self.seed = seed
self.seed_generator = backend.random.SeedGenerator(seed)
self.units = units
self.activation = activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.recurrent_initializer = initializers.get(recurrent_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.recurrent_regularizer = regularizers.get(recurrent_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.recurrent_constraint = constraints.get(recurrent_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.dropout = min(1.0, max(0.0, dropout))
self.recurrent_dropout = min(1.0, max(0.0, recurrent_dropout))
self.state_size = self.units
self.output_size = self.units
def build(self, input_shape):
self.kernel = self.add_weight(
shape=(input_shape[-1], self.units),
name="kernel",
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
)
self.recurrent_kernel = self.add_weight(
shape=(self.units, self.units),
name="recurrent_kernel",
initializer=self.recurrent_initializer,
regularizer=self.recurrent_regularizer,
constraint=self.recurrent_constraint,
)
if self.use_bias:
self.bias = self.add_weight(
shape=(self.units,),
name="bias",
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
)
else:
self.bias = None
self.built = True
def call(self, sequence, states, training=False):
prev_output = states[0] if isinstance(states, (list, tuple)) else states
dp_mask = self.get_dropout_mask(sequence)
rec_dp_mask = self.get_recurrent_dropout_mask(prev_output)
if training and dp_mask is not None:
sequence *= dp_mask
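# Vanilla RNN update computed below:
# output = activation(x @ kernel + bias + h_prev @ recurrent_kernel)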
h = ops.matmul(sequence, self.kernel)
if self.bias is not None:
h += self.bias
if training and rec_dp_mask is not None:
prev_output *= rec_dp_mask
output = h + ops.matmul(prev_output, self.recurrent_kernel)
if self.activation is not None:
output = self.activation(output)
new_state = [output] if isinstance(states, (list, tuple)) else output
return output, new_state
def get_initial_state(self, batch_size=None):
return [
ops.zeros((batch_size, self.state_size), dtype=self.compute_dtype)
]
def get_config(self):
config = {
"units": self.units,
"activation": activations.serialize(self.activation),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"recurrent_initializer": initializers.serialize(
self.recurrent_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"recurrent_regularizer": regularizers.serialize(
self.recurrent_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"kernel_constraint": constraints.serialize(self.kernel_constraint),
"recurrent_constraint": constraints.serialize(
self.recurrent_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
"dropout": self.dropout,
"recurrent_dropout": self.recurrent_dropout,
"seed": self.seed,
}
base_config = super().get_config()
return {**base_config, **config}
@keras_export("keras.layers.SimpleRNN")
class SimpleRNN(RNN):
"""Fully-connected RNN where the output is to be fed back as the new input.
Args:
units: Positive integer, dimensionality of the output space.
activation: Activation function to use.
Default: hyperbolic tangent (`tanh`).
If you pass None, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean (default `True`), whether the layer uses
a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs. Default:
`"glorot_uniform"`.
recurrent_initializer: Initializer for the `recurrent_kernel`
weights matrix, used for the linear transformation of the recurrent
state. Default: `"orthogonal"`.
bias_initializer: Initializer for the bias vector. Default: `"zeros"`.
kernel_regularizer: Regularizer function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_regularizer: Regularizer function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_regularizer: Regularizer function applied to the bias vector.
Default: `None`.
activity_regularizer: Regularizer function applied to the output of the
layer (its "activation"). Default: `None`.
kernel_constraint: Constraint function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_constraint: Constraint function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_constraint: Constraint function applied to the bias vector.
Default: `None`.
dropout: Float between 0 and 1.
Fraction of the units to drop for the linear transformation
of the inputs. Default: 0.
recurrent_dropout: Float between 0 and 1.
Fraction of the units to drop for the linear transformation of the
recurrent state. Default: 0.
return_sequences: Boolean. Whether to return the last output
in the output sequence, or the full sequence. Default: `False`.
return_state: Boolean. Whether to return the last state
in addition to the output. Default: `False`.
go_backwards: Boolean (default: `False`).
If `True`, process the input sequence backwards and return the
reversed sequence.
stateful: Boolean (default: `False`). If `True`, the last state
for each sample at index i in a batch will be used as initial
state for the sample of index i in the following batch.
unroll: Boolean (default: `False`).
If `True`, the network will be unrolled,
else a symbolic loop will be used.
Unrolling can speed up an RNN,
although it tends to be more memory-intensive.
Unrolling is only suitable for short sequences.
Call arguments:
sequence: A 3D tensor, with shape `[batch, timesteps, feature]`.
mask: Binary tensor of shape `[batch, timesteps]` indicating whether
a given timestep should be masked. An individual `True` entry
indicates that the corresponding timestep should be utilized,
while a `False` entry indicates that the corresponding timestep
should be ignored.
training: Python boolean indicating whether the layer should behave in
training mode or in inference mode.
This argument is passed to the cell when calling it.
This is only relevant if `dropout` or `recurrent_dropout` is used.
initial_state: List of initial state tensors to be passed to the first
call of the cell.
Example:
```python
inputs = np.random.random((32, 10, 8))
simple_rnn = keras.layers.SimpleRNN(4)
output = simple_rnn(inputs) # The output has shape `(32, 4)`.
simple_rnn = keras.layers.SimpleRNN(
4, return_sequences=True, return_state=True
)
# whole_sequence_output has shape `(32, 10, 4)`.
# final_state has shape `(32, 4)`.
whole_sequence_output, final_state = simple_rnn(inputs)
```
"""
def __init__(
self,
units,
activation="tanh",
use_bias=True,
kernel_initializer="glorot_uniform",
recurrent_initializer="orthogonal",
bias_initializer="zeros",
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
return_sequences=False,
return_state=False,
go_backwards=False,
stateful=False,
unroll=False,
seed=None,
**kwargs,
):
cell = SimpleRNNCell(
units,
activation=activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
recurrent_initializer=recurrent_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
recurrent_regularizer=recurrent_regularizer,
bias_regularizer=bias_regularizer,
kernel_constraint=kernel_constraint,
recurrent_constraint=recurrent_constraint,
bias_constraint=bias_constraint,
dropout=dropout,
recurrent_dropout=recurrent_dropout,
seed=seed,
dtype=kwargs.get("dtype", None),
trainable=kwargs.get("trainable", True),
name="simple_rnn_cell",
)
super().__init__(
cell,
return_sequences=return_sequences,
return_state=return_state,
go_backwards=go_backwards,
stateful=stateful,
unroll=unroll,
**kwargs,
)
self.input_spec = [InputSpec(ndim=3)]
def call(self, sequences, initial_state=None, mask=None, training=False):
return super().call(
sequences, mask=mask, training=training, initial_state=initial_state
)
@property
def units(self):
return self.cell.units
@property
def activation(self):
return self.cell.activation
@property
def use_bias(self):
return self.cell.use_bias
@property
def kernel_initializer(self):
return self.cell.kernel_initializer
@property
def recurrent_initializer(self):
return self.cell.recurrent_initializer
@property
def bias_initializer(self):
return self.cell.bias_initializer
@property
def kernel_regularizer(self):
return self.cell.kernel_regularizer
@property
def recurrent_regularizer(self):
return self.cell.recurrent_regularizer
@property
def bias_regularizer(self):
return self.cell.bias_regularizer
@property
def kernel_constraint(self):
return self.cell.kernel_constraint
@property
def recurrent_constraint(self):
return self.cell.recurrent_constraint
@property
def bias_constraint(self):
return self.cell.bias_constraint
@property
def dropout(self):
return self.cell.dropout
@property
def recurrent_dropout(self):
return self.cell.recurrent_dropout
def get_config(self):
config = {
"units": self.units,
"activation": activations.serialize(self.activation),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"recurrent_initializer": initializers.serialize(
self.recurrent_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"recurrent_regularizer": regularizers.serialize(
self.recurrent_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"kernel_constraint": constraints.serialize(self.kernel_constraint),
"recurrent_constraint": constraints.serialize(
self.recurrent_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
"dropout": self.dropout,
"recurrent_dropout": self.recurrent_dropout,
}
base_config = super().get_config()
del base_config["cell"]
return {**base_config, **config}
@classmethod
def from_config(cls, config):
return cls(**config)
| keras/keras/layers/rnn/simple_rnn.py/0 | {
"file_path": "keras/keras/layers/rnn/simple_rnn.py",
"repo_id": "keras",
"token_count": 7513
} | 167 |
import enum
import pytest
from keras import backend
from keras import testing
from keras.legacy.saving import json_utils
if backend.backend() == "tensorflow":
import tensorflow as tf
class JsonUtilsTestAllBackends(testing.TestCase):
def test_encode_decode_tuple(self):
metadata = {"key1": (3, 5), "key2": [(1, (3, 4)), (1,)]}
string = json_utils.Encoder().encode(metadata)
loaded = json_utils.decode(string)
self.assertEqual(set(loaded.keys()), {"key1", "key2"})
self.assertAllEqual(loaded["key1"], (3, 5))
self.assertAllEqual(loaded["key2"], [(1, (3, 4)), (1,)])
def test_encode_decode_enum(self):
class Enum(enum.Enum):
CLASS_A = "a"
CLASS_B = "b"
config = {"key": Enum.CLASS_A, "key2": Enum.CLASS_B}
string = json_utils.Encoder().encode(config)
loaded = json_utils.decode(string)
self.assertAllEqual({"key": "a", "key2": "b"}, loaded)
def test_encode_decode_bytes(self):
b_string = b"abc"
json_string = json_utils.Encoder().encode(b_string)
loaded = json_utils.decode(json_string)
self.assertAllEqual(b_string, loaded)
@pytest.mark.skipif(
backend.backend() != "tensorflow",
reason="These JSON serialization tests are specific to TF components.",
)
class JsonUtilsTestTF(testing.TestCase):
def test_encode_decode_tensor_shape(self):
metadata = {
"key1": tf.TensorShape(None),
"key2": [tf.TensorShape([None]), tf.TensorShape([3, None, 5])],
}
string = json_utils.Encoder().encode(metadata)
loaded = json_utils.decode(string)
self.assertEqual(set(loaded.keys()), {"key1", "key2"})
self.assertEqual(loaded["key1"].rank, None)
self.assertAllEqual(loaded["key2"][0].as_list(), [None])
self.assertAllEqual(loaded["key2"][1].as_list(), [3, None, 5])
def test_encode_decode_type_spec(self):
spec = tf.TensorSpec((1, 5), tf.float32)
string = json_utils.Encoder().encode(spec)
loaded = json_utils.decode(string)
self.assertEqual(spec, loaded)
invalid_type_spec = {
"class_name": "TypeSpec",
"type_spec": "Invalid Type",
"serialized": None,
}
string = json_utils.Encoder().encode(invalid_type_spec)
with self.assertRaisesRegex(
ValueError, "No TypeSpec has been registered"
):
loaded = json_utils.decode(string)
def test_encode_decode_ragged_tensor(self):
x = tf.ragged.constant([[1.0, 2.0], [3.0]])
string = json_utils.Encoder().encode(x)
loaded = json_utils.decode(string)
self.assertAllClose(loaded.values, x.values)
def test_encode_decode_extension_type_tensor(self):
class MaskedTensor(tf.experimental.ExtensionType):
__name__ = "MaskedTensor"
values: tf.Tensor
mask: tf.Tensor
x = MaskedTensor(
values=[[1, 2, 3], [4, 5, 6]],
mask=[[True, True, False], [True, False, True]],
)
string = json_utils.Encoder().encode(x)
loaded = json_utils.decode(string)
self.assertAllClose(loaded.values, x.values)
self.assertAllClose(loaded.mask, x.mask)
| keras/keras/legacy/saving/json_utils_test.py/0 | {
"file_path": "keras/keras/legacy/saving/json_utils_test.py",
"repo_id": "keras",
"token_count": 1527
} | 168 |
from keras import backend
from keras import initializers
from keras import ops
from keras.api_export import keras_export
from keras.metrics.metric import Metric
@keras_export("keras.metrics.FBetaScore")
class FBetaScore(Metric):
"""Computes F-Beta score.
Formula:
```python
b2 = beta ** 2
f_beta_score = (1 + b2) * (precision * recall) / (precision * b2 + recall)
```
This is the weighted harmonic mean of precision and recall.
Its output range is `[0, 1]`. It works for both multi-class
and multi-label classification.
Args:
average: Type of averaging to be performed across per-class results
in the multi-class case.
Acceptable values are `None`, `"micro"`, `"macro"` and
`"weighted"`. Defaults to `None`.
If `None`, no averaging is performed and `result()` will return
the score for each class.
If `"micro"`, compute metrics globally by counting the total
true positives, false negatives and false positives.
If `"macro"`, compute metrics for each label,
and return their unweighted mean.
This does not take label imbalance into account.
If `"weighted"`, compute metrics for each label,
and return their average weighted by support
(the number of true instances for each label).
This alters `"macro"` to account for label imbalance.
It can result in a score that is not between precision and recall.
beta: Determines the weight given to recall
in the harmonic mean between precision and recall (see the pseudocode
equation above). Defaults to `1`.
threshold: Elements of `y_pred` greater than `threshold` are
converted to 1, and the rest to 0. If `threshold` is
`None`, the argmax of `y_pred` is converted to 1, and the rest to 0.
name: Optional. String name of the metric instance.
dtype: Optional. Data type of the metric result.
Returns:
F-Beta Score: float.
Example:
>>> metric = keras.metrics.FBetaScore(beta=2.0, threshold=0.5)
>>> y_true = np.array([[1, 1, 1],
... [1, 0, 0],
... [1, 1, 0]], np.int32)
>>> y_pred = np.array([[0.2, 0.6, 0.7],
... [0.2, 0.6, 0.6],
... [0.6, 0.8, 0.0]], np.float32)
>>> metric.update_state(y_true, y_pred)
>>> result = metric.result()
>>> result
[0.3846154 , 0.90909094, 0.8333334 ]
"""
def __init__(
self,
average=None,
beta=1.0,
threshold=None,
name="fbeta_score",
dtype=None,
):
super().__init__(name=name, dtype=dtype)
# Metric should be maximized during optimization.
self._direction = "up"
if average not in (None, "micro", "macro", "weighted"):
raise ValueError(
"Invalid `average` argument value. Expected one of: "
"{None, 'micro', 'macro', 'weighted'}. "
f"Received: average={average}"
)
if not isinstance(beta, float):
raise ValueError(
"Invalid `beta` argument value. "
"It should be a Python float. "
f"Received: beta={beta} of type '{type(beta)}'"
)
if beta <= 0.0:
raise ValueError(
"Invalid `beta` argument value. "
"It should be > 0. "
f"Received: beta={beta}"
)
if threshold is not None:
if not isinstance(threshold, float):
raise ValueError(
"Invalid `threshold` argument value. "
"It should be a Python float. "
f"Received: threshold={threshold} "
f"of type '{type(threshold)}'"
)
if threshold > 1.0 or threshold <= 0.0:
raise ValueError(
"Invalid `threshold` argument value. "
"It should verify 0 < threshold <= 1. "
f"Received: threshold={threshold}"
)
self.average = average
self.beta = beta
self.threshold = threshold
self.axis = None
self._built = False
if self.average != "micro":
self.axis = 0
def _build(self, y_true_shape, y_pred_shape):
if len(y_pred_shape) != 2 or len(y_true_shape) != 2:
raise ValueError(
"FBetaScore expects 2D inputs with shape "
"(batch_size, output_dim). Received input "
f"shapes: y_pred.shape={y_pred_shape} and "
f"y_true.shape={y_true_shape}."
)
if y_pred_shape[-1] is None or y_true_shape[-1] is None:
raise ValueError(
"FBetaScore expects 2D inputs with shape "
"(batch_size, output_dim), with output_dim fully "
"defined (not None). Received input "
f"shapes: y_pred.shape={y_pred_shape} and "
f"y_true.shape={y_true_shape}."
)
num_classes = y_pred_shape[-1]
if self.average != "micro":
init_shape = (num_classes,)
else:
init_shape = ()
def _add_zeros_variable(name):
return self.add_variable(
name=name,
shape=init_shape,
initializer=initializers.Zeros(),
dtype=self.dtype,
)
self.true_positives = _add_zeros_variable("true_positives")
self.false_positives = _add_zeros_variable("false_positives")
self.false_negatives = _add_zeros_variable("false_negatives")
self.intermediate_weights = _add_zeros_variable("intermediate_weights")
self._built = True
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = ops.convert_to_tensor(y_true, dtype=self.dtype)
y_pred = ops.convert_to_tensor(y_pred, dtype=self.dtype)
if not self._built:
self._build(y_true.shape, y_pred.shape)
if self.threshold is None:
threshold = ops.max(y_pred, axis=-1, keepdims=True)
# make sure [0, 0, 0] doesn't become [1, 1, 1]
# Use abs(x) > eps, instead of x != 0 to check for zero
y_pred = ops.logical_and(
y_pred >= threshold, ops.abs(y_pred) > 1e-9
)
else:
y_pred = y_pred > self.threshold
y_pred = ops.cast(y_pred, dtype=self.dtype)
y_true = ops.cast(y_true, dtype=self.dtype)
if sample_weight is not None:
sample_weight = ops.convert_to_tensor(
sample_weight, dtype=self.dtype
)
def _weighted_sum(val, sample_weight):
if sample_weight is not None:
val = ops.multiply(val, ops.expand_dims(sample_weight, 1))
return ops.sum(val, axis=self.axis)
self.true_positives.assign(
self.true_positives + _weighted_sum(y_pred * y_true, sample_weight)
)
self.false_positives.assign(
self.false_positives
+ _weighted_sum(y_pred * (1 - y_true), sample_weight)
)
self.false_negatives.assign(
self.false_negatives
+ _weighted_sum((1 - y_pred) * y_true, sample_weight)
)
self.intermediate_weights.assign(
self.intermediate_weights + _weighted_sum(y_true, sample_weight)
)
def result(self):
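# precision = TP / (TP + FP) and recall = TP / (TP + FN); epsilon guards
# against division by zero when a class has no predictions or no labels.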
precision = ops.divide(
self.true_positives,
self.true_positives + self.false_positives + backend.epsilon(),
)
recall = ops.divide(
self.true_positives,
self.true_positives + self.false_negatives + backend.epsilon(),
)
precision = ops.convert_to_tensor(precision, dtype=self.dtype)
recall = ops.convert_to_tensor(recall, dtype=self.dtype)
mul_value = precision * recall
add_value = ((self.beta**2) * precision) + recall
mean = ops.divide(mul_value, add_value + backend.epsilon())
f1_score = mean * (1 + (self.beta**2))
if self.average == "weighted":
weights = ops.divide(
self.intermediate_weights,
ops.sum(self.intermediate_weights) + backend.epsilon(),
)
f1_score = ops.sum(f1_score * weights)
elif self.average is not None: # [micro, macro]
f1_score = ops.mean(f1_score)
return f1_score
def get_config(self):
"""Returns the serializable config of the metric."""
config = {
"name": self.name,
"dtype": self.dtype,
"average": self.average,
"beta": self.beta,
"threshold": self.threshold,
}
base_config = super().get_config()
return {**base_config, **config}
def reset_state(self):
for v in self.variables:
v.assign(ops.zeros(v.shape, dtype=v.dtype))
@keras_export("keras.metrics.F1Score")
class F1Score(FBetaScore):
r"""Computes F-1 Score.
Formula:
```python
f1_score = 2 * (precision * recall) / (precision + recall)
```
This is the harmonic mean of precision and recall.
Its output range is `[0, 1]`. It works for both multi-class
and multi-label classification.
Args:
average: Type of averaging to be performed on data.
Acceptable values are `None`, `"micro"`, `"macro"`
and `"weighted"`. Defaults to `None`.
If `None`, no averaging is performed and `result()` will return
the score for each class.
If `"micro"`, compute metrics globally by counting the total
true positives, false negatives and false positives.
If `"macro"`, compute metrics for each label,
and return their unweighted mean.
This does not take label imbalance into account.
If `"weighted"`, compute metrics for each label,
and return their average weighted by support
(the number of true instances for each label).
This alters `"macro"` to account for label imbalance.
It can result in a score that is not between precision and recall.
threshold: Elements of `y_pred` greater than `threshold` are
converted to 1, and the rest to 0. If `threshold` is
`None`, the argmax of `y_pred` is converted to 1, and the rest to 0.
name: Optional. String name of the metric instance.
dtype: Optional. Data type of the metric result.
Returns:
F-1 Score: float.
Example:
>>> metric = keras.metrics.F1Score(threshold=0.5)
>>> y_true = np.array([[1, 1, 1],
... [1, 0, 0],
... [1, 1, 0]], np.int32)
>>> y_pred = np.array([[0.2, 0.6, 0.7],
... [0.2, 0.6, 0.6],
... [0.6, 0.8, 0.0]], np.float32)
>>> metric.update_state(y_true, y_pred)
>>> result = metric.result()
>>> result
array([0.5 , 0.8 , 0.6666667], dtype=float32)
"""
def __init__(
self,
average=None,
threshold=None,
name="f1_score",
dtype=None,
):
super().__init__(
average=average,
beta=1.0,
threshold=threshold,
name=name,
dtype=dtype,
)
def get_config(self):
base_config = super().get_config()
del base_config["beta"]
return base_config
| keras/keras/metrics/f_score_metrics.py/0 | {
"file_path": "keras/keras/metrics/f_score_metrics.py",
"repo_id": "keras",
"token_count": 5629
} | 169 |
import tree
from keras import testing
from keras.backend import KerasTensor
from keras.ops.symbolic_arguments import SymbolicArguments
class SymbolicArgumentsTest(testing.TestCase):
# Testing multiple args and empty kwargs
def test_args(self):
shape = (2, 3, 4)
a = KerasTensor(shape=shape)
b = KerasTensor(shape=shape)
args = SymbolicArguments(
(
a,
b,
),
{},
)
self.assertEqual(args.keras_tensors, [a, b])
self.assertEqual(args._flat_arguments, [a, b])
self.assertEqual(args._single_positional_tensor, None)
# Testing single arg and single position tensor
def test_args_single_arg(self):
shape = (2, 3, 4)
a = KerasTensor(shape=shape)
args = SymbolicArguments((a))
self.assertEqual(args.keras_tensors, [a])
self.assertEqual(args._flat_arguments, [a])
self.assertEqual(len(args.kwargs), 0)
self.assertEqual(isinstance(args.args[0], KerasTensor), True)
self.assertEqual(args._single_positional_tensor, a)
# Testing kwargs
def test_kwargs(self):
shape = (2, 3, 4)
a = KerasTensor(shape=shape)
b = KerasTensor(shape=shape)
c = KerasTensor(shape=shape)
args = SymbolicArguments(
(
a,
b,
),
{1: c},
)
self.assertEqual(args.keras_tensors, [a, b, c])
self.assertEqual(args._flat_arguments, [a, b, c])
self.assertEqual(args._single_positional_tensor, None)
# Testing conversion function with args and kwargs
def test_conversion_fn(self):
shape = (2, 3, 4)
a = KerasTensor(shape=shape)
b = KerasTensor(shape=shape)
c = KerasTensor(shape=shape)
sym_args = SymbolicArguments(
(
a,
b,
),
{1: c},
)
(value, _) = sym_args.convert(lambda x: x**2)
args1 = value[0][0]
self.assertIsInstance(args1, KerasTensor)
mapped_value = tree.map_structure(lambda x: x**2, a)
self.assertEqual(mapped_value.shape, args1.shape)
self.assertEqual(mapped_value.dtype, args1.dtype)
# Testing fill in function with single args only
def test_fill_in_single_arg(self):
shape = (2, 3, 4)
a = KerasTensor(shape=shape)
tensor_dict = {id(a): 3}
sym_args = SymbolicArguments((a))
# Call the method to be tested
result, _ = sym_args.fill_in(tensor_dict)
self.assertEqual(result, (3,))
# Testing fill in function with multiple args
def test_fill_in_multiple_arg(self):
shape = (2, 3, 4)
a = KerasTensor(shape=shape)
b = KerasTensor(shape=shape)
tensor_dict = {id(b): 2}
sym_args = SymbolicArguments((a, b))
# Call the method to be tested
result, _ = sym_args.fill_in(tensor_dict)
self.assertEqual(result, ((a, 2),))
# Testing fill in function for args and kwargs
def test_fill_in(self):
shape1 = (2, 3, 4)
shape2 = (3, 2, 4)
a = KerasTensor(shape=shape1)
b = KerasTensor(shape=shape2)
c = KerasTensor(shape=shape2)
dictionary = {id(a): 3, id(c): 2}
sym_args = SymbolicArguments(
(
a,
b,
),
{1: c},
)
(values, _) = sym_args.fill_in(dictionary)
self.assertEqual(values, ((3, b), {1: 2}))
| keras/keras/ops/symbolic_arguments_test.py/0 | {
"file_path": "keras/keras/ops/symbolic_arguments_test.py",
"repo_id": "keras",
"token_count": 1817
} | 170 |
# flake8: noqa
import numpy as np
from keras import backend
from keras import testing
from keras.optimizers.ftrl import Ftrl
class FtrlTest(testing.TestCase):
def test_config(self):
optimizer = Ftrl(
learning_rate=0.05,
learning_rate_power=-0.2,
initial_accumulator_value=0.4,
l1_regularization_strength=0.05,
l2_regularization_strength=0.15,
l2_shrinkage_regularization_strength=0.01,
beta=0.3,
)
self.run_class_serialization_test(optimizer)
def test_single_step(self):
optimizer = Ftrl(learning_rate=0.5)
grads = np.array([1.0, 6.0, 7.0, 2.0])
vars = backend.Variable([1.0, 2.0, 3.0, 4.0])
optimizer.apply_gradients(zip([grads], [vars]))
self.assertAllClose(
vars, [0.2218, 1.3954, 2.3651, 2.8814], rtol=1e-4, atol=1e-4
)
def test_correctness_with_golden(self):
optimizer = Ftrl(
learning_rate=0.05,
learning_rate_power=-0.2,
initial_accumulator_value=0.4,
l1_regularization_strength=0.05,
l2_regularization_strength=0.15,
l2_shrinkage_regularization_strength=0.01,
beta=0.3,
)
x = backend.Variable(np.ones([10]))
grads = np.arange(0.1, 1.1, 0.1)
first_grads = np.full((10,), 0.01)
# fmt: off
golden = np.array(
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[-0.0034, -0.0077, -0.0118, -0.0157, -0.0194, -0.023, -0.0263, -0.0294, -0.0325, -0.0354],
[-0.0078, -0.0162, -0.0242, -0.0317, -0.0387, -0.0454, -0.0516, -0.0575, -0.0631, -0.0685],
[-0.0121, -0.0246, -0.0363, -0.0472, -0.0573, -0.0668, -0.0757, -0.0842, -0.0922, -0.0999],
[-0.0164, -0.0328, -0.0481, -0.0623, -0.0753, -0.0875, -0.099, -0.1098, -0.1201, -0.1299]]
)
# fmt: on
optimizer.apply_gradients(zip([first_grads], [x]))
for i in range(5):
self.assertAllClose(x, golden[i], rtol=5e-4, atol=5e-4)
optimizer.apply_gradients(zip([grads], [x]))
def test_clip_norm(self):
optimizer = Ftrl(clipnorm=1)
grad = [np.array([100.0, 100.0])]
clipped_grad = optimizer._clip_gradients(grad)
self.assertAllClose(clipped_grad[0], [2**0.5 / 2, 2**0.5 / 2])
def test_clip_value(self):
optimizer = Ftrl(clipvalue=1)
grad = [np.array([100.0, 100.0])]
clipped_grad = optimizer._clip_gradients(grad)
self.assertAllClose(clipped_grad[0], [1.0, 1.0])
| keras/keras/optimizers/ftrl_test.py/0 | {
"file_path": "keras/keras/optimizers/ftrl_test.py",
"repo_id": "keras",
"token_count": 1463
} | 171 |
# flake8: noqa
import numpy as np
from keras import backend
from keras import ops
from keras import testing
from keras.optimizers.sgd import SGD
class SGDTest(testing.TestCase):
def test_config(self):
optimizer = SGD(
learning_rate=0.5,
momentum=0.06,
nesterov=True,
weight_decay=0.004,
)
self.run_class_serialization_test(optimizer)
def test_single_step(self):
optimizer = SGD(learning_rate=0.5)
self.assertEqual(len(optimizer.variables), 2)
grads = ops.array([1.0, 6.0, 7.0, 2.0])
vars = backend.Variable([1.0, 2.0, 3.0, 4.0])
optimizer.build([vars])
optimizer.apply_gradients(zip([grads], [vars]))
self.assertAllClose(vars, [0.5, -1.0, -0.5, 3.0], rtol=1e-4, atol=1e-4)
self.assertEqual(len(optimizer.variables), 2)
self.assertEqual(optimizer.variables[0], 1)
self.assertEqual(optimizer.variables[1], 0.5)
def test_weight_decay(self):
grads, var1, var2, var3 = (
ops.zeros(()),
backend.Variable(2.0),
backend.Variable(2.0, name="exclude"),
backend.Variable(2.0),
)
optimizer_1 = SGD(learning_rate=1.0, weight_decay=0.004)
optimizer_1.apply_gradients(zip([grads], [var1]))
optimizer_2 = SGD(learning_rate=1.0, weight_decay=0.004)
optimizer_2.exclude_from_weight_decay(var_names=["exclude"])
optimizer_2.apply_gradients(zip([grads, grads], [var1, var2]))
optimizer_3 = SGD(learning_rate=1.0, weight_decay=0.004)
optimizer_3.exclude_from_weight_decay(var_list=[var3])
optimizer_3.apply_gradients(zip([grads, grads], [var1, var3]))
self.assertAlmostEqual(var1.numpy(), 1.9760959, decimal=6)
self.assertAlmostEqual(var2.numpy(), 2.0, decimal=6)
self.assertAlmostEqual(var3.numpy(), 2.0, decimal=6)
def test_correctness_with_golden(self):
optimizer = SGD(nesterov=True)
x = backend.Variable(np.ones([10]))
grads = ops.arange(0.1, 1.1, 0.1)
first_grads = ops.full((10,), 0.01)
# fmt: off
golden = np.array(
[[0.9999, 0.9999, 0.9999, 0.9999, 0.9999, 0.9999, 0.9999, 0.9999,
0.9999, 0.9999], [0.9989, 0.9979, 0.9969, 0.9959, 0.9949, 0.9939,
0.9929, 0.9919, 0.9909, 0.9899], [0.9979, 0.9959, 0.9939, 0.9919,
0.9899, 0.9879, 0.9859, 0.9839, 0.9819, 0.9799], [0.9969, 0.9939,
0.9909, 0.9879, 0.9849, 0.9819, 0.9789, 0.9759, 0.9729, 0.9699],
[0.9959, 0.9919, 0.9879, 0.9839, 0.9799, 0.9759, 0.9719, 0.9679,
0.9639, 0.9599]]
)
# fmt: on
optimizer.apply_gradients(zip([first_grads], [x]))
for i in range(5):
self.assertAllClose(x, golden[i], rtol=5e-4, atol=5e-4)
optimizer.apply_gradients(zip([grads], [x]))
def test_clip_norm(self):
optimizer = SGD(clipnorm=1)
grad = [np.array([100.0, 100.0])]
clipped_grad = optimizer._clip_gradients(grad)
self.assertAllClose(clipped_grad[0], [2**0.5 / 2, 2**0.5 / 2])
def test_clip_value(self):
optimizer = SGD(clipvalue=1)
grad = [np.array([100.0, 100.0])]
clipped_grad = optimizer._clip_gradients(grad)
self.assertAllClose(clipped_grad[0], [1.0, 1.0])
| keras/keras/optimizers/sgd_test.py/0 | {
"file_path": "keras/keras/optimizers/sgd_test.py",
"repo_id": "keras",
"token_count": 1778
} | 172 |
"""Tests for Keras python-based idempotent saving functions."""
import json
import os
import warnings
import zipfile
from pathlib import Path
from unittest import mock
import numpy as np
import pytest
import keras
from keras import ops
from keras import testing
from keras.saving import saving_lib
@keras.saving.register_keras_serializable(package="my_custom_package")
class MyDense(keras.layers.Layer):
def __init__(self, units, **kwargs):
super().__init__(**kwargs)
self.units = units
self.nested_layer = keras.layers.Dense(self.units, name="dense")
def build(self, input_shape):
self.additional_weights = [
self.add_weight(
shape=(),
name="my_additional_weight",
initializer="ones",
trainable=True,
),
self.add_weight(
shape=(),
name="my_additional_weight_2",
initializer="ones",
trainable=True,
),
]
self.weights_in_dict = {
"my_weight": self.add_weight(
shape=(),
name="my_dict_weight",
initializer="ones",
trainable=True,
),
}
self.nested_layer.build(input_shape)
def call(self, inputs):
return self.nested_layer(inputs)
def two(self):
return 2
ASSETS_DATA = "These are my assets"
VARIABLES_DATA = np.random.random((10,))
@keras.saving.register_keras_serializable(package="my_custom_package")
class LayerWithCustomSaving(MyDense):
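"""Layer exercising the custom asset/variable saving hooks.

`save_assets`/`load_assets` and `save_own_variables`/`load_own_variables`
are the hooks that `saving_lib` invokes when writing and reading the
`.keras` archive.
"""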
def build(self, input_shape):
self.assets = ASSETS_DATA
self.stored_variables = VARIABLES_DATA
return super().build(input_shape)
def save_assets(self, inner_path):
with open(os.path.join(inner_path, "assets.txt"), "w") as f:
f.write(self.assets)
def save_own_variables(self, store):
store["variables"] = self.stored_variables
def load_assets(self, inner_path):
with open(os.path.join(inner_path, "assets.txt"), "r") as f:
text = f.read()
self.assets = text
def load_own_variables(self, store):
self.stored_variables = np.array(store["variables"])
@keras.saving.register_keras_serializable(package="my_custom_package")
class CustomModelX(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.dense1 = MyDense(1, name="my_dense_1")
self.dense2 = MyDense(1, name="my_dense_2")
def call(self, inputs):
out = self.dense1(inputs)
return self.dense2(out)
def one(self):
return 1
@keras.saving.register_keras_serializable(package="my_custom_package")
class ModelWithCustomSaving(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.custom_dense = LayerWithCustomSaving(1)
def call(self, inputs):
return self.custom_dense(inputs)
@keras.saving.register_keras_serializable(package="my_custom_package")
class CompileOverridingModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.dense1 = MyDense(1)
def compile(self, *args, **kwargs):
super().compile(*args, **kwargs)
def call(self, inputs):
return self.dense1(inputs)
@keras.saving.register_keras_serializable(package="my_custom_package")
class CompileOverridingSequential(keras.Sequential):
def compile(self, *args, **kwargs):
super().compile(*args, **kwargs)
@keras.saving.register_keras_serializable(package="my_custom_package")
class SubclassFunctional(keras.Model):
"""Subclassed functional identical to `_get_basic_functional_model`."""
def __init__(self, **kwargs):
inputs = keras.Input(shape=(4,), batch_size=2)
dense = keras.layers.Dense(1, name="first_dense")
x = dense(inputs)
outputs = keras.layers.Dense(1, name="second_dense")(x)
super().__init__(inputs=inputs, outputs=outputs, **kwargs)
# Attrs for layers in the functional graph should not affect saving
self.layer_attr = dense
@property
def layer_property(self):
# Properties for layers in the functional graph should not affect saving
return self.layer_attr
def get_config(self):
return {}
@classmethod
def from_config(cls, config):
return cls(**config)
@keras.saving.register_keras_serializable(package="my_custom_package")
def my_mean_squared_error(y_true, y_pred):
"""Identical to built-in `mean_squared_error`, but as a custom fn."""
return ops.mean(ops.square(y_pred - y_true), axis=-1)
def _get_subclassed_model(compile=True):
subclassed_model = CustomModelX(name="custom_model_x")
if compile:
subclassed_model.compile(
optimizer="adam",
loss=my_mean_squared_error,
metrics=[keras.metrics.Hinge(), "mse"],
)
return subclassed_model
def _get_custom_sequential_model(compile=True):
sequential_model = keras.Sequential(
[MyDense(1), MyDense(1)], name="sequential"
)
if compile:
sequential_model.compile(
optimizer="adam",
loss=my_mean_squared_error,
metrics=[keras.metrics.Hinge(), "mse"],
)
return sequential_model
def _get_basic_sequential_model(compile=True):
sequential_model = keras.Sequential(
[
keras.layers.Dense(1, name="dense_1"),
keras.layers.Dense(1, name="dense_2"),
],
name="sequential",
)
if compile:
sequential_model.compile(
optimizer="adam",
loss=my_mean_squared_error,
metrics=[keras.metrics.Hinge(), "mse"],
)
return sequential_model
def _get_custom_functional_model(compile=True):
inputs = keras.Input(shape=(4,), batch_size=2)
x = MyDense(1, name="first_dense")(inputs)
outputs = MyDense(1, name="second_dense")(x)
functional_model = keras.Model(inputs, outputs)
if compile:
functional_model.compile(
optimizer="adam",
loss=my_mean_squared_error,
metrics=[keras.metrics.Hinge(), "mse"],
)
return functional_model
def _get_basic_functional_model(compile=True):
inputs = keras.Input(shape=(4,), batch_size=2)
x = keras.layers.Dense(1, name="first_dense")(inputs)
outputs = keras.layers.Dense(1, name="second_dense")(x)
functional_model = keras.Model(inputs, outputs)
if compile:
functional_model.compile(
optimizer="adam",
loss=my_mean_squared_error,
metrics=[keras.metrics.Hinge(), "mse"],
)
return functional_model
def _get_subclassed_functional_model(compile=True):
functional_model = SubclassFunctional()
if compile:
functional_model.compile(
optimizer="adam",
loss=my_mean_squared_error,
metrics=[keras.metrics.Hinge(), "mse"],
)
return functional_model
@pytest.mark.requires_trainable_backend
class SavingTest(testing.TestCase):
def _test_inference_after_instantiation(self, model):
x_ref = np.random.random((2, 4))
y_ref = model(x_ref)
temp_filepath = os.path.join(self.get_temp_dir(), "my_model.keras")
model.save(temp_filepath)
loaded_model = saving_lib.load_model(temp_filepath)
self.assertFalse(model.compiled)
for w_ref, w in zip(model.variables, loaded_model.variables):
self.assertAllClose(w_ref, w)
self.assertAllClose(y_ref, loaded_model(x_ref))
def test_inference_after_instantiation_subclassed(self):
model = _get_subclassed_model(compile=False)
self._test_inference_after_instantiation(model)
def test_inference_after_instantiation_basic_sequential(self):
model = _get_basic_sequential_model(compile=False)
self._test_inference_after_instantiation(model)
def test_inference_after_instantiation_basic_functional(self):
model = _get_basic_functional_model(compile=False)
self._test_inference_after_instantiation(model)
def test_inference_after_instantiation_custom_sequential(self):
model = _get_custom_sequential_model(compile=False)
self._test_inference_after_instantiation(model)
def test_inference_after_instantiation_custom_functional(self):
model = _get_custom_functional_model(compile=False)
self._test_inference_after_instantiation(model)
def test_inference_after_instantiation_subclassed_functional(self):
model = _get_subclassed_functional_model(compile=False)
self._test_inference_after_instantiation(model)
def _test_compile_preserved(self, model):
x_ref = np.random.random((2, 4))
y_ref = np.random.random((2, 1))
model.fit(x_ref, y_ref)
out_ref = model(x_ref)
ref_metrics = model.evaluate(x_ref, y_ref)
temp_filepath = os.path.join(self.get_temp_dir(), "my_model.keras")
model.save(temp_filepath)
loaded_model = saving_lib.load_model(temp_filepath)
self.assertTrue(model.compiled)
self.assertTrue(loaded_model.built)
for w_ref, w in zip(model.variables, loaded_model.variables):
self.assertAllClose(w_ref, w)
self.assertAllClose(out_ref, loaded_model(x_ref))
self.assertEqual(
model.optimizer.__class__, loaded_model.optimizer.__class__
)
self.assertEqual(
model.optimizer.get_config(), loaded_model.optimizer.get_config()
)
for w_ref, w in zip(
model.optimizer.variables, loaded_model.optimizer.variables
):
self.assertAllClose(w_ref, w)
new_metrics = loaded_model.evaluate(x_ref, y_ref)
for ref_m, m in zip(ref_metrics, new_metrics):
self.assertAllClose(ref_m, m)
def test_compile_preserved_subclassed(self):
model = _get_subclassed_model(compile=True)
self._test_compile_preserved(model)
def test_compile_preserved_basic_sequential(self):
model = _get_basic_sequential_model(compile=True)
self._test_compile_preserved(model)
def test_compile_preserved_custom_sequential(self):
model = _get_custom_sequential_model(compile=True)
self._test_compile_preserved(model)
def test_compile_preserved_basic_functional(self):
model = _get_basic_functional_model(compile=True)
self._test_compile_preserved(model)
def test_compile_preserved_custom_functional(self):
model = _get_custom_functional_model(compile=True)
self._test_compile_preserved(model)
def test_compile_preserved_subclassed_functional(self):
model = _get_subclassed_functional_model(compile=True)
self._test_compile_preserved(model)
def test_saving_preserve_unbuilt_state(self):
temp_filepath = os.path.join(self.get_temp_dir(), "my_model.keras")
subclassed_model = CustomModelX()
subclassed_model.save(temp_filepath)
loaded_model = saving_lib.load_model(temp_filepath)
self.assertEqual(subclassed_model.compiled, loaded_model.compiled)
self.assertFalse(subclassed_model.built)
self.assertFalse(loaded_model.built)
def test_saved_module_paths_and_class_names(self):
temp_filepath = os.path.join(self.get_temp_dir(), "my_model.keras")
subclassed_model = _get_subclassed_model()
x = np.random.random((100, 32))
y = np.random.random((100, 1))
subclassed_model.fit(x, y, epochs=1)
subclassed_model.save(temp_filepath)
with zipfile.ZipFile(temp_filepath, "r") as z:
with z.open(saving_lib._CONFIG_FILENAME, "r") as c:
config_json = c.read()
config_dict = json.loads(config_json)
self.assertEqual(
config_dict["registered_name"], "my_custom_package>CustomModelX"
)
self.assertEqual(
config_dict["compile_config"]["optimizer"],
"adam",
)
self.assertEqual(
config_dict["compile_config"]["loss"]["config"],
"my_mean_squared_error",
)
def test_saving_custom_assets_and_variables(self):
temp_filepath = os.path.join(self.get_temp_dir(), "my_model.keras")
model = ModelWithCustomSaving()
model.compile(
optimizer="adam",
loss="mse",
)
x = np.random.random((100, 32))
y = np.random.random((100, 1))
model.fit(x, y, epochs=1)
# Assert that the archive has not been saved.
self.assertFalse(os.path.exists(temp_filepath))
model.save(temp_filepath)
loaded_model = saving_lib.load_model(temp_filepath)
self.assertEqual(loaded_model.custom_dense.assets, ASSETS_DATA)
self.assertEqual(
loaded_model.custom_dense.stored_variables.tolist(),
VARIABLES_DATA.tolist(),
)
def _test_compile_overridden_warnings(self, model_type):
temp_filepath = os.path.join(self.get_temp_dir(), "my_model.keras")
model = (
CompileOverridingModel()
if model_type == "subclassed"
else CompileOverridingSequential(
[keras.layers.Embedding(4, 1), MyDense(1), MyDense(1)]
)
)
model.compile("sgd", "mse")
model.save(temp_filepath)
with mock.patch.object(warnings, "warn") as mock_warn:
saving_lib.load_model(temp_filepath)
if not mock_warn.call_args_list:
raise AssertionError("Did not warn.")
self.assertIn(
"`compile()` was not called as part of model loading "
"because the model's `compile()` method is custom. ",
mock_warn.call_args_list[0][0][0],
)
def test_compile_overridden_warnings_sequential(self):
self._test_compile_overridden_warnings("sequential")
def test_compile_overridden_warnings_subclassed(self):
self._test_compile_overridden_warnings("subclassed")
def test_metadata(self):
temp_filepath = Path(
os.path.join(self.get_temp_dir(), "my_model.keras")
)
model = CompileOverridingModel()
model.save(temp_filepath)
with zipfile.ZipFile(temp_filepath, "r") as z:
with z.open(saving_lib._METADATA_FILENAME, "r") as c:
metadata_json = c.read()
metadata = json.loads(metadata_json)
self.assertIn("keras_version", metadata)
self.assertIn("date_saved", metadata)
# def test_gfile_copy_local_called(self):
# temp_filepath = Path(
# os.path.join(self.get_temp_dir(), "my_model.keras")
# )
# model = CompileOverridingModel()
# with mock.patch(
# "re.match", autospec=True
# ) as mock_re_match, mock.patch(
# "tensorflow.compat.v2.io.file_utils.copy", autospec=True
# ) as mock_copy:
# # Mock Remote Path check to true to test gfile copy logic
# mock_re_match.return_value = True
# model.save(temp_filepath)
# mock_re_match.assert_called()
# mock_copy.assert_called()
# self.assertIn(str(temp_filepath), mock_re_match.call_args.args)
# self.assertIn(str(temp_filepath), mock_copy.call_args.args)
def test_save_load_weights_only(self):
temp_filepath = Path(
os.path.join(self.get_temp_dir(), "mymodel.weights.h5")
)
model = _get_basic_functional_model()
ref_input = np.random.random((2, 4))
ref_output = model.predict(ref_input)
saving_lib.save_weights_only(model, temp_filepath)
model = _get_basic_functional_model()
saving_lib.load_weights_only(model, temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
# Test with Model method
model = _get_basic_functional_model()
model.load_weights(temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
def test_load_weights_only_with_keras_file(self):
# Test loading weights from whole saved model
temp_filepath = Path(os.path.join(self.get_temp_dir(), "mymodel.keras"))
model = _get_basic_functional_model()
ref_input = np.random.random((2, 4))
ref_output = model.predict(ref_input)
saving_lib.save_model(model, temp_filepath)
model = _get_basic_functional_model()
saving_lib.load_weights_only(model, temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
# Test with Model method
model = _get_basic_functional_model()
model.load_weights(temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
def test_save_weights_subclassed_functional(self):
# The subclassed and basic functional model should have the same
# weights structure.
temp_filepath = Path(
os.path.join(self.get_temp_dir(), "mymodel.weights.h5")
)
model = _get_basic_functional_model()
ref_input = np.random.random((2, 4))
ref_output = model.predict(ref_input)
# Test saving basic, loading subclassed.
saving_lib.save_weights_only(model, temp_filepath)
model = _get_subclassed_functional_model()
saving_lib.load_weights_only(model, temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
# Test saving subclassed, loading basic.
saving_lib.save_weights_only(model, temp_filepath)
model = _get_basic_functional_model()
saving_lib.load_weights_only(model, temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
def test_compile_arg(self):
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
model = _get_basic_functional_model()
model.compile("sgd", "mse")
model.fit(np.random.random((2, 4)), np.random.random((2, 1)))
saving_lib.save_model(model, temp_filepath)
model = saving_lib.load_model(temp_filepath)
self.assertEqual(model.compiled, True)
model = saving_lib.load_model(temp_filepath, compile=False)
self.assertEqual(model.compiled, False)
# def test_overwrite(self):
# temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
# model = _get_basic_functional_model()
# model.save(temp_filepath)
# model.save(temp_filepath, overwrite=True)
# with self.assertRaises(EOFError):
# model.save(temp_filepath, overwrite=False)
# temp_filepath = os.path.join(
# self.get_temp_dir(), "mymodel.weights.h5"
# )
# model = _get_basic_functional_model()
# model.save_weights(temp_filepath)
# model.save_weights(temp_filepath, overwrite=True)
# with self.assertRaises(EOFError):
# model.save_weights(temp_filepath, overwrite=False)
def test_partial_load(self):
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
original_model = keras.Sequential(
[
keras.Input(shape=(3,), batch_size=2),
keras.layers.Dense(4),
keras.layers.Dense(5),
]
)
original_model.save(temp_filepath)
# Test with a model that has a differently shaped layer
new_model = keras.Sequential(
[
keras.Input(shape=(3,), batch_size=2),
keras.layers.Dense(4),
keras.layers.Dense(6),
]
)
new_layer_kernel_value = np.array(new_model.layers[1].kernel)
with self.assertRaisesRegex(ValueError, "must match"):
# Doesn't work by default
new_model.load_weights(temp_filepath)
# Now it works
new_model.load_weights(temp_filepath, skip_mismatch=True)
ref_weights = original_model.layers[0].get_weights()
new_weights = new_model.layers[0].get_weights()
self.assertEqual(len(ref_weights), len(new_weights))
for ref_w, w in zip(ref_weights, new_weights):
self.assertAllClose(ref_w, w)
self.assertAllClose(
np.array(new_model.layers[1].kernel), new_layer_kernel_value
)
# Test with a model that has a new layer at the end
new_model = keras.Sequential(
[
keras.Input(shape=(3,), batch_size=2),
keras.layers.Dense(4),
keras.layers.Dense(5),
keras.layers.Dense(5),
]
)
new_layer_kernel_value = np.array(new_model.layers[2].kernel)
with self.assertRaisesRegex(ValueError, "received 0 variables"):
# Doesn't work by default
new_model.load_weights(temp_filepath)
# Now it works
new_model.load_weights(temp_filepath, skip_mismatch=True)
for layer_index in [0, 1]:
ref_weights = original_model.layers[layer_index].get_weights()
new_weights = new_model.layers[layer_index].get_weights()
self.assertEqual(len(ref_weights), len(new_weights))
for ref_w, w in zip(ref_weights, new_weights):
self.assertAllClose(ref_w, w)
self.assertAllClose(
np.array(new_model.layers[2].kernel), new_layer_kernel_value
)
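# SavingAPITest exercises the public entry points (keras.saving.save_model /
# load_model and Model.save / Model.load_weights) rather than saving_lib
# directly: argument validation, file-extension handling, the legacy `.h5`
# path, `safe_mode` for Lambda layers, and Normalization layer state.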
@pytest.mark.requires_trainable_backend
class SavingAPITest(testing.TestCase):
def test_saving_api_errors(self):
from keras.saving import saving_api
model = _get_basic_functional_model()
# Saving API errors
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel")
with self.assertRaisesRegex(ValueError, "argument is deprecated"):
saving_api.save_model(model, temp_filepath, save_format="keras")
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.notkeras")
with self.assertRaisesRegex(ValueError, "Invalid filepath extension"):
saving_api.save_model(model, temp_filepath)
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
with self.assertRaisesRegex(ValueError, "are not supported"):
saving_api.save_model(model, temp_filepath, invalid_arg="hello")
# Loading API errors
temp_filepath = os.path.join(self.get_temp_dir(), "non_existent.keras")
with self.assertRaisesRegex(
ValueError, "Please ensure the file is an accessible"
):
_ = saving_api.load_model(temp_filepath)
temp_filepath = os.path.join(self.get_temp_dir(), "my_saved_model")
with self.assertRaisesRegex(ValueError, "File format not supported"):
_ = saving_api.load_model(temp_filepath)
def test_model_api_endpoint(self):
temp_filepath = Path(os.path.join(self.get_temp_dir(), "mymodel.keras"))
model = _get_basic_functional_model()
ref_input = np.random.random((2, 4))
ref_output = model.predict(ref_input)
model.save(temp_filepath)
model = keras.saving.load_model(temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
def test_model_api_endpoint_h5(self):
temp_filepath = Path(os.path.join(self.get_temp_dir(), "mymodel.h5"))
model = _get_basic_functional_model()
ref_input = np.random.random((2, 4))
ref_output = model.predict(ref_input)
model.save(temp_filepath)
model = keras.saving.load_model(temp_filepath)
self.assertAllClose(model.predict(ref_input), ref_output, atol=1e-6)
def test_model_api_errors(self):
model = _get_basic_functional_model()
# Saving API errors
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel")
with self.assertRaisesRegex(ValueError, "argument is deprecated"):
model.save(temp_filepath, save_format="keras")
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.notkeras")
with self.assertRaisesRegex(ValueError, "Invalid filepath extension"):
model.save(temp_filepath)
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
with self.assertRaisesRegex(ValueError, "are not supported"):
model.save(temp_filepath, invalid_arg="hello")
def test_safe_mode(self):
temp_filepath = os.path.join(self.get_temp_dir(), "unsafe_model.keras")
model = keras.Sequential(
[
keras.Input(shape=(3,)),
keras.layers.Lambda(lambda x: x * 2),
]
)
model.save(temp_filepath)
with self.assertRaisesRegex(ValueError, "Deserializing it is unsafe"):
model = saving_lib.load_model(temp_filepath)
model = saving_lib.load_model(temp_filepath, safe_mode=False)
def test_normalization_kpl(self):
# With adapt
temp_filepath = os.path.join(self.get_temp_dir(), "norm_model.keras")
model = keras.Sequential(
[
keras.Input(shape=(3,)),
keras.layers.Normalization(),
]
)
data = np.random.random((3, 3))
model.layers[0].adapt(data)
ref_out = model(data)
model.save(temp_filepath)
model = saving_lib.load_model(temp_filepath)
out = model(data)
self.assertAllClose(ref_out, out, atol=1e-6)
# Without adapt
model = keras.Sequential(
[
keras.Input(shape=(3,)),
keras.layers.Normalization(
mean=np.random.random((3,)),
variance=np.random.random((3,)),
),
]
)
ref_out = model(data)
model.save(temp_filepath)
model = saving_lib.load_model(temp_filepath)
out = model(data)
self.assertAllClose(ref_out, out, atol=1e-6)
# This class is properly registered with a `get_config()` method.
# However, since it does not subclass keras.layers.Layer, it lacks
# `from_config()` for deserialization.
@keras.saving.register_keras_serializable()
class GrowthFactor:
def __init__(self, factor):
self.factor = factor
def __call__(self, inputs):
return inputs * self.factor
def get_config(self):
return {"factor": self.factor}
@keras.saving.register_keras_serializable(package="Complex")
class FactorLayer(keras.layers.Layer):
def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = factor
def call(self, x):
return x * self.factor
def get_config(self):
return {"factor": self.factor}
# This custom model does not explicitly deserialize the layers it includes
# in its `get_config`. Explicit deserialization in a `from_config` override
# or `__init__` is needed here, or an error will be thrown at loading time.
@keras.saving.register_keras_serializable(package="Complex")
class ComplexModel(keras.layers.Layer):
def __init__(self, first_layer, second_layer=None, **kwargs):
super().__init__(**kwargs)
self.first_layer = first_layer
if second_layer is not None:
self.second_layer = second_layer
else:
self.second_layer = keras.layers.Dense(8)
def get_config(self):
config = super().get_config()
config.update(
{
"first_layer": self.first_layer,
"second_layer": self.second_layer,
}
)
return config
def call(self, inputs):
return self.first_layer(self.second_layer(inputs))
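# A hypothetical fix, not part of the original file: ComplexModel could restore
# its nested layers explicitly in a `from_config()` override (sketch):
# @classmethod
# def from_config(cls, config):
#     config["first_layer"] = keras.saving.deserialize_keras_object(
#         config["first_layer"]
#     )
#     config["second_layer"] = keras.saving.deserialize_keras_object(
#         config["second_layer"]
#     )
#     return cls(**config)
# Without it, loading raises the "are explicitly deserialized" TypeError
# checked in test_complex_model_without_explicit_deserialization below.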
class SavingBattleTest(testing.TestCase):
def test_custom_object_without_from_config(self):
temp_filepath = os.path.join(
self.get_temp_dir(), "custom_fn_model.keras"
)
inputs = keras.Input(shape=(4, 4))
outputs = keras.layers.Dense(1, activation=GrowthFactor(0.5))(inputs)
model = keras.Model(inputs, outputs)
model.save(temp_filepath)
with self.assertRaisesRegex(
TypeError, "Unable to reconstruct an instance"
):
_ = saving_lib.load_model(temp_filepath)
def test_complex_model_without_explicit_deserialization(self):
temp_filepath = os.path.join(self.get_temp_dir(), "complex_model.keras")
inputs = keras.Input((32,))
outputs = ComplexModel(first_layer=FactorLayer(0.5))(inputs)
model = keras.Model(inputs, outputs)
model.save(temp_filepath)
with self.assertRaisesRegex(TypeError, "are explicitly deserialized"):
_ = saving_lib.load_model(temp_filepath)
def test_redefinition_of_trackable(self):
"""Test that a trackable can be aliased under a new name."""
class NormalModel(keras.Model):
def __init__(self):
super().__init__()
self.dense = keras.layers.Dense(3)
def call(self, x):
return self.dense(x)
class WeirdModel(keras.Model):
def __init__(self):
super().__init__()
# The `a_dense` attribute will be traversed first,
# but "a_dense" isn't in the saved file
# generated by NormalModel (which saved the layer under "dense").
self.a_dense = keras.layers.Dense(3)
@property
def dense(self):
return self.a_dense
def call(self, x):
return self.dense(x)
temp_filepath = "normal_model.weights.h5"
model_a = NormalModel()
model_a(np.random.random((2, 2)))
model_a.save_weights(temp_filepath)
model_b = WeirdModel()
model_b(np.random.random((2, 2)))
model_b.load_weights(temp_filepath)
self.assertAllClose(
model_a.dense.kernel.numpy(), model_b.dense.kernel.numpy()
)
| keras/keras/saving/saving_lib_test.py/0 | {
"file_path": "keras/keras/saving/saving_lib_test.py",
"repo_id": "keras",
"token_count": 13769
} | 173 |
import math
import jax
import jax.experimental.sparse as jax_sparse
import numpy as np
import scipy
import tensorflow as tf
import torch
from absl.testing import parameterized
from jax import numpy as jnp
from keras import testing
from keras.testing.test_utils import named_product
from keras.trainers.data_adapters import generator_data_adapter
def example_generator(x, y, sample_weight=None, batch_size=32):
def make():
for i in range(math.ceil(len(x) / batch_size)):
low = i * batch_size
high = min(low + batch_size, len(x))
batch_x = x[low:high]
batch_y = y[low:high]
if sample_weight is not None:
yield batch_x, batch_y, sample_weight[low:high]
else:
yield batch_x, batch_y
return make
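# Usage sketch (hypothetical values, not part of the original tests): calling
# example_generator(x, y, batch_size=16) returns a zero-argument factory;
# invoking the factory yields (batch_x, batch_y) tuples, or
# (batch_x, batch_y, batch_sample_weight) when sample_weight is passed, which
# is the batch structure GeneratorDataAdapter is expected to wrap.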
class GeneratorDataAdapterTest(testing.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
named_product(
[
{"testcase_name": "use_weight", "use_sample_weight": True},
{"testcase_name": "no_weight", "use_sample_weight": False},
],
generator_type=["np", "tf", "jax", "torch"],
iterator_type=["np", "tf", "jax", "torch"],
)
)
def test_basic_flow(self, use_sample_weight, generator_type, iterator_type):
x = np.random.random((34, 4)).astype("float32")
y = np.array([[i, i] for i in range(34)], dtype="float32")
sw = np.random.random((34,)).astype("float32")
if generator_type == "tf":
x, y, sw = tf.constant(x), tf.constant(y), tf.constant(sw)
elif generator_type == "jax":
x, y, sw = jnp.array(x), jnp.array(y), jnp.array(sw)
elif generator_type == "torch":
x, y, sw = (
torch.as_tensor(x),
torch.as_tensor(y),
torch.as_tensor(sw),
)
if not use_sample_weight:
sw = None
make_generator = example_generator(
x,
y,
sample_weight=sw,
batch_size=16,
)
adapter = generator_data_adapter.GeneratorDataAdapter(make_generator())
if iterator_type == "np":
it = adapter.get_numpy_iterator()
expected_class = np.ndarray
elif iterator_type == "tf":
it = adapter.get_tf_dataset()
expected_class = tf.Tensor
elif iterator_type == "jax":
it = adapter.get_jax_iterator()
expected_class = jax.Array
elif iterator_type == "torch":
it = adapter.get_torch_dataloader()
expected_class = torch.Tensor
sample_order = []
for i, batch in enumerate(it):
if use_sample_weight:
self.assertEqual(len(batch), 3)
bx, by, bsw = batch
else:
self.assertEqual(len(batch), 2)
bx, by = batch
self.assertIsInstance(bx, expected_class)
self.assertIsInstance(by, expected_class)
self.assertEqual(bx.dtype, by.dtype)
self.assertContainsExactSubsequence(str(bx.dtype), "float32")
if i < 2:
self.assertEqual(bx.shape, (16, 4))
self.assertEqual(by.shape, (16, 2))
else:
self.assertEqual(bx.shape, (2, 4))
self.assertEqual(by.shape, (2, 2))
if use_sample_weight:
self.assertIsInstance(bsw, expected_class)
for i in range(by.shape[0]):
sample_order.append(by[i, 0])
self.assertAllClose(sample_order, list(range(34)))
@parameterized.named_parameters(
named_product(
generator_type=["tf", "jax", "scipy"], iterator_type=["tf", "jax"]
)
)
def test_scipy_sparse_tensors(self, generator_type, iterator_type):
if generator_type == "tf":
x = tf.SparseTensor([[0, 0], [1, 2]], [1.0, 2.0], (2, 4))
y = tf.SparseTensor([[0, 0], [1, 1]], [3.0, 4.0], (2, 2))
elif generator_type == "jax":
x = jax_sparse.BCOO(([1.0, 2.0], [[0, 0], [1, 2]]), shape=(2, 4))
y = jax_sparse.BCOO(([3.0, 4.0], [[0, 0], [1, 1]]), shape=(2, 2))
elif generator_type == "scipy":
x = scipy.sparse.coo_matrix(([1.0, 2.0], ([0, 1], [0, 2])), (2, 4))
y = scipy.sparse.coo_matrix(([3.0, 4.0], ([0, 1], [0, 1])), (2, 2))
def generate():
for _ in range(4):
yield x, y
adapter = generator_data_adapter.GeneratorDataAdapter(generate())
if iterator_type == "tf":
it = adapter.get_tf_dataset()
expected_class = tf.SparseTensor
elif iterator_type == "jax":
it = adapter.get_jax_iterator()
expected_class = jax_sparse.BCOO
for batch in it:
self.assertEqual(len(batch), 2)
bx, by = batch
self.assertIsInstance(bx, expected_class)
self.assertIsInstance(by, expected_class)
self.assertEqual(bx.shape, (2, 4))
self.assertEqual(by.shape, (2, 2))
| keras/keras/trainers/data_adapters/generator_data_adapter_test.py/0 | {
"file_path": "keras/keras/trainers/data_adapters/generator_data_adapter_test.py",
"repo_id": "keras",
"token_count": 2683
} | 174 |
import os
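# count_loc walks `directory` and counts lines of code: blank lines, comment
# lines, and multi-line triple-quoted string blocks that open at the start of
# a line are skipped, and directories or files whose names end with an
# `exclude` suffix (e.g. `_test`) or whose extension is not in `extensions`
# are ignored.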
def count_loc(directory, exclude=("_test",), extensions=(".py",), verbose=0):
loc = 0
for root, _, fnames in os.walk(directory):
skip = False
for ex in exclude:
if root.endswith(ex):
skip = True
if skip:
continue
for fname in fnames:
skip = False
for ext in extensions:
if not fname.endswith(ext):
skip = True
break
for ex in exclude:
if fname.endswith(ex + ext):
skip = True
break
if skip:
continue
fname = os.path.join(root, fname)
if verbose:
print(f"Count LoCs in {fname}")
with open(fname) as f:
lines = f.read().split("\n")
string_open = False
for line in lines:
line = line.strip()
if not line or line.startswith("#"):
continue
if not string_open:
if not line.startswith('"""'):
loc += 1
else:
if not line.endswith('"""'):
string_open = True
else:
if line.startswith('"""'):
string_open = False
return loc
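# Example usage (hypothetical paths, not part of the original module):
#     from keras.utils.code_stats import count_loc
#     total = count_loc("keras", exclude=("_test",), extensions=(".py",))
#     print(f"{total} non-test lines of Python code")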
| keras/keras/utils/code_stats.py/0 | {
"file_path": "keras/keras/utils/code_stats.py",
"repo_id": "keras",
"token_count": 876
} | 175 |