st207400 | I trained an H5 model on my computer and converted it to a TFLite model with tf.lite.TFLiteConverter.from_keras_model, but it seems it can't be used on Android devices. The inference result obtained through interpreter.run is NaN. Where did I make a configuration error? |
st207401 | Hi @jun_yin
I am a little bit confused
Can you use the tflite model with the python api?
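For reference, a minimal sketch of checking the converted model with the Python API (model.tflite is a placeholder path); if the output is already NaN here, the problem is in the conversion rather than in the Android code:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the expected shape and dtype.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"])) |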
st207402 | In addition, the model in my Android demo seems to be uint8, which needs to be fed via a ByteBuffer, while the model trained in Python is float32. Also, when I use the tflite model from the Android demo in Python, the results are extremely unsatisfactory. Even after adjusting the class type of the incoming value to uint8, the speed is extremely slow, and the result is not ideal |
st207403 | Can you upload the model somewhere so I can take a look?
It seems that the procedure inside Android is not ideal, but if you say that it works with the Python API then we can find a solution. |
st207404 | OK, please wait a moment. By the way, can you recommend some good ways to learn TensorFlow? I feel like I've been studying for a month, but there's still no big breakthrough |
st207405 | This is the address of the compressed package of my demo: Google Drive: Sign-in |
st207406 | Hello, does the link posted above not work? Is this one OK? I seldom use Google Drive
drive.google.com: demo.zip |
st207407 | This is the visualization that netron.app gives for your project:
(screenshot: model graph, 601×536)
Is this for a segmentation task? |
st207408 | I'm not sure. I've just started learning TensorFlow, so I wouldn't expect to have made that much progress yet |
st207409 | Check the examples TensorFlow Lite provides to see what suits your case best:
TensorFlow Lite Examples | Machine Learning Mobile Apps: sample ML apps for Android, iOS and Raspberry Pi, with end-to-end examples and complete instructions to train, test and deploy models on mobile devices.
If you need any help, ping me.
Regards |
st207410 | My dataset is a combination of time-series and non-time-series data. What model can I use to train on both types of data? |
st207411 | @Md_Samiul_Basir, model selection depends purely on the business problem. Is it possible to share sample data to understand more? |
st207412 | @chunduriv Thanks for responding. Unfortunately, I don't have the dataset yet. It's about an agricultural problem. I have some growth and weather data that depend on time, but some soil data is not related to time. Both types are equally important. I know that LSTM works for multivariate time-series forecasting, but what should I do with the non-time-series data?
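One common pattern (a sketch under assumed placeholder shapes and a single regression target, not a definitive design) is a two-branch Keras model: an LSTM over the time-series features, a dense branch over the static soil features, and a concatenation of the two:
import tensorflow as tf
from tensorflow.keras import layers

ts_input = layers.Input(shape=(30, 8))    # e.g. 30 time steps, 8 weather/growth features
static_input = layers.Input(shape=(5,))   # e.g. 5 time-independent soil features

ts_branch = layers.LSTM(64)(ts_input)
static_branch = layers.Dense(32, activation="relu")(static_input)

merged = layers.Concatenate()([ts_branch, static_branch])
output = layers.Dense(1)(merged)          # regression head

model = tf.keras.Model([ts_input, static_input], output)
model.compile(optimizer="adam", loss="mse")
# model.fit([X_timeseries, X_static], y, ...) |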
st207413 | How and where can I add a dropout layer in the following code:
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Conv2DTranspose, Concatenate, Input
from tensorflow.keras.models import Model
from tensorflow.keras.applications import ResNet50
def conv_block(input, num_filters):
    x = Conv2D(num_filters, 3, padding="same")(input)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(num_filters, 3, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    return x

def decoder_block(input, skip_features, num_filters):
    x = Conv2DTranspose(num_filters, (2, 2), strides=2, padding="same")(input)
    x = Concatenate()([x, skip_features])
    x = conv_block(x, num_filters)
    return x

def resnet50_unet(input_shape):
    """ Input """
    inputs = Input(input_shape)

    """ Pre-trained ResNet50 Model """
    resnet50 = ResNet50(include_top=False, weights=None, input_tensor=inputs)

    """ Encoder """
    s1 = resnet50.layers[0].output                      ## (512 x 512)
    s2 = resnet50.get_layer("conv1_relu").output        ## (256 x 256)
    s3 = resnet50.get_layer("conv2_block3_out").output  ## (128 x 128)
    s4 = resnet50.get_layer("conv3_block4_out").output  ## (64 x 64)

    """ Bridge """
    b1 = resnet50.get_layer("conv4_block6_out").output  ## (32 x 32)

    """ Decoder """
    d1 = decoder_block(b1, s4, 256)   ## (64 x 64)
    d2 = decoder_block(d1, s3, 128)   ## (128 x 128)
    d3 = decoder_block(d2, s2, 64)    ## (256 x 256)
    d4 = decoder_block(d3, s1, 32)    ## (512 x 512)

    """ Output """
    outputs = Conv2D(1, 1, padding="same", activation="sigmoid")(d4)
    model = Model(inputs, outputs, name="ResNet50_U-Net")
    return model

if __name__ == "__main__":
    input_shape = (256, 256, 3)
    model = resnet50_unet(input_shape)
    #model.summary() |
st207414 | @Aleena_Suhail,
Can you take a look at this thread, which may give you some insights on the same?
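As a sketch of one common option (an assumption about what you want to regularize, not the only valid placement), Dropout can go after each activation inside conv_block, with the rate passed in as a parameter:
from tensorflow.keras.layers import Dropout

def conv_block(input, num_filters, dropout_rate=0.2):
    x = Conv2D(num_filters, 3, padding="same")(input)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Dropout(dropout_rate)(x)  # drop after the first activation
    x = Conv2D(num_filters, 3, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Dropout(dropout_rate)(x)  # and after the second
    return x

Another frequent choice in U-Nets is a single Dropout applied only to the bridge tensor (b1). |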
st207415 | I am trying to build a CNN-LSTM classifier for 1D sequential data. Input is of length 20 and contains 4 features.
I have trained the model and saved it. However, I am unable to get good performance on both training and test data.
Below is my code for the tensorflow model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(filters=128, kernel_size=8, padding = 'same', activation='relu', input_shape = (20,4)))
model.add(tf.keras.layers.Conv1D(filters=128, kernel_size=5, padding = 'same', activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01)))
model.add(tf.keras.layers.Conv1D(filters=128, kernel_size=3, padding = 'same', activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01)))
model.add(tf.keras.layers.MaxPooling1D(pool_size=2))
model.add(tf.keras.layers.LSTM(units = 128))
model.add(tf.keras.layers.Dense(units = 1, activation = 'sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.build()
model.summary()
history = model.fit(X_tf, y_tf, epochs=60, batch_size=256, validation_data = (X_tf_,y_tf_))
Here are the logs that I am getting while training.
Epoch 5/60 19739/19739 [==============================] - 1212s 61ms/step - loss: 0.5858 - accuracy: 0.7055 - val_loss: 0.5854 - val_accuracy: 0.7062
I need help with how I can further improve the performance. What are the various techniques that I can apply to sequential data?
My training dataset has 4.8 million rows and test set has 1.2 million rows. |
st207416 | You can make the model bigger: add more LSTM layers, increase the number of units in the layers, make them bidirectional, add dense layers with activations after the last LSTM or experiment with other architectures.
Other way is to change the number of epochs, batch size and learning rate and see how it affects the results.
If nothing helps, check for class imbalance and how both classes are distributed between the train and validation sets. Apply some basic techniques for imbalanced data, like using sample weights and generating synthetic data for the underrepresented class.
Add more features, if possible, or generate new features from existing ones.
st207417 | I have tried balancing the dataset as well as increasing the size and depth of the model. However, I have been unable to succeed at this problem.
I have posted the dataset on Kaggle here: binary seq-classification of input_shape(20,9) | Kaggle
Can someone please help with how I can solve this problem with good accuracy?
The problems I am facing:
Without resampling: the majority class prevails and very few minority-class predictions are made.
With resampling: I am getting many false positives on train and test data. |
st207418 | Meteorological Neural Network 2 (MetNet-2) - a new probabilistic weather model featuring a forecasting range of up to 12 hours of lead time at a frequency of 2 minutes.
Blog post: Google AI Blog: MetNet-2: Deep Learning for 12-Hour Precipitation Forecasting
Within weather forecasting, deep learning techniques have shown particular promise for nowcasting — i.e., predicting weather up to 2-6 hours ahead. Previous work has focused on using direct neural network models for weather data, extending neural forecasts from 0 to 8 hours with the MetNet architecture, generating continuations of radar data for up to 90 minutes ahead, and interpreting the weather information learned by these neural networks. Still, there is an opportunity for deep learning to extend improvements to longer-range forecasts.
To that end, in “Skillful Twelve Hour Precipitation Forecasts Using Large Context Neural Networks”, we push the forecasting boundaries of our neural precipitation model to 12 hour predictions while keeping a spatial resolution of 1 km and a time resolution of 2 minutes. By quadrupling the input context, adopting a richer weather input state, and extending the architecture to capture longer-range spatial dependencies, MetNet-2 substantially improves on the performance of its predecessor, MetNet. Compared to physics-based models, MetNet-2 outperforms the state-of-the-art HREF ensemble model for weather forecasts up to 12 hours ahead.
Interpreting What MetNet-2 Learns About Weather
Because MetNet-2 does not use hand-crafted physical equations, its performance inspires a natural question: What kind of physical relations about the weather does it learn from the data during training? Using advanced interpretability tools, we further trace the impact of various input features on MetNet-2’s performance at different forecast timelines. Perhaps the most surprising finding is that MetNet-2 appears to emulate the physics described by Quasi-Geostrophic Theory, which is used as an effective approximation of large-scale weather phenomena. MetNet-2 was able to pick up on changes in the atmospheric forces, at the scale of a typical high- or low-pressure system (i.e., the synoptic scale), that bring about favorable conditions for precipitation, a key tenet of the theory.
Conclusion
MetNet-2 represents a step toward enabling a new modeling paradigm for weather forecasting that does not rely on hand-coding the physics of weather phenomena, but rather embraces end-to-end learning from observations to weather targets and parallel forecasting on low-precision hardware. Yet many challenges remain on the path to fully achieving this goal, including incorporating more raw data about the atmosphere directly (rather than using the pre-processed starting state from physical models), broadening the set of weather phenomena, increasing the lead time horizon to days and weeks, and widening the geographic coverage beyond the United States.
(Figure: MetNet-2 architecture, from the paper)
Also, check out:
[Research] Nowcasting the Next Hour of Rain (by DeepMind)
GitHub: https://github.com/deepmind/deepmind-research/tree/master/nowcasting
Colab: Google Colab
Paper: Skilful precipitation nowcasting using deep generative models of radar | Nature
Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints5,6. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor … |
st207419 | Do Geometric Brownian Motion or a SABR model sound familiar? Here's an interesting library:
GitHub: google/tf-quant-finance - High-performance TensorFlow library for quantitative finance.
From the repository:
This library provides high-performance components leveraging the hardware acceleration support and automatic differentiation of TensorFlow. The library will provide TensorFlow support for foundational mathematical methods, mid-level methods, and specific pricing models. The coverage is being expanded over the next few months.
The library is structured along three tiers:
Foundational methods. Core mathematical methods - optimisation, interpolation, root finders, linear algebra, random and quasi-random number generation, etc.
Mid-level methods. ODE & PDE solvers, Ito process framework, Diffusion Path Generators, Copula samplers etc.
Pricing methods and other quant finance specific utilities. Specific Pricing models (e.g., Local Vol (LV), Stochastic Vol (SV), Stochastic Local Vol (SLV), Hull-White (HW)) and their calibration. Rate curve building, payoff descriptions, and schedule generation.
…
If you are not familiar with TensorFlow, an excellent place to get started is with the following self-study introduction to TensorFlow notebooks:
Introduction to TensorFlow Part 1 - Basics.
Introduction to TensorFlow Part 2 - Debugging and Control Flow.
Introduction to TensorFlow Part 3 - Advanced Tensor Manipulation.
Development roadmap
We are working on expanding the coverage of the library. Areas under active development are:
Ito Processes: Framework for defining Ito processes. Includes methods for sampling paths from a process and for solving the associated backward Kolmogorov equation.
Implementation of the following specific processes/models:
Brownian Motion
Geometric Brownian Motion
Ornstein-Uhlenbeck
One-Factor Hull-White model
Heston model
Local volatility model.
Quadratic Local Vol model.
SABR model
Copulas: Support for defining and sampling from copulas.
Model Calibration:
Dupire local vol calibration.
SABR model calibration.
Rate curve fitting: Hagan-West algorithm for yield curve bootstrapping and the Monotone Convex interpolation scheme.
Support for dates, day-count conventions, holidays, etc.
Examples
See tf_quant_finance/examples/ for end-to-end examples. It includes tutorial notebooks such as:
American Option pricing under the Black-Scholes model
Monte Carlo via Euler Scheme
Black Scholes: Price and Implied Vol
Forward and Backward mode gradients in TFF
Root search using Brent’s method
Optimization
Swap Curve Fitting
Vectorization and XLA compilation
The above links will open Jupyter Notebooks in Colab.
…
Community
GitHub repository: Report bugs or make feature requests.
TensorFlow Blog: Stay up to date on content from the TensorFlow team and best articles from the community.
[email protected]: Open mailing list for discussion and questions of this library.
TensorFlow Probability: This library will leverage methods from TensorFlow Probability (TFP).
More info: the google/tf-quant-finance repository linked above. |
st207420 | #DeepReinforcementLearning #PhysicsSimulator #ReinforcementLearning
In case you missed the announcement from DeepMind on 18 October 2021: Opening up a physics simulator for robotics | DeepMind
The rich-yet-efficient contact model of the MuJoCo physics simulator has made it a leading choice by robotics researchers and today, we’re proud to announce that, as part of DeepMind’s mission of advancing science, we’ve acquired MuJoCo and are making it freely available for everyone, to support research everywhere. Already widely used within the robotics community, including as the physics simulator of choice for DeepMind’s robotics team, MuJoCo features a rich contact model, powerful scene description language, and a well-designed API. Together with the community, we will continue to improve MuJoCo as open-source software under a permissive licence. As we work to prepare the codebase, we are making MuJoCo freely available as a precompiled library.
MuJoCo in DeepMind. Our robotics team has been using MuJoCo as a simulation platform for various projects, mostly via our dm_control Python stack. In the carousel below, we highlight a few examples to showcase what can be simulated in MuJoCo. Of course, these clips represent only a tiny fraction of the vast possibilities for how researchers might use the simulator. For higher quality versions of these clips, please click here.
Website: https://mujoco.org/
Docs: Overview — MuJoCo documentation
GitHub: GitHub - deepmind/mujoco: Multi-Joint dynamics with Contact. A general purpose physics simulator.
Related - dm_control (for physics-based simulation): GitHub - deepmind/dm_control: DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
Related deep reinforcement learning research: Emergence of Locomotion Behaviours in Rich Environments (DeepMind):
… Benchmark tasks: We consider three continuous control tasks for benchmarking the algorithms. All environments rely on the Mujoco physics engine…
Demo of Emergence of Locomotion Behaviours in Rich Environments (video) |
st207421 | Blog post: Google AI Blog: Model Ensembles Are Faster Than You Think (10 November, 2021)
Conclusion
As we have seen, ensemble/cascade-based models obtain superior efficiency and accuracy over state-of-the-art models from several standard architecture families. In our paper we show more results for other models and tasks. For practitioners, this outlines a simple procedure to boost accuracy while retaining efficiency using off-the-shelf models. We encourage you to try it out!
(Ensembles and cascades - 2-model combinations for both ensembles and cascades - from the blog post)
(Figure: inference latency, model cascades vs single models - average latency of cascades on TPUv3 for online processing - from the blog post)
Paper (arXiv): Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models
We show that even this method already outperforms state-of-the-art architectures found by costly neural architecture search (NAS) methods. Note that this method works with off-the-shelf models and does not use specialized techniques…
…Our analysis shows that committee-based models provide a simple complementary paradigm to achieve superior efficiency without tuning the architecture. One can often improve accuracy while reducing inference and training cost by building committees out of existing networks.
… We show that committee-based models, i.e., model ensembles or cascades, provide a simple complementary paradigm to obtain efficient models without tuning the architecture. Notably, cascades can match or exceed the accuracy of state-of-the-art models on a variety of tasks while being drastically more efficient. Moreover, the speedup of model cascades is evident in both FLOPs and on-device latency and throughput. The fact that these simple committee-based models outperform sophisticated NAS methods, as well as manually designed architectures, should motivate future research to include them as strong baselines whenever presenting a new architecture. For practitioners, committee-based models outline a simple procedure to improve accuracy while maintaining efficiency that only needs off-the-shelf models. |
st207422 | (image: presentation slide, 1434×759)
Following the last presentation at the ML Community Day about Chip Floorplanning with Deep Reinforcement Learning, here are some more resources:
Blog posts and research papers
Google AI Blog: Chip Design with Deep Reinforcement Learning (April 2020)
Nature: A graph placement methodology for fast chip design (June 2021) - article link from the above Google AI blog post.
Nature: Google AI beats humans at designing computer chips (June 2021)
Paper (arXiv): Chip Placement with Deep Reinforcement Learning (April 2020)
Related paper (arXiv): Placement Optimization with Deep Reinforcement Learning (March 2020)
Related paper (arXiv): Transferable Graph Optimizers for ML Compilers (October 2020)
Videos
Graph Representation Learning for Chip Design (MLSys 2021) (SlidesLive)
Solving Optimization Problems in Systems and Chip Design: Google Brain Research (March 2021)
Reinforcement Learning for Hardware Design feat. Anna Goldie | Stanford MLSys Seminar Episode 14 (February 2021)
ML for Computer Systems (NeurIPS 2019) (SlidesLive) |
st207423 | def expend_as(tensor, rep):
    return layers.Lambda(lambda x, repnum: K.repeat_elements(x, repnum, axis=3),
                         arguments={'repnum': rep})(tensor)

def double_conv_layer(x, filter_size, size, dropout, batch_norm=False):
    axis = 3
    conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(x)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=axis)(conv)
    conv = layers.Activation('relu')(conv)
    conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(conv)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=axis)(conv)
    conv = layers.Activation('relu')(conv)
    if dropout > 0:
        conv = layers.Dropout(dropout)(conv)
    shortcut = layers.Conv2D(size, kernel_size=(1, 1), padding='same')(x)
    if batch_norm is True:
        shortcut = layers.BatchNormalization(axis=axis)(shortcut)
    res_path = layers.add([shortcut, conv])
    return res_path

def gating_signal(input, out_size, batch_norm=False):
    """
    resize the down layer feature map into the same dimension as the up layer feature map
    using 1x1 conv
    :param input: down-dim feature map
    :param out_size: output channel number
    :return: the gating feature map with the same dimension of the up layer feature map
    """
    x = layers.Conv2D(out_size, (1, 1), padding='same')(input)
    if batch_norm:
        x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x
def attention_block(x, gating, inter_shape):
    shape_x = K.int_shape(x)
    shape_g = K.int_shape(gating)
    theta_x = layers.Conv2D(inter_shape, (2, 2), strides=(2, 2), padding='same')(x)  # 16
    shape_theta_x = K.int_shape(theta_x)
    phi_g = layers.Conv2D(inter_shape, (1, 1), padding='same')(gating)
    upsample_g = layers.Conv2DTranspose(inter_shape, (3, 3),
                                        strides=(shape_theta_x[1] // shape_g[1], shape_theta_x[2] // shape_g[2]),
                                        padding='same')(phi_g)  # 16
    concat_xg = layers.add([upsample_g, theta_x])
    act_xg = layers.Activation('relu')(concat_xg)
    psi = layers.Conv2D(1, (1, 1), padding='same')(act_xg)
    sigmoid_xg = layers.Activation('sigmoid')(psi)
    shape_sigmoid = K.int_shape(sigmoid_xg)
    upsample_psi = layers.UpSampling2D(size=(shape_x[1] // shape_sigmoid[1], shape_x[2] // shape_sigmoid[2]))(sigmoid_xg)  # 32
    upsample_psi = expend_as(upsample_psi, shape_x[3])
    y = layers.multiply([upsample_psi, x])
    result = layers.Conv2D(shape_x[3], (1, 1), padding='same')(y)
    result_bn = layers.BatchNormalization()(result)
    return result_bn
def Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.0, batch_norm=True):
    FILTER_NUM = 64  # number of basic filters for the first layer
    FILTER_SIZE = 3  # size of the convolutional filter
    UP_SAMP_SIZE = 2
    # input data
    # dimension of the image depth
    inputs = layers.Input((512, 512, 3), dtype=tf.float32)
    axis = 3
    # Downsampling layers
    # DownRes 1, double residual convolution + pooling
    conv_512 = double_conv_layer(inputs, 3, 64, dropout_rate, batch_norm)
    pool_256 = layers.MaxPooling2D(pool_size=(2, 2))(conv_512)
    # DownRes 2
    conv_256 = double_conv_layer(pool_256, 3, 2*64, dropout_rate, batch_norm)
    pool_128 = layers.MaxPooling2D(pool_size=(2, 2))(conv_256)
    # DownRes 3
    conv_128 = double_conv_layer(pool_128, 3, 4*64, dropout_rate, batch_norm)
    pool_64 = layers.MaxPooling2D(pool_size=(2, 2))(conv_128)
    # DownRes 4
    conv_64 = double_conv_layer(pool_64, 3, 8*64, dropout_rate, batch_norm)
    pool_32 = layers.MaxPooling2D(pool_size=(2, 2))(conv_64)
    # DownRes 5, convolution only
    conv_32 = double_conv_layer(pool_32, 3, 16*64, dropout_rate, batch_norm)
    # Upsampling layers
    # UpRes 6, attention gated concatenation + upsampling + double residual convolution
    gating_64 = gating_signal(conv_32, 8*64, batch_norm)
    att_64 = attention_block(conv_64, gating_64, 8*64)
    up_64 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(conv_32)
    up_64 = layers.concatenate([up_64, att_64], axis=axis)
    up_conv_64 = double_conv_layer(up_64, 3, 8*64, dropout_rate, batch_norm)
    # UpRes 7
    gating_128 = gating_signal(up_conv_64, 4*64, batch_norm)
    att_128 = attention_block(conv_128, gating_128, 4*64)
    up_128 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(up_conv_64)
    up_128 = layers.concatenate([up_128, att_128], axis=axis)
    up_conv_128 = double_conv_layer(up_128, 3, 4*64, dropout_rate, batch_norm)
    # UpRes 8
    gating_256 = gating_signal(up_conv_128, 2*64, batch_norm)
    att_256 = attention_block(conv_256, gating_256, 2*64)
    up_256 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(up_conv_128)
    up_256 = layers.concatenate([up_256, att_256], axis=axis)
    up_conv_256 = double_conv_layer(up_256, 3, 2*64, dropout_rate, batch_norm)
    # UpRes 9
    gating_512 = gating_signal(up_conv_128, 64, batch_norm)
    att_512 = attention_block(conv_512, gating_512, 64)
    up_512 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(up_conv_256)
    up_512 = layers.concatenate([up_512, att_512], axis=axis)
    up_conv_512 = double_conv_layer(up_512, 3, 64, dropout_rate, batch_norm)
    # 1*1 convolutional layers
    # valid padding
    # batch normalization
    # sigmoid nonlinear activation
    conv_final = layers.Conv2D(NUM_CLASSES, kernel_size=(1, 1))(up_conv_512)
    conv_final = layers.BatchNormalization(axis=axis)(conv_final)
    conv_final = layers.Activation('sigmoid')(conv_final)
    # Model integration
    model = models.Model(inputs, conv_final, name="AttentionResUNet")
    return model

input_shape = (512, 512, 3)
model = Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.0, batch_norm=True)
model.summary()
The code for training:
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # set to 1 for warnings and errors
import cv2
import keras
import keras.utils
from glob import glob
from sklearn.utils import shuffle
from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger, ReduceLROnPlateau, EarlyStopping, TensorBoard
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Recall, Precision

H = 512
W = 512

from focal_loss import BinaryFocalLoss  # for tough-to-classify segment class

def create_dir(path):
    """ Create a directory. """
    if not os.path.exists(path):
        os.makedirs(path)

def shuffling(x, y):
    x, y = shuffle(x, y, random_state=42)
    return x, y

def load_data(path):
    x = sorted(glob(os.path.join(path, "image", "*.png")))
    y = sorted(glob(os.path.join(path, "mask", "*.png")))
    return x, y

def read_image(path):
    path = path.decode()
    x = cv2.imread(path, cv2.IMREAD_COLOR)
    x = x/255.0
    x = x.astype(np.float32)
    return x

def read_mask(path):
    path = path.decode()
    x = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    x = x/255.0
    x = x > 0.5
    x = x.astype(np.float32)
    x = np.expand_dims(x, axis=-1)
    return x

def tf_parse(x, y):
    def _parse(x, y):
        x = read_image(x)
        y = read_mask(y)
        return x, y
    x, y = tf.numpy_function(_parse, [x, y], [tf.float32, tf.float32])
    x.set_shape([H, W, 3])
    y.set_shape([H, W, 1])
    return x, y

def tf_dataset(x, y, batch=8):
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    dataset = dataset.map(tf_parse)
    dataset = dataset.batch(batch)
    dataset = dataset.prefetch(10)
    return dataset

if __name__ == "__main__":
    """ Seeding """
    np.random.seed(42)
    tf.random.set_seed(42)

    """ Directory for storing files """
    create_dir("files")

    """ Hyperparameters """
    batch_size = 2
    lr = 0.002
    num_epochs = 60
    model_path = os.path.join("files", "model.h5")
    csv_path = os.path.join("files", "data.csv")

    """ Dataset """
    train_path = os.path.join("/content/drive/MyDrive/Data_brain/train/")
    valid_path = os.path.join("/content/drive/MyDrive/Data_brain/test/")
    train_x, train_y = load_data(train_path)
    train_x, train_y = shuffling(train_x, train_y)
    valid_x, valid_y = load_data(valid_path)
    print(f"Train: {len(train_x)} - {len(train_y)}")
    print(f"Valid: {len(valid_x)} - {len(valid_y)}")
    train_dataset = tf_dataset(train_x, train_y, batch=batch_size)
    valid_dataset = tf_dataset(valid_x, valid_y, batch=batch_size)

    """ Model """
    model = Attention_ResUNet(input_shape)
    metrics = [jacard_coef, Recall(), Precision()]
    model.compile(loss=BinaryFocalLoss(gamma=2), optimizer=Adam(lr), metrics=metrics)
    callbacks = [
        ModelCheckpoint(model_path, verbose=1, save_best_only=True),
        #ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, min_lr=1e-7, verbose=1),
        CSVLogger(csv_path),
        TensorBoard(),
        #EarlyStopping(monitor='val_loss', patience=50, restore_best_weights=False),
    ]
    model.fit(
        train_dataset,
        epochs=num_epochs,
        validation_data=valid_dataset,
        callbacks=callbacks,
        shuffle=False)
I am getting the following output while training the model (and the results are absurd):
Train: 1280 - 1280
Valid: 32 - 32
/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py:497: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
category=CustomMaskWarning)
Epoch 1/60
640/640 [==============================] - 279s 414ms/step - loss: 0.0857 - jacard_coef: 0.0047 - recall: 0.0555 - precision: 0.0049 - val_loss: 0.0365 - val_jacard_coef: 0.0044 - val_recall: 0.0000e+00 - val_precision: 0.0000e+00
Epoch 00001: val_loss improved from inf to 0.03647, saving model to files/model.h5
Epoch 2/60
640/640 [==============================] - 263s 411ms/step - loss: 0.0235 - jacard_coef: 0.0045 - recall: 0.0000e+00 - precision: 0.0000e+00 - val_loss: 0.0159 - val_jacard_coef: 0.0043 - val_recall: 0.0000e+00 - val_precision: 0.0000e+00
Epoch 00002: val_loss improved from 0.03647 to 0.01592, saving model to files/model.h5
Epoch 3/60
39/640 [>…] - ETA: 4:05 - loss: 0.0159 - jacard_coef: 0.0045 - recall: 0.0000e+00 - precision: 0.0000e+00 |
st207424 | Hi Aleena,
It’s a little bit hard to understand your question. Can you rephrase a little bit? maybe highlight the key parts? |
st207425 | I am trying to train a model; however, I am getting negative validation loss.
The code is as follows:
MODEL
def conv_block(x, filter_size, size, dropout, batch_norm=False):
    conv = layers.Conv2D(size, (filter_size, filter_size), padding="same")(x)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    conv = layers.Activation("relu")(conv)
    conv = layers.Conv2D(size, (filter_size, filter_size), padding="same")(conv)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    conv = layers.Activation("relu")(conv)
    if dropout > 0:
        conv = layers.Dropout(dropout)(conv)
    return conv

def repeat_elem(tensor, rep):
    # Lambda function to repeat the elements of a tensor along an axis
    # by a factor of rep.
    # If tensor has shape (None, 256, 256, 3), lambda will return a tensor of shape
    # (None, 256, 256, 6), if specified axis=3 and rep=2.
    return layers.Lambda(lambda x, repnum: K.repeat_elements(x, repnum, axis=3),
                         arguments={'repnum': rep})(tensor)
def res_conv_block(x, filter_size, size, dropout, batch_norm=False):
    '''
    Residual convolutional layer.
    Two variants...
    Either put activation function before the addition with shortcut
    or after the addition (which would be as proposed in the original resNet).
    1. conv - BN - Activation - conv - BN - Activation
       - shortcut - BN - shortcut+BN
    2. conv - BN - Activation - conv - BN
       - shortcut - BN - shortcut+BN - Activation
    Check fig 4 in https://arxiv.org/ftp/arxiv/papers/1802/1802.06955.pdf
    '''
    conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(x)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    conv = layers.Activation('relu')(conv)
    conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(conv)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    #conv = layers.Activation('relu')(conv)  # Activation before addition with shortcut
    if dropout > 0:
        conv = layers.Dropout(dropout)(conv)
    shortcut = layers.Conv2D(size, kernel_size=(1, 1), padding='same')(x)
    if batch_norm is True:
        shortcut = layers.BatchNormalization(axis=3)(shortcut)
    res_path = layers.add([shortcut, conv])
    res_path = layers.Activation('relu')(res_path)  # Activation after addition with shortcut (original residual block)
    return res_path

def gating_signal(input, out_size, batch_norm=False):
    """
    resize the down layer feature map into the same dimension as the up layer feature map
    using 1x1 conv
    :return: the gating feature map with the same dimension of the up layer feature map
    """
    x = layers.Conv2D(out_size, (1, 1), padding='same')(input)
    if batch_norm:
        x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x
def attention_block(x, gating, inter_shape):
    shape_x = K.int_shape(x)
    shape_g = K.int_shape(gating)
    # Getting the x signal to the same shape as the gating signal
    theta_x = layers.Conv2D(inter_shape, (2, 2), strides=(2, 2), padding='same')(x)  # 16
    shape_theta_x = K.int_shape(theta_x)
    # Getting the gating signal to the same number of filters as the inter_shape
    phi_g = layers.Conv2D(inter_shape, (1, 1), padding='same')(gating)
    upsample_g = layers.Conv2DTranspose(inter_shape, (3, 3),
                                        strides=(shape_theta_x[1] // shape_g[1], shape_theta_x[2] // shape_g[2]),
                                        padding='same')(phi_g)  # 16
    concat_xg = layers.add([upsample_g, theta_x])
    act_xg = layers.Activation('relu')(concat_xg)
    psi = layers.Conv2D(1, (1, 1), padding='same')(act_xg)
    sigmoid_xg = layers.Activation('sigmoid')(psi)
    shape_sigmoid = K.int_shape(sigmoid_xg)
    upsample_psi = layers.UpSampling2D(size=(shape_x[1] // shape_sigmoid[1], shape_x[2] // shape_sigmoid[2]))(sigmoid_xg)  # 32
    upsample_psi = repeat_elem(upsample_psi, shape_x[3])
    y = layers.multiply([upsample_psi, x])
    result = layers.Conv2D(shape_x[3], (1, 1), padding='same')(y)
    result_bn = layers.BatchNormalization()(result)
    return result_bn
def Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.0, batch_norm=True):
    '''
    Residual UNet, with attention
    '''
    # network structure
    FILTER_NUM = 64  # number of basic filters for the first layer
    FILTER_SIZE = 3  # size of the convolutional filter
    UP_SAMP_SIZE = 2  # size of upsampling filters
    # input data
    # dimension of the image depth
    inputs = layers.Input(input_shape, dtype=tf.float32)
    axis = 3
    # Downsampling layers
    # DownRes 1, double residual convolution + pooling
    conv_512 = res_conv_block(inputs, FILTER_SIZE, FILTER_NUM, dropout_rate, batch_norm)
    pool_256 = layers.MaxPooling2D(pool_size=(2, 2))(conv_512)
    # DownRes 2
    conv_256 = res_conv_block(pool_256, FILTER_SIZE, 2*FILTER_NUM, dropout_rate, batch_norm)
    pool_128 = layers.MaxPooling2D(pool_size=(2, 2))(conv_256)
    # DownRes 3
    conv_128 = res_conv_block(pool_128, FILTER_SIZE, 4*FILTER_NUM, dropout_rate, batch_norm)
    pool_64 = layers.MaxPooling2D(pool_size=(2, 2))(conv_128)
    # DownRes 4
    conv_64 = res_conv_block(pool_64, FILTER_SIZE, 8*FILTER_NUM, dropout_rate, batch_norm)
    pool_32 = layers.MaxPooling2D(pool_size=(2, 2))(conv_64)
    # DownRes 5, convolution only
    conv_32 = res_conv_block(pool_32, FILTER_SIZE, 16*FILTER_NUM, dropout_rate, batch_norm)
    # Upsampling layers
    # UpRes 6, attention gated concatenation + upsampling + double residual convolution
    gating_64 = gating_signal(conv_32, 8*FILTER_NUM, batch_norm)
    att_64 = attention_block(conv_64, gating_64, 8*FILTER_NUM)
    up_64 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(conv_32)
    up_64 = layers.concatenate([up_64, att_64], axis=axis)
    up_conv_64 = res_conv_block(up_64, FILTER_SIZE, 8*FILTER_NUM, dropout_rate, batch_norm)
    # UpRes 7
    gating_128 = gating_signal(up_conv_64, 4*FILTER_NUM, batch_norm)
    att_128 = attention_block(conv_128, gating_128, 4*FILTER_NUM)
    up_128 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(up_conv_64)
    up_128 = layers.concatenate([up_128, att_128], axis=axis)
    up_conv_128 = res_conv_block(up_128, FILTER_SIZE, 4*FILTER_NUM, dropout_rate, batch_norm)
    # UpRes 8
    gating_256 = gating_signal(up_conv_128, 2*FILTER_NUM, batch_norm)
    att_256 = attention_block(conv_256, gating_256, 2*FILTER_NUM)
    up_256 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(up_conv_128)
    up_256 = layers.concatenate([up_256, att_256], axis=axis)
    up_conv_256 = res_conv_block(up_256, FILTER_SIZE, 2*FILTER_NUM, dropout_rate, batch_norm)
    # UpRes 9
    gating_512 = gating_signal(up_conv_256, FILTER_NUM, batch_norm)
    att_512 = attention_block(conv_512, gating_512, FILTER_NUM)
    up_512 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(up_conv_256)
    up_512 = layers.concatenate([up_512, att_512], axis=axis)
    up_conv_512 = res_conv_block(up_512, FILTER_SIZE, FILTER_NUM, dropout_rate, batch_norm)
    # 1*1 convolutional layers
    conv_final = layers.Conv2D(NUM_CLASSES, kernel_size=(1, 1))(up_conv_512)
    conv_final = layers.BatchNormalization(axis=axis)(conv_final)
    conv_final = layers.Activation('sigmoid')(conv_final)  # Change to softmax for multichannel
    # Model integration
    model = models.Model(inputs, conv_final, name="AttentionResUNet")
    return model

input_shape = (512, 512, 3)
model = Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.0, batch_norm=True)
model.summary()
The loss I am using is BinaryFocalLoss or Dice-BCE. With BinaryFocalLoss I am getting negative loss; the Dice binary cross-entropy loss crashes the GPU:
def DiceBCELoss(y_true, y_pred, smooth=1e-15):
    # flatten label and prediction tensors
    y_true = tf.keras.layers.Flatten()(y_true)
    y_pred = tf.keras.layers.Flatten()(y_pred)
    BCE = tf.nn.softmax_cross_entropy_with_logits(y_true, y_pred)
    intersection = K.sum(K.dot(y_true, y_pred))
    dice_loss = 1 - (2. * intersection + smooth) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    Dice_BCE = BCE + dice_loss
    return Dice_BCE
Code for training the model:
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # set to 1 for warnings and errors
import cv2
import keras
import keras.utils
from glob import glob
from sklearn.utils import shuffle
from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger, ReduceLROnPlateau, EarlyStopping, TensorBoard
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Recall, Precision

H = 512
W = 512

from focal_loss import BinaryFocalLoss  # for tough-to-classify segment class

def create_dir(path):
    """ Create a directory. """
    if not os.path.exists(path):
        os.makedirs(path)

def shuffling(x, y):
    x, y = shuffle(x, y, random_state=42)
    return x, y

def load_data(path):
    x = sorted(glob(os.path.join(path, "image", "*.png")))
    y = sorted(glob(os.path.join(path, "mask", "*.png")))
    return x, y

def read_image(path):
    path = path.decode()
    x = cv2.imread(path, cv2.IMREAD_COLOR)
    x = x/255.0
    x = x.astype(np.float32)
    return x

def read_mask(path):
    path = path.decode()
    x = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    x = x/255.0
    x = x > 0.5
    x = x.astype(np.float32)
    x = np.expand_dims(x, axis=-1)
    return x

def tf_parse(x, y):
    def _parse(x, y):
        x = read_image(x)
        y = read_mask(y)
        return x, y
    x, y = tf.numpy_function(_parse, [x, y], [tf.float32, tf.float32])
    x.set_shape([H, W, 3])
    y.set_shape([H, W, 1])
    return x, y

def tf_dataset(x, y, batch=8):
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    dataset = dataset.map(tf_parse)
    dataset = dataset.batch(batch)
    dataset = dataset.prefetch(10)
    return dataset

if __name__ == "__main__":
    """ Seeding """
    np.random.seed(42)
    tf.random.set_seed(42)

    """ Directory for storing files """
    create_dir("files")

    """ Hyperparameters """
    batch_size = 2
    lr = 0.001
    num_epochs = 60
    model_path = os.path.join("files", "model.h5")
    csv_path = os.path.join("files", "data.csv")

    """ Dataset """
    train_path = os.path.join("/content/drive/MyDrive/Data_brain/train/")
    valid_path = os.path.join("/content/drive/MyDrive/Data_brain/test/")
    train_x, train_y = load_data(train_path)
    train_x, train_y = shuffling(train_x, train_y)
    valid_x, valid_y = load_data(valid_path)
    print(f"Train: {len(train_x)} - {len(train_y)}")
    print(f"Valid: {len(valid_x)} - {len(valid_y)}")
    train_dataset = tf_dataset(train_x, train_y, batch=batch_size)
    valid_dataset = tf_dataset(valid_x, valid_y, batch=batch_size)

    """ Model """
    model = Attention_ResUNet(input_shape)
    metrics = ['binary_accuracy', Recall(), Precision()]
    model.compile(loss=DiceBCELoss, optimizer=Adam(lr), metrics=metrics)
    callbacks = [
        ModelCheckpoint(model_path, verbose=1, save_best_only=True),
        #ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, min_lr=1e-7, verbose=1),
        CSVLogger(csv_path),
        TensorBoard(),
        #EarlyStopping(monitor='val_loss', patience=50, restore_best_weights=False),
    ]
    model.fit(
        train_dataset,
        epochs=num_epochs,
        validation_data=valid_dataset,
        callbacks=callbacks,
        shuffle=False) |
st207426 | I wanted to share a project I have been collaborating on with Soumik.
We take the problem of segmenting 3D point clouds, which is important for modeling geometric properties from data. Today, we are delighted to open-source our repository that implements the PointNet [1] model family for this purpose. We provide TensorFlow implementations with full support for TPUs and distributed training with mixed precision (for GPUs). We provide models pre-trained on the four categories of the ShapeNet core dataset [2]. Here's also a blog post we have prepared to make it easier to get started.
(Figures: segmented car and airplane point clouds)
As always, don’t hesitate to reach out if you have any questions.
References:
[1] https://arxiv.org/abs/1612.00593
[2] https://shapenet.org/ |
st207427 | This is great! I’m always interested in deep learning for new domains and data types. |
st207428 | Hello all,
I have a question about which solution will fits this case the best:
Case:
I have 3 kinds of text blocks: ingredients, preparation, and dosage, and I want to classify these types
I have a lot data that are already categorized for training
I hope someone has some papers, GitHub links, or even better, experience with this case.
best regards !! |
st207429 | You can start with this tutorial: Classify structured data with feature columns | TensorFlow Core
It demonstrates how to deal with various data types. |
st207430 | I'm sorry, I don't mean data types. I want to classify whether a text is an ingredient statement, a preparation statement, or a dosage statement |
st207431 | It seems that your problem is just a 3-class classification task on a single recipe step.
But more generally on this specific topic, I suggest taking a look at cooking recipe NER applied to the RecipeDB dataset, as it is more interesting:
https://arxiv.org/abs/2004.12184 |
st207432 | Maybe experiment with Transformers, which are arguably the most advanced types of architectures for quite a few domains.
Some ideas:
BERT example: Classify text with BERT | Text | TensorFlow (uses an “out-of-the-box” IMDB dataset)
A Keras example from the community: Text classification with Switch Transformer (uses an “out-of-the-box” IMDB dataset)
(External) Multi-Class Classification With Transformers: https://towardsdatascience.com/multi-class-classification-with-transformers-6cf7b59a033a (haven’t tested this but may be useful)
Ekaterina_Dranitsyna:
You can start with this tutorial: Classify structured data with feature columns | TensorFlow Core
It demonstrates how to deal with various data types.
+1 (note that Keras preprocessing layers and/or TF Text API may be more recommended cc @markdaoust)
Pre-processing the dataset and creating a pipeline could be a challenging task especially with a custom dataset. How about also: BERT Preprocessing with TF Text | TensorFlow and Text classification with an RNN | TensorFlow (if you want to try a less complex model first). |
st207433 | There is a community ticket, tfjs #3835 - it asks to add a rotation angle for face mesh.
I have submitted an MR for it: #844
I certainly understand the tfjs team has its own priorities; however, my MR has been dangling there for two weeks already.
I would be glad to get some response, so I can continue with the MR. |
st207434 | Thank you for reminding us. This is a highly sought-after feature; we are working on it. Replied to the ticket. |
st207435 | Hi, I want to create a Python bot that can solve multiple-choice questions from any given image |
st207436 | Hi,
I trained a Fast RCNN model to detect water puddles, and the model predicted well. However, there is an issue with the model when decoding a video stream running at 30 fps. As shown in the attached images,
frame# 476 - detected a puddle with 100% confidence level
frame# 477 - did not detect anything
frame# 478 - detected the same puddle again at 100% confidence level.
I would like to know if anyone has similar experience with the Fast RCNN model, and what you did to fix it.
FYI, I also did training with two other models, MobileNet v2 SSD and ResNet. These two models gave gradual prediction results (confidence level fluctuates) as the camera is panned over the subject. Fast RCNN behaves erratically; for the most part, the confidence level of the detected object is either > 98% or close to zero. Please share if there is a way to fix this!
frame 476: frame476.jpg - Google Drive
frame 477: frame477.jpg - Google Drive |
st207437 | I’m looking into implementing high-quality image-processing operations using TF. For example, I’d like to have a higher-quality downsampling method, like Lanczos as a TF model. Please forward any references to this sort of work you are aware of.
For example, a basic Gaussian blur can be implemented by passing a custom-width kernel to tf.conv2d() (I’m using TFJS). This works great, but has the expected issues along the image boundary. Production-quality image processing tools solve this edge problem in one of a few ways, typically by adjusting the kernel weights outside the image to zero. However, I’m not experienced enough at how to set different kernels along the image boundaries.
Can anyone provide some tips?
For more context, here’s code that does a simple NxN Gaussian blur, without handling the borders. I’d love to figure out how to enhance this code to provide different kernels along the boundary rows and columns to do a better job of handling the edges (ie. not blending with zero).
const lanczos = (x, a) => {
if (x === 0) return 1
if (x >= -a && x < a) {
return (a * Math.sin(Math.PI * x) * Math.sin(Math.PI * (x / a))) / (Math.PI * Math.PI * x * x)
}
return 0
}
const gaussian = (x, theta = 1 /* ~ -3 to 3 */) => {
const C = 1 / Math.sqrt(2 * Math.PI * theta * theta)
const k = -(x * x) / (2 * theta * theta)
return C * Math.exp(k)
}
const filters = {
Lanczos3: x => lanczos(x, 3),
Lanczos2: x => lanczos(x, 2),
Gaussian: x => gaussian(x, 1),
Bilinear: () => 1,
Nearest: () => 1,
}
const normalizedValues = (size, filter) => {
let total = 0
const values = []
for (let y = -size; y <= size; ++y) {
const i = y + size
values[i] = []
for (let x = -size; x <= size; ++x) {
const j = x + size
values[i][j] = []
const f = filter(x) * filter(y)
total += f
for (let c = 0; c < 3; ++c) {
values[i][j][c] = [ f, f, f ]
}
}
}
const kernel = values.map(row => row.map(col => col.map(a => a.map(b => b / total))))
// for (let x = -size; x <= size; ++x) values[x + size] = filter(x)
// const kernel = tf.einsum('i,j->ij', values, values)
// const sum = tf.sum(values)
const normalized = tf.div(kernel, total * 3)
return normalized
}
const frame = async (tensor, args) => {
const filter = filters[args.filter]
// const [ height, width ] = tensor.shape
// const res = args.resolution === 'Source' ? [ width, height ] : resolutions[args.resolution]
// const strides = [ width / res[0], height / res[1] ]
const { zoom, kernelWidth } = args
const strides = Math.max(1, zoom)
const size = Math.max(3, kernelWidth) * strides
const kernel = normalizedValues(size, filter)
const pad = 'valid' // sample to the edge, even when filter extends beyond image
const dst = tf.conv2d(tensor, kernel, strides, pad)
return { tensor: dst }
} |
st207438 | Dan_Wexler:
I’d love to figure out how to enhance this code to provide different kernels along the boundary rows and columns to do a better job of handling the edges (ie. not blending with zero)
Could you be more specific here?
You always have the option to slice the image into (overlapping): center, edges, corners. Apply the Conv to the center portion, apply modified convs to the edges and corners. And then 2d stack them back together. But there may be a short-cut depending on how you plan to modify the kernel at the edges. |
st207439 | Right - slicing the image up into “center” and then a set of rows and columns is one option. The other is to change the kernel for those “pixels” and keep the image in a single buffer. Specifically, we need to adjust the kernel by renormalizing it after excluding values “outside” the source buffer, rather than by assuming those values are zero and using them with a constant kernel as in tf.conv2d. Wikipedia calls this approach “kernel crop”. Note that “extending” the edges is also acceptable in most situations with image processing, and is often easier to implement on the CPU. The “wrap” and “mirror” options are more specific to texture rendering in 3D.
Assume a 5x5 kernel for simplicity (it should always be an odd width). Ignoring the corners for now, the first row would want a 3x5 kernel, renormalized after removing the two rows that are outside the source buffer. The second row would require a renormalized 4x5 kernel.
Note that the GPU hardware supports the “extend”, “mirror” and “wrap” modes in the core texture sampling hardware. I don’t know how this maps to the various TF backends, but I’m curious to learn. The hardware also supports non-integer bilerp sampling, which would also help, but that’s another question. So, in addition to the slicing you propose, we can also pad the input by extending the final row and column out to half the kernel width.
I’m happy to dive in and learn these items and was mostly looking for tips as to where to start as this is my first spelunking into the building of models from scratch in TF. |
st207440 | One simple but inefficient pattern I’ve used is to make an image of 1s.
Run the weight kernel over the image of 1s, with zero padding. The result is a weight-image where the value of each pixel is the weight of the kernel that was valid at that location.
Then run the actual kernel over the actual image, with zero padding, to get the conv-image.
Then divide the conv-image by the weight image.
I’m pretty sure that’s equivalent.
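A sketch of that trick (written in Python/TF for brevity; the same ops exist in TFJS as tf.conv2d and tf.div):
import tensorflow as tf

def conv2d_edge_renormalized(image, kernel, strides=1):
    # image: [batch, h, w, c]; kernel: [kh, kw, c_in, c_out]
    conv = tf.nn.conv2d(image, kernel, strides=strides, padding="SAME")
    ones = tf.ones_like(image)
    weight = tf.nn.conv2d(ones, kernel, strides=strides, padding="SAME")
    # In the interior, weight equals the full kernel sum; near the borders it
    # is smaller, so the division renormalizes the cropped kernel.
    return conv / weight |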
st207441 | My hope is to find a near optimal, single-pass algorithm that avoids data copies. Otherwise, I think it makes more sense to get the Tensor’s texture ID and perform the IP using WebGL shaders. |
st207442 | What is your goal? Can you make an example about your input and expected output? |
st207443 | I mean, if I have a lot of ideas in my mind, how can I tackle them using TensorFlow? Is it possible or not? |
st207444 | There are many projects trying to understand brain signals with TF, e.g.:
NVIDIA Developer Blog (19 Jul 2021): Transforming Brain Waves into Words with AI - new research out of the University of California, San Francisco has given a paralyzed man the ability to communicate by translating his brain signals into computer generated writing.
I have a lot of ideas in my mind
This seems a little bit too generic.
See also:
GitHub: SuperBruceJia/EEG-DL - A Deep Learning library for EEG Tasks (Signals) Classification, based on TensorFlow. |
st207445 | In case you just want to play a little bit with an open-source BCI and TF, take a look at:
GitHub: CrisSherban/BrainPad - Classification of EEG signals from the brain through OpenBCI hardware and the TensorFlow-Keras API. |
st207446 | I am trying the following text classification code from François Chollet’s Deep Learning book on macOS Big Sur (version 11.2.3, M1 chip) for the first time. I am using TensorFlow version 2.5.0
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
max_features = 2000
max_len = 500
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
model = tf.keras.models.Sequential()
model.add(layers.Embedding(max_features, 128,input_length=max_len,name='embed'))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer='rmsprop',loss='binary_crossentropy',metrics=['acc'])
callbacks=[tf.keras.callbacks.TensorBoard(log_dir='my_log_dir',histogram_freq=1,embeddings_freq=1,)]
history = model.fit(x_train, y_train, epochs=20, batch_size=128,
validation_split=0.2,callbacks=callbacks)
Why are these accuracy and loss values so high? I get different values if I run the same code on Windows 10.
Results on macOS: (screenshot)
Results on Windows 10: (screenshot)
Even when running other code using the Adam optimizer, I get a notice that the kernel appears to have died.
Results on macOS: (screenshot)
And how do I avoid these warnings (in the pink box)?
Please help!! |
st207447 | Hi, I'm trying to make a motion-tracking application with MoveNet in React Native.
I've confirmed keypoints are detected and show up in the console, but I'm having trouble enabling the tracker.
How can I enable the built-in keypoint tracker in MoveNet?
Attached my source code below
import React, { useState, useEffect, useCallback, useMemo } from 'react';
import { View, StyleSheet, Platform, TouchableOpacity, Text } from 'react-native';
import Icon from 'react-native-vector-icons/Ionicons'
import { Colors } from 'react-native-paper';
import { Camera } from 'expo-camera';
import * as tf from '@tensorflow/tfjs';
import {cameraWithTensors} from '@tensorflow/tfjs-react-native';
import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';
import '@mediapipe/pose';
let coords = []
export const CameraView = () => {
const [hasPermission, setHasPermission] = useState(null);
const [poseDetector, setPoseDetector] = useState(null);
const [frameworkReady, setFrameworkReady] = useState(false);
const backCamera = Camera.Constants.Type.back
const frontCamera = Camera.Constants.Type.front
const [camType, setCamType] = useState(backCamera)
const TensorCamera = cameraWithTensors(Camera);
let requestAnimationFrameId = 0;
const textureDims = Platform.OS === "ios"? { width: 1080, height: 1920 } : { width: 1600, height: 1200 };
const tensorDims = { width: 152, height: 200 };
const iconPressed = useCallback(() => camType === backCamera? setCamType(frontCamera):setCamType(backCamera),[camType])
const model = poseDetection.SupportedModels.MoveNet;
const detectorConfig = {
modelType: poseDetection.movenet.modelType.MULTIPOSE_LIGHTNING,
enableTracking: true,
trackerType: poseDetection.TrackerType.Keypoint,
trackerConfig: {maxTracks: 4,
maxAge: 1000,
minSimilarity: 1,
keypointTrackerParams:{
keypointConfidenceThreshold: 1,
keypointFalloff: [],
minNumberOfKeypoints: 4
}
}
}
const detectPose = async (tensor) =>{
if(!tensor) return
const poses = await poseDetector.estimatePoses(tensor)
if (poses[0] !== undefined) {
const points = poses[0].keypoints.map(point => [point.x,point.y,point.name])
console.log(points)
coords = points
} else {
coords = []
}
///console.log(coords)
}
const handleCameraStream = (imageAsTensors) => {
const loop = async () => {
const nextImageTensor = await imageAsTensors.next().value;
await detectPose(nextImageTensor);
requestAnimationFrameId = requestAnimationFrame(loop);
};
if (true) loop();
}
useEffect(() => {
if(!frameworkReady) {
;(async () => {
const { status } = await Camera.requestPermissionsAsync();
console.log(`permissions status: ${status}`);
setHasPermission(status === 'granted');
await tf.ready();
setPoseDetector(await poseDetection.createDetector(model, detectorConfig))
setFrameworkReady(true);
})();
}
}, []);
useEffect(() => {
return () => {
cancelAnimationFrame(requestAnimationFrameId);
};
}, [requestAnimationFrameId]);
return(
<View style={styles.cameraView}>
<TensorCamera
style={styles.camera}
type={camType}
zoom={0}
cameraTextureHeight={textureDims.height}
cameraTextureWidth={textureDims.width}
resizeHeight={tensorDims.height}
resizeWidth={tensorDims.width}
resizeDepth={3}
onReady={(imageAsTensors) => handleCameraStream(imageAsTensors)}
autorender={true}
>
</TensorCamera>
<TouchableOpacity style={[styles.absoluteView]} activeOpacity={0.1}>
<Icon name="camera-reverse-outline" size={40} color="white" onPress={iconPressed}/>
</TouchableOpacity>
</View>
)
}
const styles = StyleSheet.create({
camera:{flex:1},
cameraView:{flex:1},
absoluteView:{
position:'absolute',
right:30,
bottom: Platform.select({ios:40, android:30}),
padding: 10,
},
tracker:{
position:'absolute',
width:10,
height:10,
borderRadius:5,
backgroundColor: Colors.blue500
}
}) |
st207448 | Hi @11130, you need to lower minSimilarity. The semantics of this field: if the similarity between the current pose and a tracked pose is larger than minSimilarity, we consider them the same person. 1 is the largest possible similarity score, so setting it to 1 means a pose is only matched if it is exactly the same as before. The default minSimilarity is 0.15; if you want the default, you can omit this field. Same for keypointConfidenceThreshold — the default is 0.3. You also need to set values for keypointFalloff; the default is
[0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089].
I suggest just using the defaults — to do so, omit keypointTrackerParams entirely.
st207449 | Thanks for the reply.
The first time, I already tried it without trackerConfig, but nothing happened —
that's why I tried entering the parameters manually.
I'm wondering if it's because I'm using the SINGLEPOSE_LIGHTNING model.
st207450 | It is only needed for multipose; from your shared code, it seems you are using multipose_lightning. Can you clarify what you mean by "nothing happened"? Did the poseId change every time?
st207451 | Yes, I can see the ID is updated.
By "nothing happened" I meant that even though I enabled the tracker (with or without trackerConfig) and tried both the SINGLEPOSE and MULTIPOSE models, I couldn't see the tracker working on screen.
st207452 | Can you open an issue in our tfjs repo? We’ll have someone look into it. Thanks. |
st207453 | @11130, could you share the link to your repo? It seems the code is not working. And are you running the Expo app locally — that is, are the models and weights saved on the device?
st207454 | @lina128, do you have a recommended tutorial on how to run MoveNet in React Native Expo? The information seems to be scattered everywhere, and it would be nice to check against a working example (with React Native Expo). It would be great if you could point to one.
st207455 | Hi all,
I'm trying to implement an algorithm I found in the paper Robust and Communication-Efficient Federated Learning from Non-IID Data by Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek.
I want to implement it inside the simple_fedavg example offered by TensorFlow Federated. I have already created the algorithm, and it seems to work fine in test cases; the real problem is fitting it into the simple_fedavg project. I don't see where I could change what the client sends to the server and what the server expects to receive.
Basically, from client_update I don't want to send weights_delta; instead I want to send a simple list like [[list of negative indexes], [list of positive indexes], [average value]], and then on the server side I will recreate the weights as explained in the paper. But I can't figure out how to change this behaviour.
English is not my main language, so I hope I have explained the problem well enough.
test = weights_delta.copy()
for index in range(len(weights_delta)):
    original_shape = tf.shape(weights_delta[index])
    tensor = tf.reshape(test[index], [-1])
    negatives, positives, average = test_stc.stc_compression(tensor, sparsification_rate)
    test[index] = test_stc.stc_decompression(negatives, positives, average, tensor.get_shape().as_list(), original_shape)
    test[index] = test_stc.stc_compression(tensor, sparsification_rate)
client_weight = tf.cast(num_examples, tf.float32)
return ClientOutput(test, client_weight, loss_sum / client_weight)
This is the behaviour I would like; stc_compression returns a tuple. Then I would like to access each "test" variable sent from a client on the server side and recreate all the weights.
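For readers unfamiliar with the compression step, here is a minimal eager-mode sketch of what such an stc_compression / stc_decompression pair could look like (my own illustration, not the paper's reference code):

import tensorflow as tf

def stc_compression(tensor, sparsification_rate):
    # Keep only the k largest-magnitude entries, split their indices by sign,
    # and represent all kept entries by the mean of their magnitudes.
    k = max(1, int(sparsification_rate * int(tf.size(tensor))))
    values, indices = tf.math.top_k(tf.abs(tensor), k=k)
    average = tf.reduce_mean(values)
    is_negative = tf.gather(tensor, indices) < 0
    negatives = tf.boolean_mask(indices, is_negative)
    positives = tf.boolean_mask(indices, ~is_negative)
    return negatives, positives, average

def stc_decompression(negatives, positives, average, flat_shape, original_shape):
    # Rebuild a dense tensor: +average at positive indices, -average at negatives.
    flat = tf.zeros(flat_shape, dtype=tf.float32)
    flat = tf.tensor_scatter_nd_update(
        flat, tf.expand_dims(positives, -1), tf.fill(tf.shape(positives), average))
    flat = tf.tensor_scatter_nd_update(
        flat, tf.expand_dims(negatives, -1), tf.fill(tf.shape(negatives), -average))
    return tf.reshape(flat, original_shape)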
st207456 | @lgusm Can we have some TF federated team member subscribed to this federated tag?
Thanks |
st207457 | I'm using TensorFlow Federated, but I'm having some problems trying to execute operations on the server after reading the client update.
This is the function
@tff.federated_computation(federated_server_state_type,
                           federated_dataset_type)
def run_one_round(server_state, federated_dataset):
    """Orchestration logic for one round of computation.

    Args:
      server_state: A `ServerState`.
      federated_dataset: A federated `tf.data.Dataset` with placement `tff.CLIENTS`.

    Returns:
      A tuple of updated `ServerState` and `tf.Tensor` of average loss.
    """
    tf.print("run_one_round")
    server_message = tff.federated_map(server_message_fn, server_state)
    server_message_at_client = tff.federated_broadcast(server_message)

    client_outputs = tff.federated_map(
        client_update_fn, (federated_dataset, server_message_at_client))

    weight_denom = client_outputs.client_weight

    tf.print(client_outputs.weights_delta)

    round_model_delta = tff.federated_mean(
        client_outputs.weights_delta, weight=weight_denom)

    server_state = tff.federated_map(server_update_fn, (server_state, round_model_delta))
    round_loss_metric = tff.federated_mean(client_outputs.model_output, weight=weight_denom)

    return server_state, round_loss_metric, client_outputs.weights_delta.comp
I want to print client_outputs.weights_delta and perform some operations on the weights that the clients sent to the server before using tff.federated_mean, but I don't see how to do so.
When I try to print, I get this:
Call(Intrinsic('federated_map', FunctionType(StructType([FunctionType(StructType([('weights_delta', StructType([TensorType(tf.float32, [5, 5, 1, 32]), TensorType(tf.float32, [32]), ....]) as ClientOutput, PlacementLiteral('clients'), False)))]))
Is there any way to modify those elements?
I tried returning client_outputs.weights_delta.comp, doing the modification in main (I can do that), and then invoking a new method to do the rest of the server-update operations, but the error is:
AttributeError: 'IterativeProcess' object has no attribute 'calculate_federated_mean'
where calculate_federated_mean was the name of the new function I created.
This is the main:
for round_num in range(FLAGS.total_rounds):
    print("--------------------------------------------------------")
    sampled_clients = np.random.choice(train_data.client_ids, size=FLAGS.train_clients_per_round, replace=False)
    sampled_train_data = [train_data.create_tf_dataset_for_client(client) for client in sampled_clients]
    server_state, train_metrics, value_comp = iterative_process.next(server_state, sampled_train_data)
    print(f'Round {round_num}')
    print(f'\tTraining loss: {train_metrics:.4f}')
    if round_num % FLAGS.rounds_per_eval == 0:
        server_state.model_weights.assign_weights_to(keras_model)
        accuracy = evaluate(keras_model, test_data)
        print(f'\tValidation accuracy: {accuracy * 100.0:.2f}%')
        tf.print(tf.compat.v2.summary.scalar("Accuracy", accuracy * 100.0, step=round_num))
This is based on the simple_fedavg project from the TensorFlow Federated GitHub repository.
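One pattern that may help (a sketch under my own assumptions about the TFF types involved — not tested against this exact project): wrap the per-client transformation in a tff.tf_computation and map it over the clients before the mean:

@tff.tf_computation(weights_delta_type)  # assumed: the TFF type of one client's weights_delta
def transform_delta(weights_delta):
    # Any per-client TF ops go here, e.g. STC compress + decompress each tensor.
    return tf.nest.map_structure(tf.identity, weights_delta)

# Inside run_one_round, before the mean:
transformed = tff.federated_map(transform_delta, client_outputs.weights_delta)
round_model_delta = tff.federated_mean(transformed, weight=weight_denom)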
st207458 | So I tried to swap out ResNet for EfficientDet-B3 in the Eager Few Shot OD Training TF2 tutorial.
Now, based on all the positive feedback EfficientDet got, I am very surprised that ResNet outperformed EfficientDet in this tutorial. In total, EfficientDet got trained on 1,700 batches, while I ran ResNet through the tutorial's standard 100 batches.
EfficientDet-B3, the tail of the final 1,000-batch run (of 1,700 total):
batch 950 of 1000, loss=0.21693243
batch 955 of 1000, loss=0.18070191
batch 960 of 1000, loss=0.1715184
batch 965 of 1000, loss=0.23656633
batch 970 of 1000, loss=0.16813375
batch 975 of 1000, loss=0.23602965
batch 980 of 1000, loss=0.14852181
batch 985 of 1000, loss=0.18400437
batch 990 of 1000, loss=0.22741726
batch 995 of 1000, loss=0.20477971
Done fine-tuning!
ResNet for 100 batches:
batch 0 of 100, loss=1.1079819
batch 10 of 100, loss=0.07644452
batch 20 of 100, loss=0.08746071
batch 30 of 100, loss=0.019333005
batch 40 of 100, loss=0.0071129226
batch 50 of 100, loss=0.00465827
batch 60 of 100, loss=0.0041421074
batch 70 of 100, loss=0.0026128457
batch 80 of 100, loss=0.0023376464
batch 90 of 100, loss=0.002139934
Done fine-tuning!
Why does EfficientDet need so much more training time than ResNet? Is it because EfficientDet-B3 (the one I tested) has only about 12 million parameters, versus about 25 million for ResNet50? Or are there other reasons?
The end result (the .gif at the end of the tutorial) also shows a huge difference in accuracy, where ResNet performs much better.
Thanks for any input! |
st207459 | Hello — when you say "outperform", are you referring to the evaluation/test metrics? You should always monitor the gap between the training and validation losses (they should behave similarly) and then evaluate the final performance on a test set. Otherwise ResNet might just be overfitting the data, which is why you get an extremely small loss. Are the losses you posted training losses?
st207460 | Thank you for your reply.
This is based on the loss during training and the final .gif that is rendered after predictions have been performed on the test images.
st207461 | You would need to compare the evaluation metrics on a larger set of test/evaluation examples to conclude which model is better. From the training losses alone I would infer ResNet is overfitting, but the final conclusion always comes from evaluating the model on a large set of test cases.
st207462 | Thank you again!
The result for EfficientDet after 200 batches looks like this when run on the test images:
EfficientDet test result (gif)
and for ResNet with 200 batches:
ResNet test result (gif)
So it does not seem to be overfitting(?), as ResNet has high accuracy on the test data.
I am just trying to understand why there is such a huge difference in the results between the two models.
Thanks!
st207463 | What's your EfficientDet training script? Provide a Colab link if possible.
A batch size of 4 is too small for EfficientDet.
st207464 | Hello, I need help with running a TensorFlow v1 model on a ZED camera. I have trained my model on custom data, but it is fluctuating a lot, displaying random boxes on screen.
st207465 | DeepMind Nowcasting
Our latest research and state-of-the-art model advances the science of precipitation nowcasting.
GitHub: https://github.com/deepmind/deepmind-research/tree/master/nowcasting
Colab: Google Colab
Paper: Skilful precipitation nowcasting using deep generative models of radar | Nature
Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints5,6. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor performance on rarer medium-to-heavy rain events. Here we present a deep generative model for the probabilistic nowcasting of precipitation from radar that addresses these challenges.
st207466 | this is super cool and there's a colab for you to try the model: https://colab.sandbox.google.com/github/deepmind/deepmind-research/blob/master/nowcasting/Open_sourced_dataset_and_model_snapshot_for_precipitation_nowcasting.ipynb
st207467 | I am stuck on one problem while defining a model in TF2. I am using a dense layer with a softmax activation function. Now I want to extract the index with the highest probability from that softmax layer's output, so that I can use that index when defining a later layer while building the model. Please help me implement this — waiting for your quick reply.
st207468 | tf.math.argmax is the TensorFlow operation that will allow you to do this. If you have a Keras model, you can wrap it in a custom layer, as described in the tutorial: Custom layers | TensorFlow Core
The final code will look something like:

class IndexOfMaxLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(IndexOfMaxLayer, self).__init__()

    def call(self, inputs):
        # axis=-1 takes the argmax per example rather than across the batch
        return tf.math.argmax(inputs, axis=-1)

Now you can use this code after whichever dense / softmax layer you want to extract the max value from.
st207469 | Animesh_Sinha:

class IndexOfMaxLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(IndexOfMaxLayer, self).__init__()

    def call(self, inputs):
        return tf.math.argmax(inputs, axis=-1)

I want to select one layer from a list (5 layers are already stacked inside this list) using this index. But the custom layer above returns a Keras tensor, which does not help me select a layer from that list.
st207470 | I didn't really get your question — so you have a neural network with 5 different outputs, and now you want to take the one with the presumably maximum probability and propagate its outputs ahead? In that case it's pretty much the same, except that you will first have to take the max of the outputs of each of those layers, put them together in a tensor, take the argmax of that tensor, and use it to select which layer you want to propagate from. I am not sure I fully get your problem; an example would help.
st207471 | Thanks for your response. I am just giving an example:

import tensorflow as tf
from transformers import BertTokenizerFast
from transformers import TFBertModel

mlm_model = TFBertModel.from_pretrained('bert-base-uncased', output_attentions=True)

input_word_ids = tf.keras.layers.Input(shape=(50,), dtype=tf.int32, name="input_word_ids")
token_type_ids = tf.keras.layers.Input(shape=(50,), dtype=tf.int32, name="token_type_ids")
attention_mask = tf.keras.layers.Input(shape=(50,), dtype=tf.int32, name="attention_mask")

mlm_out = mlm_model([input_word_ids, token_type_ids, attention_mask])
print("Attention heads are", mlm_out[2])

mlm_out[2] contains attention tensors with 12 heads each; now I want to extract the single attention head from mlm_out[2] that has the highest probability. Is this possible or not?
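If it helps, one way to pick a "best" head from one layer's attention tensor might look like this (a sketch — scoring a head by its largest attention weight is an assumption on my part):

import tensorflow as tf

def select_best_head(attentions):
    # attentions: one layer's tensor of shape (batch, num_heads, seq_len, seq_len)
    head_scores = tf.reduce_max(attentions, axis=[2, 3])      # (batch, num_heads)
    best = tf.argmax(head_scores, axis=-1)                    # (batch,)
    return tf.gather(attentions, best, axis=1, batch_dims=1)  # (batch, seq_len, seq_len)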
st207472 | As of today, different variants of MLP-Mixers [1] are now available on TensorFlow Hub. Below are the details:
Models: https://tfhub.dev/sayakpaul/collections/mlp-mixer/1
Code: https://git.io/JzR68
Here is some example usage:
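(The original embed was lost in extraction; here is a hedged sketch of the kind of usage meant — the exact handle below is an assumption, so check the collection page for the real model names.)

import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical handle from the MLP-Mixer collection linked above.
handle = "https://tfhub.dev/sayakpaul/mixer_b16_i1k_classification/1"

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    hub.KerasLayer(handle),
])
predictions = model.predict(tf.random.uniform((1, 224, 224, 3)))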
References:
[1] MLP-Mixer: An all-MLP Architecture for Vision by Tolstikhin et al.
st207473 | Hi guys, I tried to implement the model for TensorFlow 2.5.0, but when I run it, printing my loss returns None, and I get the error message: "RuntimeError: Attempting to capture an EagerTensor without building a function".
[screenshot: error traceback]
Hope you can help me find the bug.
This is my model code:
encoder model: [screenshot]
decoder model: [screenshot]
discriminator model: [screenshot]
training step: [screenshot]
loss functions: [screenshots]
Here is what I have checked:
1. I checked my dataset — there are no None values.
2. I checked my loss functions — there are no np.arrays; I converted everything to tf.Tensors.
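For what it's worth, this error usually appears when an EagerTensor created in eager mode is later captured by v1-style graph code — a minimal illustration (my guess at the cause, not a confirmed diagnosis of this model):

import tensorflow as tf

x = tf.constant(1.0)                    # created while eager execution is on -> EagerTensor
tf.compat.v1.disable_eager_execution()  # any code path that switches to graph building
y = x + 1.0                             # RuntimeError: Attempting to capture an EagerTensor
                                        # without building a function.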
This is my first time asking a question on this site; if I need to provide other code to help solve the problem, I will upload it. Thanks.
st207474 | Here is a Colab playground (Google Colaboratory link).
st207475 | I used this code to generate a Transformer pb file:

import torch
import torch.nn as nn

class Former(nn.Module):
    def __init__(self):
        super(Former, self).__init__()
        self.linear1 = nn.Linear(10, 512)
        self.linear2 = nn.Linear(10, 512)
        self.transformer = nn.Transformer()

    def forward(self, input):
        input1 = self.linear1(input)
        input2 = self.linear2(input)
        output = self.transformer(input1, input2)
        return output

src = torch.rand(1, 1, 10)
model = Former()
torch.onnx.export(model, src, "transformer.onnx", verbose=True,
                  input_names=["input"], opset_version=11)

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("./transformer.onnx")  # load onnx model
tf_rep = prepare(onnx_model)                  # prepare tf representation
tf_rep.export_graph("./transformer")          # export the model
After this I want to convert the model to an h5 model so I can use the profiler to count its FLOPs:

import onnx
import keras
from onnx2keras import onnx_to_keras

onnx_model = onnx.load('transformer.onnx')
k_model = onnx_to_keras(onnx_model, ['input'])
keras.models.save_model(k_model, './kerasModel.h5', overwrite=True, include_optimizer=True)

But the trouble is:
File "/home/kh/anaconda3/envs/3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 288, in _constant_impl
tensor_util.make_tensor_proto(
File "/home/kh/anaconda3/envs/3.8/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 564, in make_tensor_proto
append_fn(tensor_proto, proto_values)
File "tensorflow/python/framework/fast_tensor_util.pyx", line 127, in tensorflow.python.framework.fast_tensor_util.AppendObjectArrayToTensorProto
File "/home/kh/anaconda3/envs/3.8/lib/python3.8/site-packages/tensorflow/python/util/compat.py", line 86, in as_bytes
raise TypeError('Expected binary or unicode string, got %r' %
TypeError: Expected binary or unicode string, got 1536 |
st207476 | Hello everyone, I have a question about time series. I have a year of data on a specific machine's fuel consumption, and I would like to predict an approximate total consumption for the next month (a single value). Following the TensorFlow example, prediction works for me, but only for periods that are already known (i.e., checking the prediction against existing data). I have modified the time window, but I have not been able to predict a future date.
I am following this procedure:
https://www.tensorflow.org/tutorials/structured_data/time_series
Any recommendation or idea? I'd really appreciate it!
Thanks, Ricardo.
st207477 | Hey @Ricardo, can you share the requirements, specifically regarding your data?
How you model a time series problem comes down to how you outline the samples and labels. From a high-level view, there are roughly two approaches:
Single-step prediction
Multi-step prediction
This is my point of view.
It is really about how you choose the time window used to train your model — see the sketch below.
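A minimal sketch of the two windowing styles (illustrative only, with a stand-in series):

import numpy as np

def make_windows(series, input_width, label_width):
    # Slice a 1-D series into (inputs, labels) pairs:
    # input_width past steps predict the next label_width steps.
    X, y = [], []
    for i in range(len(series) - input_width - label_width + 1):
        X.append(series[i:i + input_width])
        y.append(series[i + input_width:i + input_width + label_width])
    return np.array(X), np.array(y)

hourly_fuel = np.arange(100, dtype="float32")                        # stand-in for real data
X_s, y_s = make_windows(hourly_fuel, input_width=24, label_width=1)  # single step
X_m, y_m = make_windows(hourly_fuel, input_width=24, label_width=6)  # multi step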
st207478 | Thanks for your answer! I have predictions in single-step form and also in multi-step form; I'll show you one of them.
Dataset:
[screenshot 1: dataset]
Time window — here some questions:
How can I see the date instead of the time in hours?
For this problem, is it a good idea to work with hours?
How can I see a date in the future? I only see dates that already have a label.
[screenshot 2: time window]
Prediction — for example, if I wanted this prediction, how can I get the data as a value?
[screenshot 3: prediction]
Thank you so much, Ricardo. |
st207479 | Ever wanted to use Vision Transformers with TF Hub and Keras? Well, pull your socks up now and get started. 16 different models are available for classification and fine-tuning. More details:
GitHub - sayakpaul/ViT-jax2tf: This repository hosts code for converting the original Vision Transformer models (JAX) to TensorFlow.
https://github.com/sayakpaul/ViT-jax2tf
The story does not end here. Using this notebook you can convert any supported model from the AugReg pool (> 50,000 models!) and use it inside TF Hub and Keras: https://colab.research.google.com/github/sayakpaul/ViT-jax2tf/blob/main/conversion.ipynb
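A sketch of the sort of usage the (lost) code screenshot likely showed — the handle below is an assumption, so see the collection for the exact model names:

import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical feature-extractor handle from the ViT collection above.
backbone = hub.KerasLayer("https://tfhub.dev/sayakpaul/vit_b16_fe/1", trainable=True)
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    backbone,
    tf.keras.layers.Dense(5, activation="softmax"),  # hypothetical 5-class head
])
model.compile(optimizer="adam", loss="categorical_crossentropy")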
st207480 | Well done Sayak! Great work as usual!
Just to complement, here is the link to the TF Hub collection (TensorFlow Hub).
st207481 | @lgusm It would be nice to also have the official Model Garden implementations in TF Hub when they are finalized:
https://github.com/tensorflow/models/tree/master/official/vision/beta/projects/vit
I think sooner or later we also need to figure out whether TF Hub needs a duplicates policy; if not, at some point we could confuse users.
st207482 | In that specific case it was introduced by a community PR, so I think we could start coordinating with the Model Garden review team to suggest an extended contribution to TF Hub once a model is merged into the Model Garden.
st207483 | I know that
flops = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.float_operation())
can calculate FLOPs.
But where can I find the graph of the Transformer?
Please help me.
st207484 | There is a quite long thread about this for TF 2.x:
"TF 2.0 Feature: Flops calculation" — github.com/tensorflow/tensorflow, issue opened Sep 25, 2019 by pzobel (labels: stat:awaiting tensorflower, type:feature, comp:tfdbg, TF 2.0). In short: in TF 1.x the number of floating-point operations of a tf.keras model could be computed via tf.profiler, but nothing equivalent exists in TF 2.0 yet.
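A commonly shared workaround from that thread is to wrap the Keras model in a concrete function, freeze it, and profile the frozen graph — a sketch (it relies on TF-internal helpers, so verify it against your TF version):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2)

def count_flops(model, input_shape):
    # Build a frozen graph from the model and profile its float ops.
    concrete = tf.function(model).get_concrete_function(
        tf.TensorSpec([1, *input_shape], tf.float32))
    frozen = convert_variables_to_constants_v2(concrete)
    opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
    flops = tf.compat.v1.profiler.profile(graph=frozen.graph, options=opts)
    return flops.total_float_ops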
st207485 | Hi. I'm very new to this forum and even newer to TensorFlow, and I have a quick question that I hope someone can help me with.
Basically, I'm working with a set of languages (Hausa, Yoruba, and Igbo) that do not have a reliable sentiment analysis model to process text with — unless I missed something. What I want is to create a custom model for each of these languages that scores and returns the sentiment of a sentence as accurately as possible.
My first attempt: I got a training dataset of text with human-scored sentiment, vectorized the text, and created a model using an Embedding layer (a simplified sketch of that baseline is below). The accuracy wasn't the best, and I don't know if this is the right way to continue. Selecting the right hyperparameters also seems like a separate job on its own.
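A simplified sketch of that baseline (hyperparameters and sample sentences are placeholders, not my actual code; on TF < 2.6, TextVectorization lives under tf.keras.layers.experimental.preprocessing):

import tensorflow as tf

vectorizer = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=100)
vectorizer.adapt(["ina son wannan fim", "fim din ba shi da kyau"])  # hypothetical Hausa samples

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorizer,
    tf.keras.layers.Embedding(20000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])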
Can anyone recommend on how you might approach this? And if there’s documentation on how a sentiment analysis model using these languages (or any non-English) language is created?
Any help would be appreciated. Thanks. |
st207486 | Hi, welcome to the forum! Is this for https://lacunafund.org/language-2020-awards/ ?
st207487 | Hello.
No, this is an internal company project (for now). We are trying to perform sentiment analysis on Nigerian languages without translating to English, but we haven't found anything of note.
st207488 | I suggest you contact the HausaNLP Research Group.
It would also be really nice if you would contribute a dataset for these low-resource languages to our datasets collection: "Contribute to the TFDS repository" | TensorFlow Datasets.
More generally, you could start by exploring work such as:
Leveraging Multilingual Resources for Language Invariant Sentiment Analysis. Allen Antony, Arghya Bhattacharya, Jaipal Goud, Radhika Mamidi. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, 2020.
Creating and Evaluating Resources for Sentiment Analysis in the Low-resource... Wazir Ali, Naveed Ali, Yong Dai, Jay Kumar, Saifullah Tumrani, Zenglin Xu. Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2021.
A Survey on Recent Approaches for Natural Language Processing in Low-Resource... Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021.
st207489 | Hi,
Is there a way to use KerasTuner on tensorflow_decision_forests?
Any tutorial?
Thanks
Fadi |
st207490 | Out of interest, I checked that the basic KerasTuner logic described in the Getting started with KerasTuner tutorial works with a decision forest model the same way as with neural networks.
import keras_tuner as kt
import tensorflow_decision_forests as tfdf

def build_model(hp):
    """Initializes the model and defines the search space.

    :param hp: Hyperparameters
    :return: Compiled TensorFlow model
    """
    model = tfdf.keras.GradientBoostedTreesModel(
        num_trees=hp.Int('num_trees', min_value=10, max_value=510, step=50),
        max_depth=hp.Int('max_depth', min_value=3, max_value=16, step=1))
    model.compile(metrics=['accuracy'])
    return model

tuner = kt.RandomSearch(
    build_model,
    objective='val_loss',
    max_trials=5)

tuner.search(X_train, y_train, epochs=1, validation_data=(X_valid, y_valid))
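After the search, the best configuration can be retrieved in the usual KerasTuner way:

best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.values)  # e.g. {'num_trees': ..., 'max_depth': ...}
best_model = build_model(best_hp)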
st207491 | Today I tried to use it in a Kaggle competition: KerasTuner + TF Decision Forest | Kaggle. It's a first version; I think it could be improved with more trials.
st207492 | I was looking for this in order to use it in September's Kaggle competition as well.
Great work, and thanks once again!
st207493 | Sorry for spamming the forum, but I have problems understanding the Eager Few Shot OD Training TF2 tutorial.
For this part:
detection_model = model_builder.build(
    model_config=model_config, is_training=True)

# Set up object-based checkpoint restore --- RetinaNet has two prediction
# `heads` --- one for classification, the other for box regression. We will
# restore the box regression head but initialize the classification head
# from scratch (we show the omission below by commenting out the line that
# we would add if we wanted to restore both heads)
fake_box_predictor = tf.compat.v2.train.Checkpoint(
    _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
    # _prediction_heads=detection_model._box_predictor._prediction_heads,
    #    (i.e., the classification head that we *will not* restore)
    _box_prediction_head=detection_model._box_predictor._box_prediction_head,
    )
fake_model = tf.compat.v2.train.Checkpoint(
    _feature_extractor=detection_model._feature_extractor,
    _box_predictor=fake_box_predictor)
ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)
ckpt.restore(checkpoint_path).expect_partial()

# Run model through a dummy image so that variables are created
image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)
print('Weights restored!')
I don't see how we actually restore the weights. As far as I can understand, we create a checkpoint called fake_model that references pieces of the model itself (a bare ssd_resnet50 architecture with no weights, except random initial values).
We run restore on the provided checkpoint, but this is not linked to the model (detection_model) that is going to be trained in any way — hence we call restore on a checkpoint that is not linked to the model we are going to train?
So the model (detection_model) would not contain any of the weights from the checkpoint file.
In my mind this should be:

fake_box_predictor = tf.compat.v2.train.Checkpoint(
    _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
    # _prediction_heads=detection_model._box_predictor._prediction_heads,
    #    (i.e., the classification head that we *will not* restore)
    _box_prediction_head=detection_model._box_predictor._box_prediction_head,
    )
fake_model = tf.compat.v2.train.Checkpoint(
    _feature_extractor=detection_model._feature_extractor,
    _box_predictor=fake_box_predictor,
    model=detection_model)
fake_model.restore(checkpoint_path).expect_partial()
st207494 | It took me some time to understand this too!
Think of it like this:
detection_model is loaded from a configuration with random weights;
this structure is used as the base for fake_box_predictor and fake_model;
the weights are loaded into fake_model — detection_model's layers are part of fake_model, so its weights are also populated by the load;
finally, a fake image is run through detection_model so that everything is structured properly.
Does that make sense? A small illustration of the loading step follows.
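Here is a minimal standalone illustration (hypothetical toy variables, not the tutorial's code) of why restoring into fake_model also populates detection_model — tf.train.Checkpoint restores by object reference:

import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build([None, 4])
path = tf.train.Checkpoint(box_predictor=layer).save('/tmp/demo_ckpt')

fresh = tf.keras.layers.Dense(4)
fresh.build([None, 4])
fake = tf.train.Checkpoint(box_predictor=fresh)
fake.restore(path).expect_partial()
# `fresh` now holds the saved weights even though we never called restore on
# it directly -- it is reachable from `fake`, exactly like detection_model's
# layers are reachable from fake_model in the tutorial.
tf.debugging.assert_near(layer.kernel, fresh.kernel)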
st207495 | Thank you for your reply! I think I might understand now:
Since we pass detection_model's feature extractor and box predictor (detection_model._feature_extractor and detection_model._box_predictor) into the checkpoint objects, the values of those weights inside detection_model get set to whatever values the checkpoint holds for those specific variables?
And the other weights are still just random initial values, since they are not provided as arguments to the Checkpoint constructors and hence not restored?
Thanks!
st207496 | arXiv: [2109.01652] Finetuned Language Models Are Zero-Shot Learners
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of tasks described via instructions—substantially boosts zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 19 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of tasks and model scale are key components to the success of instruction tuning.
Language models (LMs) at scale, such as GPT-3 (Brown et al., 2020), have been shown to perform few-shot learning remarkably well. They are less successful at zero-shot learning, however. For example, GPT-3’s zero-shot performance is much worse than few-shot performance on tasks such as reading comprehension, question answering, and natural language inference. One potential reason is that, without few-shot exemplars, it is harder for models to perform well on prompts that are not similar to the format of the pretraining data.
…
Our empirical results underscore the ability of language models to perform tasks described using natural language instructions. More broadly, as shown in Figure 2, instruction tuning combines appealing characteristics of the pretrain–finetune and prompting paradigms by using supervision via finetuning to improve the ability of language models to respond to inference-time text interactions.
Model architecture and pretraining. In our experiments, we use a dense left-to-right, decoder-only transformer language model of 137B parameters. This model is pretrained on a collection of web documents (including those with computer code), dialog data, and Wikipedia tokenized into 2.81T BPE tokens with a vocabulary of 32K tokens using the SentencePiece library (Kudo & Richardson, 2018). Approximately 10% of the pretraining data was non-English. This dataset is not as clean as the GPT-3 training set and also has a mixture of dialog and code, and so we expect the zero and few-shot performance of this pretrained LM on NLP tasks to be slightly lower. We henceforth refer to this pretrained model as Base LM. This same model was also previously used for program synthesis (Austin et al., 2021).
Instruction tuning procedure. FLAN is the instruction-tuned version of Base LM. Our instruction tuning pipeline mixes all datasets and randomly samples examples from each dataset. Some datasets have more than ten million training examples (e.g., translation), and so we limit the number of training examples per dataset to 30,000. Other datasets have few training examples (e.g., CommitmentBank only has 250), and so to prevent these datasets from being marginalized, we follow the examples-proportional mixing scheme (Raffel et al., 2020) with a mixing rate maximum of 3,000.3 We finetune all models for 30,000 gradient updates at a batch size of 8,192 using the Adafactor Optimizer (Shazeer & Stern, 2018) with a learning rate of 3e-5. The input and target sequence lengths used in our finetuning procedure are 1024 and 256 respectively. We use packing (Raffel et al., 2020) to combine multiple training examples into a single sequence, separating inputs from targets using a special end-of-sequence token.
st207497 | Source code for loading the instruction-tuning dataset used for FLAN is made publicly available at
https://github.com/google-research/flan
But it isn't.
st207498 | Hi team,
We have trained the ssd_mobilenet_v3 object detection model on the VOC dataset and exported it as a .tflite file, which I have attached below. When running the model, we're facing an error in the reshape operator, as shown below:
.reshape.cpp:70 num_input_elements != num_output_elements (126 != 21)
.reshape.cpp:77 ReshapeOutput(context, node) != kTfLiteOk (1 != 0)
Node RESHAPE (number 147f) failed to prepare with status 1
AllocateTensors() failed
Failed initializing model
When we looked into the model using a model visualization tool, we think this is the failing node.
How can we solve this error? Please guide us. Thanks in advance.
Regards,
Ramson Jehu K
st207499 | I am using the few-shot OD training example to try to detect multiple objects in a picture (Google Colab).
I am using the default rubber ducks plus an image containing 6 rubber ducks. After labeling all the ducks, I converted the annotations into a TFRecord and then loaded the TFRecord file into the application to train the model.
However, if I don't add the image with 6 ducks and stick to 1 duck per image, training works. But if I add the image with 6 ducks, I get the following error:
./custom_training.py:225 train_step_fn *
losses_dict = model.loss(prediction_dict, shapes)
/home/hoster/.local/lib/python3.8/site-packages/object_detection/meta_architectures/ssd_meta_arch.py:842 loss *
(batch_cls_targets, batch_cls_weights, batch_reg_targets,
/home/hoster/.local/lib/python3.8/site-packages/object_detection/meta_architectures/ssd_meta_arch.py:1044 _assign_targets *
groundtruth_boxlists = [
/home/hoster/.local/lib/python3.8/site-packages/object_detection/core/box_list.py:56 __init__ **
raise ValueError('Invalid dimensions for box data: {}'.format(
ValueError: Invalid dimensions for box data: (1, 6, 4)
Here are the label and bounding box lists:
Labels: [<tf.Tensor: shape=(1, 6), dtype=float32, numpy=array([[1., 0., 0., 0., 0., 0.]], dtype=float32)>, <tf.Tensor: shape=(1, 6), dtype=float32, numpy=array([[1., 0., 0., 0., 0., 0.]], dtype=float32)>, <tf.Tensor: shape=(1, 6), dtype=float32, numpy=array([[1., 1., 1., 1., 1., 1.]], dtype=float32)>]
Bounding boxes:` [<tf.Tensor 'groundtruth_boxes_list:0' shape=(1, 6, 4) dtype=float32>, <tf.Tensor 'groundtruth_boxes_list_1:0' shape=(1, 6, 4) dtype=float32>, <tf.Tensor 'groundtruth_boxes_list_2:0' shape=(1, 6, 4) dtype=float32>]`
How can I make the model accept multiple labels in an image?
I am using the EfficientDet-D2 model.
Thanks for any help!
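For reference — my reading of the error, not a confirmed fix: box_list.BoxList expects per-image tensors of shape [num_boxes, 4], so the extra leading batch dimension in (1, 6, 4) looks like the likely culprit. A hedged sketch of squeezing it out before handing the ground truth to the model (variable names below are assumptions):

import tensorflow as tf

gt_boxes = [tf.squeeze(b, axis=0) for b in gt_box_tensors]  # (1, 6, 4) -> (6, 4)
# Class tensors should be one-hot of shape [num_boxes, num_classes],
# e.g. (6, 1) for a single 'rubber duck' class -- not (1, 6).
gt_classes = [tf.reshape(c, [-1, num_classes]) for c in gt_class_tensors]

detection_model.provide_groundtruth(
    groundtruth_boxes_list=gt_boxes,
    groundtruth_classes_list=gt_classes)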