st206000 | I meant that I am NOT using the tf.experimental.tensorrt.Converter in the post above. |
st206001 | I was looking at the sample in: Action Recognition with an Inflated 3D CNN | TensorFlow Hub 5
and it works just fine, but how do I run this against a live feed? Is that possible? And how do I train my own model if required? |
st206002 | For streaming I suggest you to take a look at the streaming models in:
https://tfhub.dev/google/collections/movinet/ 8
You can finetune these on your data or train from scratch. |
st206003 | Thank you for answering. I did review that one, but I am a bit confused. I would like to test this against a webcam on a live feed. I don't see how the code would work for this, as I believe it will read the whole video, but in live data there is no end. |
st206004 | You can pass a stream chunk as you can see in the example at:
https://tfhub.dev/tensorflow/movinet/a5/stream/kinetics-600/classification/2 6
You need to access the camera with your own code (OpenCV, TFIO, Video4Linux, etc.)
If instead you want to run this on Android, you need to use TF Lite and write your own demo/example.
You can also try to use Mediapipe if you like:
Medium – 21 Feb 21
MediaPipe with custom tflite model 3
Getting started with MediaPipe and using it with your own tflite model
Reading time: 9 min read |
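For illustration, a minimal sketch of feeding a webcam into the streaming MoViNet. Assumptions (verify on the model page): OpenCV is installed, the module exposes an init_states function, its call takes an "image" tensor of shape [batch, frames, H, W, 3] plus the state tensors, and it returns logits together with the updated states.
```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load(
    "https://tfhub.dev/tensorflow/movinet/a5/stream/kinetics-600/classification/2")
# Stream models carry state between chunks; exact shapes/keys come from the model page.
states = model.init_states(tf.constant([1, 1, 224, 224, 3]))

cap = cv2.VideoCapture(0)  # live feed: there is no "end", we just loop
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    clip = tf.constant(frame[np.newaxis, np.newaxis], tf.float32) / 255.0  # [B,T,H,W,C]
    logits, states = model({**states, "image": clip})  # state feeds the next frame
    print(int(tf.argmax(logits, axis=-1)[0]))
cap.release()
```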
st206005 | Thank you very much, this TF Hub is new to me but this seems to be the solution.
I really appreciate your time, thank you. |
st206006 | After post-training quantization, is it possible to change the dense-layer weights in TF Lite models?
An example of what I would like to do:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path=Flags.tfl_file_name)
interpreter.allocate_tensors()
tensor_details = interpreter.get_tensor_details()
weight_idx = 0
for tensor in tensor_details:
    if tensor['name'] == 'sequential/dense/MatMul':
        weight_shape = tensor['shape']
        weight_idx = tensor['index']
weight = interpreter.get_tensor(weight_idx)
weight = np.zeros(weight_shape, dtype='int8')
print(weight)
interpreter.set_tensor(weight_idx, weight)
This feature is needed for my hardware-accelerated Fully_Connected kernel. |
st206007 | Hi all,
I have a pandas DataFrame with features as columns and rows as observations. One of the columns is a Series where each element is a 512-long tf.Tensor. I am trying to pass this Tensor vector, along with the other features, into a tf.estimator.BoostedTreesClassifier model. However, I am receiving the following error when passing the tf.Tensor column:
AttributeError: Tensor.name is meaningless when eager execution is enabled.
Below is a reproducible example. Many thanks in advance for your help!
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
df = pd.DataFrame({"Text": ['This is text one', 'This is text two', 'And well, this is just the third text']})
model_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
encodings = tf.keras.Sequential(
    [
        tf.keras.layers.InputLayer(dtype=tf.string),
        hub.KerasLayer(model_url, input_shape=[], dtype=tf.string),
    ]
)

def encodes_text(txt):
    return encodings(tf.constant([txt]))

df['embeddings'] = df['Text'].map(lambda x: encodes_text(x))

tree_class = tf.estimator.BoostedTreesClassifier(
    df['embeddings'],
    max_depth=3,
    n_classes=2,
    n_trees=50,
    n_batches_per_layer=1
) |
st206008 | If you’re just getting started on this project my advice is don’t use anything in tf.estimator. Use TensorFlow Decision Forests 3 which takes advantage of modern APIs.
If you're going to ignore that advice and do it with tf.estimator anyway, then the fix is to note that the first argument isn't meant to be the data. It's meant to be a list of tf.feature_column objects that describe how the model should process the data.
See:
tf.estimator.BoostedTreesClassifier | TensorFlow Core v2.5.0 2
Module: tf.feature_column | TensorFlow Core v2.5.0 3 |
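A minimal sketch of the TF-DF route (assumes tensorflow_decision_forests is installed; the column names here are illustrative, not from the original post):
```python
import pandas as pd
import tensorflow_decision_forests as tfdf

df = pd.DataFrame({"feature": [0.1, 0.7, 0.2, 0.9], "label": [0, 1, 0, 1]})
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")

# Hyperparameters mirror the estimator arguments from the question.
model = tfdf.keras.GradientBoostedTreesModel(num_trees=50, max_depth=3)
model.fit(train_ds)
model.summary()
```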
st206009 | Many thanks, @markdaoust for the pointers! I’ll be happy to use tfdf instead, given this model will be run on a linux cloud.
Incidentally, will tf.estimator models be deprecated? And is your advice not to use them just based on a new API being available via TF-DF, or on other things like model performance, stability, etc.? |
st206010 | Estimators are fundamentally a TF1 thing. Supporting TF1 takes resources we’d rather spend on making TF2 better. We’d like to resolve that eventually. The less estimator code there is out there the easier that will be. |
st206011 | Hello all: this is my first post and I am happy to share some cool stuff for you.
We have a YouTube channel built/maintained by Machine Learning GDEs.
Feel free to check it out and subscribe at
YouTube
ML GDEs 3
Machine Learning (ML) Google Developers Experts (GDEs) are a global network of experts who are passionate about helping developers with ML. This channel 1) includes video content uploaded by ML GDEs, and 2) features talks by GDEs from other...
Thank you! |
st206012 | @Soonson_Kwon massive share! Thank you.
Just a small nit: if you put the links on separate lines, the forum will generate nice previews. For this post I have taken care of it. |
st206013 | I have to define a custom F1 metric in Keras for a multiclass classification problem. Since it is a streaming metric, the idea is to keep track of the true positives, false negatives and false positives so as to gradually update the F1 score batch after batch. Here's the code:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import numpy as np
import tensorflow as tf
from tensorflow import keras

data = load_iris()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

def compute_confusion_matrix(true, pred, K):
    result = tf.zeros((K, K), dtype=tf.int32)
    for i in range(len(true)):
        result = tf.tensor_scatter_nd_add(tensor=result,
                                          indices=tf.constant([[true[i], pred[i]]]),
                                          updates=tf.constant([1]))
    return result

def f1_function(y_true, y_pred):
    k = 3
    y_pred_lab = np.argmax(y_pred, axis=1)
    y_true = np.ravel(y_true)
    conf_mat = compute_confusion_matrix(y_true, y_pred_lab, K=k)
    tp = tf.linalg.tensor_diag_part(conf_mat)
    fp = tf.reduce_sum(conf_mat, axis=0) - tp
    fn = tf.reduce_sum(conf_mat, axis=1) - tp
    support = tf.reduce_sum(conf_mat, axis=1)
    return tp, fp, fn, support

class F1Metric(keras.metrics.Metric):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.f1_fn = f1_function
        self.tp_count = self.add_weight("tp_count", initializer="zeros", shape=(3,), dtype=tf.float32)
        self.fp_count = self.add_weight("fp_count", initializer="zeros", shape=(3,), dtype=tf.float32)
        self.fn_count = self.add_weight("fn_count", initializer="zeros", shape=(3,), dtype=tf.float32)
        self.support_total = self.add_weight("support_total", initializer="zeros", shape=(3,),
                                             dtype=tf.float32)

    def update_state(self, y_true, y_pred, sample_weight=None):
        tp, fp, fn, support = self.f1_fn(y_true, y_pred)
        print(tp)
        print(self.tp_count)
        self.tp_count.assign_add(tf.cast(tp, dtype=tf.float32))
        self.fp_count.assign_add(tf.cast(fp, dtype=tf.float32))
        self.fn_count.assign_add(tf.cast(fn, dtype=tf.float32))
        self.support_total.assign_add(tf.cast(support, dtype=tf.float32))

    def result(self):
        precisions = self.tp_count / (self.tp_count + self.fp_count)
        recalls = self.tp_count / (self.tp_count + self.fn_count)
        f1 = tf.constant(2, dtype=tf.float32) * (precisions * recalls) / (precisions + recalls)
        weighted_f1 = (f1 * self.support_total) / tf.reduce_sum(tf.cast(self.support_total, dtype=tf.float32))
        return recalls

model = keras.models.Sequential([
    keras.layers.Dense(200, activation="relu", input_shape=X_train.shape[1:]),
    keras.layers.Dense(4, activation="softmax")
])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10,
                                                  restore_best_weights=True)

# compile the model
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=[F1Metric()],
              run_eagerly=True)

# fit the model
history = model.fit(X_train, y_train, epochs=100,
                    validation_split=0.1,
                    callbacks=[early_stopping_cb])
It gives the following error:
“Cannot assign to variable tp_count:0 due to variable shape (3,) and value shape () are incompatible”
Alternatively, I tried to use the tfa F1 metric but I can’t use it in a grid search (indeed I want to find the optimal model architecture and I want to use the f1 metric as the scorer) since it gives the following error:
“ValueError: The list/tuple elements must be unique strings of predefined scorers. One or more of the elements were callables. Use a dict of score name mapped to the scorer callable. Got [<tensorflow_addons.metrics.f_scores.F1Score object at 0x7f8ac9516be0>]”
Any idea? Thank you |
st206014 | Did you try using the version from TensorFlow Addons?
TensorFlow
tfa.metrics.F1Score | TensorFlow Addons 5
Computes F-1 Score. |
st206015 | It was removed from Keras some years ago:
github.com/keras-team/keras
Precision, Recall and F1 Metrics Removed 1
opened
Mar 15, 2017
closed
Mar 16, 2017
Lif3line
It appears Precision, Recall and F1 metrics have been removed from metrics.py as of today but I couldn't find any reference to their removal in the commit logs. Was this intentional?
Also, in Addons it is used in a callback (see point 2):
github.com/tensorflow/addons
Problem with using Tensorflow addons' metrics correctly in functional API 2
opened
Dec 30, 2019
JoBerkner
bug
metrics
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04…): **Windows 10 / Google Colab**
- TensorFlow version and how it was installed (source or binary): **2.1.0-rc1 (pip install)**
- TensorFlow-Addons version and how it was installed (source or binary): **0.6.0 (pip install)**
- Python version: **3.6.9**
- Is GPU used? (yes/no):
**Describe the bug**
I have an LSTM model to perform binary classification of human activities using multivariate smartphone sensor data. The two classes are imbalanced (1:50). Therefore I would like to use F1-score as a metric, which is why I came across the TensorFlow Addons.
I now have a problem to apply this score to my functional API.
If I use another value for the metric argument `average` (e.g., `average=None` or `average="macro"`) then I get an error message when fitting the model:
> ValueError: Dimension 0 in both shapes must be equal, but are 2 and 1. Shapes are [ 2 ] and [ 1 ]. for 'AssignAddVariableOp' (op: 'AssignAddVariableOp') with input shapes: [ ], [ 1 ].
And if I use the value `average="micro"` I am not getting the error, but the F1-score is `0` throughout the learning process, while my loss decreases.
I believe I am still doing something wrong here. Can anybody provide an explanation for me?
**Code to reproduce the issue**
```
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

def create_model(n_neurons=150, learning_rate=0.01, activation="relu", loss="binary_crossentropy"):
    # create input layer and assign to current output layer
    input_ = keras.layers.Input(shape=(X_train.shape[1], X_train.shape[2]))
    # add LSTM layer
    lstm = keras.layers.LSTM(n_neurons, activation=activation)(input_)
    # Output Layer
    output = keras.layers.Dense(1, activation="sigmoid")(lstm)
    # Create Model
    model = keras.models.Model(inputs=[input_], outputs=[output])
    # Add optimizer
    optimizer = keras.optimizers.SGD(lr=learning_rate, clipvalue=0.5)
    # Compile model
    model.compile(loss=loss, optimizer=optimizer,
                  metrics=[tfa.metrics.F1Score(num_classes=2, average="micro")])
    print(model.summary())
    return model

# Create the model
model = create_model()

# fit the model
history = model.fit(X_train, y_train,
                    epochs=300,
                    validation_data=(X_val, y_val))
```
**Other info / logs** |
st206016 | I’ve read the issue #825 in the second link and it says that there are no problems related to the tfa implementation of the F1 metric when used together with tf.keras instead of multi-backend keras. However, I still haven’t figured out how to make it work in a grid search and that’s the reason why I tried a custom implementation. Is there a way to solve the problem? Thank you again. |
st206017 | This is an example search with Kerastuner:
github.com/keras-team/autokeras
F1 score support for objective 10
opened
Dec 24, 2019
closed
Jul 28, 2020
alexcombessie
bug report
pinned
Today objective = "val_f1" returns an error
Failed to train : <class 'ValueError'> : Could not infer optimization direction ("min" or "max") for unknown metric "val_f1". Please specify the objective as a `kerastuner.Objective`, for example `kerastuner.Objective("val_f1", direction="min")`.
I think you can try the same with TFA or use the custom impl. |
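For reference, a sketch of the pattern from that issue applied here (assumes keras_tuner, formerly kerastuner, is installed; `build_model` is a hypothetical model-building function whose compiled model reports a metric named "f1", e.g. tfa.metrics.F1Score(num_classes=3, average="macro", name="f1")):
```python
import keras_tuner as kt

tuner = kt.RandomSearch(
    build_model,  # hypothetical: returns a compiled model with a metric named "f1"
    objective=kt.Objective("val_f1", direction="max"),  # direction must be explicit
    max_trials=10)
tuner.search(X_train, y_train, validation_split=0.1)
```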
st206018 | Hello,
The MaxPool2D layer does not have a dilations argument. Is there any equivalent layer to MaxPool2D with a dilations argument?
Have a nice day. |
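One possible workaround, not from this thread (a sketch, assuming tf.nn.pool's `dilations` argument covers the use case):
```python
import tensorflow as tf

# tf.nn.pool supports dilations (strides must then be 1); wrapping it in a
# Lambda layer lets it sit inside a Keras model where MaxPool2D would go.
dilated_max_pool = tf.keras.layers.Lambda(
    lambda t: tf.nn.pool(t, window_shape=[2, 2], pooling_type="MAX",
                         strides=[1, 1], padding="VALID", dilations=[2, 2]))

x = tf.random.normal((1, 8, 8, 3))
print(dilated_max_pool(x).shape)  # (1, 6, 6, 3): the effective window is 3x3
```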
st206019 | From the documentation 8:
Bool. Defaults to False . If True , this Model 's logic will not be wrapped in a tf.function 8. Recommended to leave this as None unless your Model cannot be run inside a tf.function 8. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy 7.
What is the significance of being wrapped in a tf.function, and what are the practical advantages/disadvantages of setting run_eagerly = True? |
st206020 | When something is wrapped inside tf.function it has the advantage of being run in graph mode. All the backend compilation engineering is handled by TensorFlow itself in this case. The advantage is that when all the operations are available as a graph, we know how many resources to allocate and how to best optimize the graph with the available resources. For more details refer to the following:
TensorFlow
Better performance with tf.function | TensorFlow Core 18
run_eagerly=True lets you figure out what exactly is going on inside your model training loop. Let's say you have implemented a custom loop and put that inside the train_step() method of a subclassed model. Setting run_eagerly to True will help you debug that loop if anything goes wrong. For practical applications of this, refer to the following guide:
keras.io
Keras documentation: Keras debugging tips 19 |
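A tiny sketch of the difference: with run_eagerly=True the Python print below fires on every batch; with False it fires only once, while the function is traced.
```python
import tensorflow as tf

class DebugModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        print("seeing a batch of shape", x.shape)  # plain Python side effect
        return super().train_step(data)

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = DebugModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(tf.random.normal((16, 4)), tf.random.normal((16, 1)), epochs=1, verbose=0)
```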
st206021 | Thanks very much! From the second link,
Thankfully, there’s an easy way to run your code in “debug mode”, fully eagerly: pass run_eagerly=True to compile(). Your call to fit() will now get executed line by line, without any optimization. It’s slower, but it makes it possible to print the value of intermediate tensors, or to use a Python debugger. Great for debugging.
does this mean that when run_eagerly = False, values of intermediate tensors in the train_step() cannot be saved? If I save some intermediate tensor(s) as an instance variable in a subclassed Model object, can I extract the values of these tensors after .fit() is complete as say a numpy array? |
st206022 | I slightly modified the first example from this tutorial 8 by saving the y_pred from each epoch
import numpy as np
import tensorflow as tf
from tensorflow import keras

class CustomModel(keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.saved_pred = []

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        self.saved_pred.append(y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)
Here’s the output of model.saved_preds:
ListWrapper([<tf.Tensor 'custom_model_1/dense_5/BiasAdd:0' shape=(None, 1) dtype=float32>, <tf.Tensor 'custom_model_1/dense_5/BiasAdd:0' shape=(None, 1) dtype=float32>])
There’s no numpy attribute for the Tensors in this list, so I’m wondering if it’s possible to extract their values. |
st206023 | Looks like a list. Can you do something like model.saved_preds[0].numpy()? The model is supposed to be returning predictions, i.e. some distribution or a set of values, which should not contain symbolic operations like custom_model_1/dense_5/BiasAdd:0. |
st206024 | When trying .numpy() and tf.keras.backend.get_values(), I got an error saying that Tensor object has no attribute numpy. I also tried .eval which gave this error:
ValueError: Cannot use the given session to evaluate tensor: the tensor’s graph is different from the session’s graph. |
st206025 | Since you are running with run_eagerly=True now, you can print self.saved_preds inside your train_step() function directly for a better debugging experience. Could you do that while calling .fit(), which I suppose you are doing currently? |
st206026 | But if I want to set run_eagerly = False (for deployment), then there’s no way to extract the value of the Tensors from self.saved_preds? |
st206027 | Why would you wanna do that when you can simply call .predict() or even the model(x) directly? Just trying to better understand the situation here. |
st206028 | Sorry, I gave that as a simplified example. I would like to see if it's possible to save intermediate Tensor results in train_step(), with y_pred being a stand-in for any arbitrary Tensor computed inside train_step() in a subclassed Model class. |
st206029 | Yeah, it should be possible. I have been able to save entire models with train_step(). Here’s one such example:
keras.io
Keras documentation: Self-supervised contrastive learning with SimSiam 12 |
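One minimal pattern for keeping an intermediate tensor readable after fit() even with run_eagerly=False (a sketch; it assumes a fixed batch size so the tracking Variable has a static shape):
```python
import tensorflow as tf

BATCH = 32  # assumption: fixed batch size

class TrackingModel(tf.keras.Model):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # A non-trainable Variable can be assigned in graph mode and read later.
        self.last_pred = tf.Variable(tf.zeros((BATCH, 1)), trainable=False)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.last_pred.assign(y_pred)  # works inside tf.function too
        return {"loss": loss}

inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = TrackingModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=False)
model.fit(tf.random.normal((320, 8)), tf.random.normal((320, 1)),
          batch_size=BATCH, epochs=1, verbose=0)
print(model.last_pred.numpy()[:2])  # concrete values, read after fit() is done
```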
st206030 | From the documentation on tf.keras.layers.Embedding 2:
input_dim:
Integer. Size of the vocabulary, i.e. maximum integer index + 1.
mask_zero:
Boolean, whether or not the input value 0 is a special “padding” value that should be masked out. This is useful when using recurrent layers which may take variable length input. If this is True, then all subsequent layers in the model need to support masking or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input_dim should equal size of vocabulary + 1).
If my vocabulary size is n but they are encoded with index values from 1 to n (0 is left for padding), is input_dim equal to n or n+1? The maximum integer index + 1 part of the documentation is confusing me.
If the inputs are padded with zeroes, what are the consequences of leaving mask_zero = False?
If mask_zero = True, based on the documentation, I would have to increment the answer from my first question by one? What is the expected behaviour if this was not done? |
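A small sketch of the "maximum integer index + 1" rule for this exact setup (indices 1..n with 0 reserved for padding, so input_dim = n + 1):
```python
import tensorflow as tf

n = 5  # vocabulary size; tokens are encoded 1..n, 0 is padding
emb = tf.keras.layers.Embedding(input_dim=n + 1, output_dim=2, mask_zero=True)
out = emb(tf.constant([[1, 5, 0, 0]]))  # index n=5 is valid, 0 gets masked
print(out._keras_mask)  # [[True, True, False, False]]
```
With mask_zero=False the zeros are embedded like any other index, so the padded positions contribute values downstream unless a later layer handles masking itself.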
st206031 | Hello everyone,
what is the term for an autoencoder that is trained not to reconstruct X, but to create another output Y instead? I've heard the term somewhere, but can't remember it.
For example, the autoencoder is trained on the MNIST dataset not to reconstruct the given digit as X_hat, but to create an image of another digit Y.
The loss function would look like this: |
st206032 | Hi, I am working on an RL model in TF. I am working on a pointer network (that outputs a sequence of indices). When training the model, I want to build a custom reward function where the TF output sequences can be passed through a different function individually. For example, if the output is [1,2,3,4], I want to pass 1, 2, 3, and 4 individually to a function, say F, that gives out reward values for 1, 2, 3, 4 individually. However, I get the error:
Cannot convert a symbolic Tensor (strided_slice_1:0) to a numpy array. This error may indicate that you’re trying to pass a Tensor to a NumPy call, which is not supported
I am not able to convert the output into a NumPy-type array which I can pass to the custom function. I have seen it can be done directly in PyTorch, but I tried everything I could find on Stack Overflow and other places and could not figure out how to do it in TensorFlow. Let me know if someone can help with this. Some code:
# here I am getting the sequence of indices for a batch
for step in range(1, self.max_length):  # sample from POINTER
    query = tf.nn.relu(tf.matmul(query1, W_1) + tf.matmul(query2, W_2) + tf.matmul(query3, W_3))
    logits = pointer(encoded_ref=encoded_ref, query=query, mask=self.mask_,
                     W_ref=W_ref, W_q=W_q, v=v, C=self.C, temperature=self.temperature)
    prob = distr.Categorical(logits)  # logits = masked_scores
    idx = prob.sample()
    idx_list.append(idx)  # tour index
    log_probs.append(prob.log_prob(idx))  # log prob
    entropies.append(prob.entropy())  # entropies
    self.mask_ = self.mask_ + tf.one_hot(idx, self.max_length)  # mask
    idx_ = tf.stack([tf.range(self.batch_size, dtype=tf.int32), idx], 1)  # idx with batch
    query3 = query2
    query2 = query1
    query1 = tf.gather_nd(actor_encoding, idx_)  # update trajectory (state)
idx_list.append(idx_list[0])  # return to start
self.tour = tf.stack(idx_list, axis=1)  # permutations
I want to pass this tour (which has shape batch_size × input_dimension × dimension) and return reward values of shape [batch_size].
thank you! Any pointer or help is highly appreciated |
st206033 | I am unable to download ANY tfds dataset.
In windows 10 pro, VScode, python 3.8.8 (also 3.7), TF 2.4.1 (also 2.3 and 2.1)
I get this error:
Failed to rename: d:/data/tensorflow/mnist\1.0.0.incomplete7247AV to: d:/data/tensorflow/mnist\1.0.0 : Access is denied.
From this line of code:
(ds_train, ds_test), ds_info = tfds.load("mnist", split=["train", "test"], shuffle_files=True,
                                         data_dir='d:/data/tensorflow/', as_supervised=True, with_info=True)
Full permissions are assigned to Everyone. Notice how the slashes change from '/' to '\'. Is there anyone using Windows who is successfully downloading the tfds datasets? |
st206034 | I think the issue may be caused by something else and not the OS. Can you please try to uninstall and reinstall tfds as mentioned here 5 |
st206035 | An update: While using VS Code, I'm still unable to download any dataset from tfds. I switched to Spyder. Using the SAME code file I am able to download cifar100, mnist, and titanic. I still get the same error mentioned above with other datasets, including cifar10. Thanks for any suggestions. |
st206036 | Is there a way to run bazel test in a git checkout without compiling TensorFlow?
E.g. just using an installed TensorFlow wheel with pip install tensorflow. |
st206037 | But it is very inconvenient to copy single test files or the whole module back and forth every time.
It is also inconvenient to change the related feature code directly in the target directory where the wheel is installed.
I was looking for a way to do everything Python-related in the source directory, using the C++ part (the .so libraries) installed by the wheel (without compiling and packaging the whole of TF in-source). |
st206038 | We are discussing a PR at Run pytest targets without compiling by bhack · Pull Request #50163 · tensorflow/tensorflow · GitHub 6 /cc @angerson @mihaimaruseac @perfinion |
st206039 | We are doing this for nightly builds: build a pip package, install it and then run TF tests against the package.
See tensorflow/tools/ci_build/rel/ubuntu/cpu_py36_pip.sh and https://cs.opensource.google/tensorflow/tensorflow/+/master:tensorflow/tools/ci_build/builds/pip_new.sh;l=483;drc=2a21421f01df1f3cc43f2cff42f62afec24247dd 4 |
st206040 | But we need something more.
When working on a Python PR we also need to edit Python source files in the checkout dir, not only Python tests.
Instead, I think when we run these tests we are still using the Python files installed from the wheel. |
st206041 | Oh, definitely.
A thing I saw helped was to copy the edited python files to their equivalent in .../site-packages/tensorflow/... (or lib-packages, depending on sandboxing, if present) to fake an updated wheel |
st206042 | Yes, that is the hack I use every day. But if I need to copy files back and forth every time, it is quite useless.
Can we just use only the .so installed from the wheel? |
st206043 | Yes it doesn’t work.
That’s why we have this thread.
Any hint on how we could achieve this is very appreciated, I will expand the PR. |
st206044 | We’ll probably need to eliminate big shared objects and the API generation step, these seem to be the bottlenecks and requiring to build these for every test is what causes most of TF to build for just one test.
This is a huge effort though, don’t know if we can put a timeline on it. |
st206045 | I suppose that we only need a bazel build option to build without the .so target, or not?
Then we could find a workaround to load the .so from the wheel. |
st206046 | There is a --build_tests_only option but I am unsure how it works with regards to the .so dependencies |
st206047 | mihaimaruseac:
--build_tests_only
Is what we have in tensorflow/tools/ci_build/builds/run_pip_tests.sh, so I don't think we can solve it with this. |
st206048 | I think that as a first step we could technically evaluate what we need to do at the dependency level in bazel to compile this target with a new option:
bazel build //tensorflow/python:all
without triggering TF c++ targets compilation. |
st206049 | I've closed the PR as the current design doesn't let us separate Python and C++ targets. |
st206050 | I was thinking we should be able to make
bazel build //tensorflow/python/some:test
only build the C++ bits needed for that test and nothing else. So, instead of compiling all kernels and generating huge libraries and then generating all TF Python API we’ll only compile the needed kernel and generate a small subset of TF that provides the vertical needed for this test |
st206051 | I had a similar idea, but I supposed that the refactoring impact on the bazel dependencies was roughly the same (and so too big to start working on without pre-approval).
Is this simpler? |
st206052 | E.g. Yesterday I was working on def_function.py and def_function_test.py
If you query the dependencies it is really hard to find a min cut point in the graph :
bazel query "deps(//tensorflow/python/eager:def_function) --output graph"
bazel query "deps(//tensorflow/python/eager:def_function_test) --output graph"
bazel query "buildfiles(deps(//tensorflow/python/eager:def_function)) --output package"
bazel query "buildfiles(deps(//tensorflow/python/eager:def_function_test)) --output package"
Probably it is not the easiest target or the best query command, but I don't think it is much easier to find a min-cut for other targets either. |
st206053 | It’s not easy right now. I estimate ~2 quarters worth of work to fix this issue but I hope the gains would justify this time |
st206054 | Hi Sayak,
I have running .pb model file converted into .tflite file but my model was showing error this
" KeyError: “The name ‘TfPoseEstimator/split:0’ refers to a Tensor which does not exist. The operation, ‘TfPoseEstimator/split’, does not exist in the graph.”
Can you please help me out how to solve it.
I have using tflite_convert to convert .pb to .tflite file.
Even input tensor name ‘split’ exist in the model graph. |
st206055 | Hi @Sumit_Singh - I moved this to a new topic for you (we like to make sure that each thread on the forum stays on-topic). Thanks for coming to our community for help! |
st206056 | Have you tried the following?
Run the conversion using TensorFlow 2.5?
Run the conversion using tf-nightly?
Run the conversion using Flex ops 3?
Also, if it’s possible please share a Colab Notebook demoing the conversion process you are currently running. |
st206057 | I have shared a Colab notebook that converts the model to .tflite.
Can you share your email ID? |
st206058 | Given a vector = [1, 2, 3, 4, 5, 6]
Is there a function which can convert it into a Toeplitz matrix of 4 columns as given below?
[ [1,2,4,5], [2,4,5,6], [3,4,5,6] ]
I want to apply this transform to a batch of vectors. |
st206059 | No, that does this
col = [1., 2., 3.]
row = [1., 4., -9.]
operator = LinearOperatorToeplitz(col, row)
operator.to_dense()
# [[1., 4., -9.], [2., 1., 4.], [3., 2., 1.]]
my requirement is different. |
st206060 | Currently I use the following function to do it. I would like to know whether there is any builtin function that does it, so that I can avoid the for loop used here.
tf.stack([Inf[i:i + width] for i in range(length - width + 1)]) |
st206061 | I don’t understand your 4 columns output Matrix example.
[ [1,2,4,5], [2,4,5,6], [3,4,5,6] ]
Is it a Toeplitz matrix? |
st206062 | I still think the operation mentioned is the solution (it's actually more general than what the OP asks for, but can be made to work for the OP's case):
>>> import tensorflow as tf
>>> v = [1, 2, 3, 4, 5,6]
>>> tf.linalg.LinearOperatorToeplitz(v, v).to_dense()
<tf.Tensor: shape=(6, 6), dtype=int32, numpy=
array([[1, 2, 3, 4, 5, 6],
[2, 1, 2, 3, 4, 5],
[3, 2, 1, 2, 3, 4],
[4, 3, 2, 1, 2, 3],
[5, 4, 3, 2, 1, 2],
[6, 5, 4, 3, 2, 1]], dtype=int32)> |
st206063 | Let's say the input vector is
v = [1, 2, 3, 4, 5, 6, 7, ..., 100]
If I want to get an output matrix with 6 columns, then the output is
[ [ 1, 2, 3, 4, 5, 6 ],
  [ 2, 3, 4, 5, 6, 7 ],
  [ 3, 4, 5, 6, 7, 8 ],
  ...,
  [ 95, 96, 97, 98, 99, 100 ] ]
I don't think tf.linalg.LinearOperatorToeplitz(v, v).to_dense() will give the above output. |
st206064 | I made a mistake. That is not a Toeplitz matrix.
I am looking for the TensorFlow equivalent of the below method in PyTorch:
x = torch.arange(1., 8)
x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
x.unfold(0, 2, 1)
tensor([[ 1., 2.],
[ 2., 3.],
[ 3., 4.],
[ 4., 5.],
[ 5., 6.],
[ 6., 7.]])
x.unfold(0, 2, 2)
tensor([[ 1., 2.],
[ 3., 4.],
[ 5., 6.]]) |
st206065 | For these specific input cases you can use
import tensorflow as tf
x = tf.range(1,8)
out_frame_2_1 = tf.signal.frame(x, 2, 1)
out_frame_2_2 = tf.signal.frame(x, 2, 2)
print(out_frame_2_1)
print(out_frame_2_2)
But more generally, for PyTorch's unfold, see
github.com/tensorflow/tensorflow
torch.unfold function is needed..
opened
Jul 26, 2019
closed
Aug 6, 2019
seolhokim
API review
TF 1.14
comp:apis
type:feature
Please make sure that this is a feature request. As per our GitHub Policy (https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template
**System information**
- TensorFlow version (you are using):1.14
- Are you willing to contribute it (Yes/No):Yes
**Describe the feature and the current behavior/state.**
I think sending the function page in torch is the best. https://pytorch.org/docs/stable/_modules/torch/nn/modules/fold.html
**Will this change the current api? How?**
I guess it doesn't.
**Who will benefit with this feature?**
I think it will be popular in Vision Models, cause self attention is arising now to find out relationship in input pixels.(Stand-Alone Self-Attention in Vision Models)
**Any Other info.**
TensorFlow
tf.image.extract_patches | TensorFlow Core v2.5.0
Extract patches from images.
For more complex use cases you could have some performance issues to solve. See:
github.com/pytorch/xla
Lowering `unfold`
opened
Jun 18, 2020
ibeltagy
nostale
op lowering
## 🚀 Feature
Add a lowering for `unfold`.
## Motivation
I want to run Longformer ([model code on HF repo](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py)) on pytorch-xla, and this requires an overlapping sliding window operation which needs a lowering for `unfold`.
## Pitch
Add a lowering for `unfold`
## Alternatives
Use `as_strided` but the current implementation is limited as discussed in [this issue](https://github.com/pytorch/xla/issues/2238).
## Additional context
Below is the metric report for the forward pass of Longformer with `unfold`. It has entries for `aten::unfold`.
```
Metric: CompileTime [194/1996]
TotalSamples: 40
Accumulator: 06m12s060ms761.186us
ValueRate: 985ms703.787us / second
Rate: 0.105865 / second
Percentiles: 1%=002ms604.019us; 5%=002ms103.276us; 10%=002ms209.085us; 20%=031ms487.158us; 50%=11s482ms222.482us; 80%=14s789ms136.836us; 90%=14s427ms259.848us; 95%=15s075ms200.017us; 99%=15s212ms201.$
81us
Metric: DeviceLockWait
TotalSamples: 73
Accumulator: 277.621us
ValueRate: 000.765us / second
Rate: 0.201229 / second
Percentiles: 1%=002.159us; 5%=002.515us; 10%=002.707us; 20%=002.944us; 50%=003.671us; 80%=004.275us; 90%=004.708us; 95%=004.854us; 99%=015.004us
Metric: ExecuteTime
TotalSamples: 73
Accumulator: 03s919ms069.706us
ValueRate: 008ms722.713us / second
Rate: 0.193129 / second
Percentiles: 1%=001ms485.104us; 5%=002ms714.332us; 10%=002ms000.342us; 20%=002ms237.048us; 50%=003ms337.952us; 80%=098ms610.960us; 90%=126ms721.599us; 95%=139ms781.481us; 99%=154ms800.680us
Metric: InboundData
TotalSamples: 72
Accumulator: 234.19MB
ValueRate: 634.49KB / second
Rate: 0.190499 / second
Percentiles: 1%=1.00B; 5%=1.00B; 10%=1.00B; 20%=8.00KB; 50%=6.00MB; 80%=6.00MB; 90%=7.50MB; 95%=7.50MB; 99%=7.50MB
Metric: InputOutputAliasCount
TotalSamples: 1
Accumulator: 271.00
Percentiles: 1%=271.00; 5%=271.00; 10%=271.00; 20%=271.00; 50%=271.00; 80%=271.00; 90%=271.00; 95%=271.00; 99%=271.00
Metric: IrValueTensorToXlaData
TotalSamples: 331
Accumulator: 03s006ms264.150us
ValueRate: 008ms922.872us / second
Rate: 0.872335 / second
Percentiles: 1%=863.555us; 5%=967.491us; 10%=001ms069.569us; 20%=001ms215.703us; 50%=002ms606.635us; 80%=007ms211.581us; 90%=022ms513.355us; 95%=029ms074.835us; 99%=067ms409.847us
Metric: OutboundData
TotalSamples: 335
Accumulator: 1.01GB
ValueRate: 2.73MB / second
Rate: 0.881721 / second
Percentiles: 1%=3.00KB; 5%=3.00KB; 10%=3.00KB; 20%=3.00KB; 50%=14.00KB; 80%=2.25MB; 90%=10.50MB; 95%=10.50MB; 99%=18.00MB
Metric: ReleaseDataHandlesTime
TotalSamples: 81
Accumulator: 333ms705.496us
ValueRate: 880.219us / second
Rate: 0.214297 / second
Percentiles: 1%=382.511us; 5%=474.639us; 10%=522.986us; 20%=611.054us; 50%=001ms050.138us; 80%=001ms216.637us; 90%=003ms012.896us; 95%=031ms474.989us; 99%=038ms143.816us
Metric: TensorsGraphSize
TotalSamples: 73
Accumulator: 83903.00
ValueRate: 222.06 / second
Rate: 0.193203 / second
Percentiles: 1%=4.00; 5%=4.00; 10%=4.00; 20%=23.00; 50%=67.00; 80%=2874.00; 90%=3673.00; 95%=4075.00; 99%=4474.00
Metric: TransferFromServerTime [141/1996]
TotalSamples: 72
Accumulator: 850ms040.762us
ValueRate: 002ms249.054us / second
Rate: 0.190499 / second
Percentiles: 1%=850.857us; 5%=001ms079.063us; 10%=001ms135.278us; 20%=001ms285.135us; 50%=015ms444.166us; 80%=021ms375.969us; 90%=027ms938.459us; 95%=030ms432.630us; 99%=046ms339.680us
Metric: TransferToServerTime
TotalSamples: 335
Accumulator: 03s025ms272.057us
ValueRate: 008ms972.967us / second
Rate: 0.882877 / second
Percentiles: 1%=857.302us; 5%=959.191us; 10%=001ms060.268us; 20%=001ms210.822us; 50%=002ms606.569us; 80%=007ms260.753us; 90%=021ms492.181us; 95%=029ms982.476us; 99%=067ms384.995us
Metric: TransferToServerTransformTime
TotalSamples: 335
Accumulator: 460ms996.455us
ValueRate: 001ms210.712us / second
Rate: 0.881721 / second
Percentiles: 1%=087.734us; 5%=094.554us; 10%=099.654us; 20%=107.230us; 50%=268.367us; 80%=612.733us; 90%=003ms313.737us; 95%=006ms138.063us; 99%=009ms517.447us
Counter: CachedCompile
Value: 33
Counter: CreateCompileHandles
Value: 40
Counter: CreateDataHandles
Value: 692
Counter: CreateXlaTensor
Value: 3897
Counter: DestroyDataHandles
Value: 343
Counter: DestroyXlaTensor
Value: 3608
Counter: MarkStep
Value: 1
Counter: ReleaseDataHandles
Value: 343
Counter: UncachedCompile
Value: 40
Counter: XRTAllocateFromTensor_Empty
Value: 20
Counter: XrtCompile_Empty
Value: 144
Counter: XrtExecuteChained_Empty
Value: 144
Counter: XrtExecute_Empty
Value: 144
Counter: XrtRead_Empty
Value: 144
Counter: XrtReleaseAllocationHandle_Empty
Value: 144
Counter: XrtReleaseCompileHandle_Empty
Value: 144
Counter: XrtSessionCount
Value: 10
Counter: XrtSubTuple_Empty
Value: 144
Counter: aten::_local_scalar_dense
Value: 12
Counter: aten::unfold
Value: 60
Counter: xla::_softmax
Value: 12
Counter: xla::_unsafe_view
Value: 72
Counter: xla::add
Value: 27
Counter: xla::add_
Value: 84
Counter: xla::addcmul
Value: 25
Counter: xla::addmm
Value: 1
Counter: xla::as_strided
Value: 271
Counter: xla::bmm
Value: 36
Counter: xla::clone
Value: 24
Counter: xla::constant_pad_nd
Value: 48
Counter: xla::copy_
Value: 394
Counter: xla::cumsum
Value: 1
Counter: xla::div_
Value: 12
Counter: xla::embedding
Value: 3
Counter: xla::empty
Value: 359
Counter: xla::empty_strided
Value: 271
Counter: xla::eq
Value: 48
Counter: xla::expand
Value: 120
Counter: xla::fill_
Value: 36
Counter: xla::flip
Value: 48
Counter: xla::gelu
Value: 12
Counter: xla::gt
Value: 12
Counter: xla::index_select
Value: 3
Counter: xla::le
Value: 12
Counter: xla::lt
Value: 12
Counter: xla::masked_fill_
Value: 72
Counter: xla::max
Value: 12
Counter: xla::mm
Value: 72
Counter: xla::mul
Value: 2
Counter: xla::native_batch_norm
Value: 25
Counter: xla::native_layer_norm
Value: 25
Counter: xla::ne
Value: 13
Counter: xla::permute
Value: 180
Counter: xla::rsub
Value: 1
Counter: xla::select
Value: 97
Counter: xla::slice
Value: 999
Counter: xla::squeeze
Value: 24
Counter: xla::sum
Value: 12
Counter: xla::t
Value: 73
Counter: xla::tanh
Value: 1
Counter: xla::transpose
Value: 240
Counter: xla::tril
Value: 24
Counter: xla::unsqueeze
Value: 170
Counter: xla::view
Value: 644
Counter: xla::zero_
Value: 1
Metric: XrtAllocateFromTensor
TotalSamples: 48135
Accumulator: 01m10s487ms137.203us
Mean: 002ms504.791us
StdDev: 006ms961.073us
Rate: 1.03083 / second
Percentiles: 25%=295.798us; 50%=458.079us; 80%=002ms686.172us; 90%=003ms916.758us; 95%=004ms695.148us; 99%=008ms407.314us
Metric: XrtCompile
TotalSamples: 2122
Accumulator: 10m56s974ms699.040us
Mean: 505ms763.352us
StdDev: 02s338ms482.396us
Rate: 0.114957 / second
Percentiles: 25%=008ms570.206us; 50%=008ms862.980us; 80%=008ms259.798us; 90%=009ms638.784us; 95%=611ms324.713us; 99%=13s233ms291.015us
Metric: XrtExecute
TotalSamples: 20796
Accumulator: 02m59s103ms661.768us
Mean: 004ms131.993us
StdDev: 017ms650.704us
Rate: 0.114971 / second
Percentiles: 25%=851.542us; 50%=956.518us; 80%=001ms210.393us; 90%=002ms377.763us; 95%=006ms024.523us; 99%=110ms012.002us
Metric: XrtExecutorEvict
TotalSamples: 0
Accumulator: nanB
Mean: nanB
StdDev: nanB
Percentiles:
Metric: XrtReadLiteral
TotalSamples: 10335
Accumulator: 05s641ms262.404us
Mean: 774.616us
StdDev: 002ms725.146us
Rate: 0.114966 / second
Percentiles: 25%=269.442us; 50%=343.087us; 80%=470.896us; 90%=583.015us; 95%=005ms062.565us; 99%=010ms496.053us
Metric: XrtReleaseAllocation
TotalSamples: 34172
Accumulator: 02s911ms410.970us
Mean: 185.145us
StdDev: 322.759us
Rate: 0.115061 / second
Percentiles: 25%=020.634us; 50%=033.172us; 80%=338.061us; 90%=648.733us; 95%=861.389us; 99%=002ms549.868us
Metric: XrtReleaseCompilation
TotalSamples: 518
Accumulator: 002ms770.287us
Mean: 003.418us
StdDev: 002.299us
Rate: 81.2152 / second
Percentiles: 25%=002.889us; 50%=003.118us; 80%=003.383us; 90%=003.659us; 95%=003.945us; 99%=019.823us
``` |
st206066 | I am trying to convert a pretrained model (EfficientNet) which I have trained on some custom images and new labels. But when using tf2onnx to convert it to ONNX format, it requires a checkpoint .meta file, which I can't see anywhere. I only see a .index and a .data file from the model after training.
How can I convert a custom model which is using transfer learning? I downloaded the model from the TensorFlow Model Zoo.
Thanks for any help! |
st206067 | You can try to export the checkpoint file to .pb with the OD API exporters, then use tf2onnx. But I don't know if it works with EfficientNets. The script is models/research/object_detection/exporter_main_v2.py |
st206068 | Thank you for your reply. I was able to run
python3 models/research/object_detection/exporter_main_v2.py --output_directory output_model --pipeline_config_path config/pipeline.config --trained_checkpoint_dir trained_checkpoints
which created a new .pb file in the outputs directory.
Do you know if it uses any of the checkpoint information in the pipeline.config, or will it only use the one in the trained_checkpoints directory? In other words, does it just copy the pipeline.config file without using any information from it? |
st206069 | It should use the latest checkpoint from trained_checkpoint_dir.
I think pipeline.config is needed to build the model. |
st206070 | Thanks for your reply. Seems that I got it working by following your tip above.
Thanks again! |
st206071 | I heard someone saying that TensorFlow and Keras are much faster than PyTorch in terms of production inference. If I wish to deploy a trained model to production, should I code in Keras? |
st206072 | Hey there,
I just want to know, if I want to expand only one dimension in the middle for a tensor like [3, 5], both of those statements give me [3, 1, 5], which one is better?
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import RepeatVector

y1 = np.random.randn(3, 5)
y1_exp = tf.expand_dims(y1, axis=1)
print(y1_exp.shape)
y2_exp = RepeatVector(1)(y1)
print(y2_exp.shape)
🙇🏻♂️ |
st206073 | I tried to use the FAQ:
keras.io
Keras documentation: Keras FAQ 3
doing it this way:
import numpy as np
import tensorflow as tf
import os
import random as rn
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(123)
rn.seed(123)
tf.random.set_seed(1234)
but it simply doesn't work.
I am using it for an RNN LSTM.
Please help! |
st206074 | This is likely because of the non-deterministic CUDA kernels being fired at the backend. You can use the tensorflow-determinism tool from NVIDIA to fix this.
Here’s a guide 29 that takes a deep dive into good reproducibility practices in TensorFlow. |
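Concretely, a sketch combining the seeding from the FAQ with the determinism switch (TF_DETERMINISTIC_OPS is the environment variable the NVIDIA determinism work documents for TF 2.x; newer TF versions also expose tf.config.experimental.enable_op_determinism(), so check your version):
```python
import os
import random as rn

# Set before importing TensorFlow so the flag is read at initialization.
os.environ["PYTHONHASHSEED"] = "0"
os.environ["TF_DETERMINISTIC_OPS"] = "1"  # ask TF/cuDNN for deterministic kernels

import numpy as np
import tensorflow as tf

np.random.seed(123)
rn.seed(123)
tf.random.set_seed(1234)
```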
st206075 | Hi, I'm new to this field and I'm trying to do minority class sampling.
I have about 754,975 cropped CT images; the size of each one is 19 × 19 × 19, saved as .npy on my local disk.
The truth table is saved as .csv with the state of each image, non-nodule or nodule (0, 1). The data is imbalanced, with 1,186 images = 1 and all the rest = 0.
I need to do minority class sampling as follows:
2,000 images for the validation set (700 nodule, 1,300 non-nodule).
752,975 images for the training set (486 nodule, 752,489 non-nodule).
I tried to do it with the following code, but the problem is that the allocated memory exceeds my PC's memory (32 GB):
import os
import sys
import gc
import numpy as np
import pandas

nodules_path = "~/cropped_nodules/"
nodules_csv = pandas.read_csv("~/cropped_nodules_2.csv")
positive = 0
negative = 0
x_val = []
x_train = []
y_train = []
y_val = []
for _, nodule in nodules_csv.iterrows():
    if nodule.state == 1 and positive <= 700 and len(x_val) <= 2000:
        positive += 1
        x_val_img = str(nodule.SN) + ".npy"
        x_val.append(np.load(os.path.join(nodules_path, x_val_img)))
        y_val.append(nodule.state)
    elif nodule.state == 0 and negative <= 1300 and len(x_val) <= 2000:
        x_val_img = str(nodule.SN) + ".npy"
        negative += 1
        x_val.append(np.load(os.path.join(nodules_path, x_val_img)))
        y_val.append(nodule.state)
    else:
        if len(x_train) % 10000 == 0:
            gc.collect()
            print("gc done")
        x_train_img = str(nodule.SN) + ".npy"
        x_train.append(np.load(os.path.join(nodules_path, x_train_img)))
        y_train.append(nodule.state)
        print("x_train len= ", len(x_train))
        print("Size of list1: " + str(sys.getsizeof(x_train)) + "bytes")
I tried many things to stop filling up the memory, but I think the real solution is not to load the whole dataset into memory at all; I should find another method.
This post on Stack Overflow summarizes my problem and my attempts to solve the memory issue:
stackoverflow.com
pd.iterrows() consume all the memory and gives an error (Process finished with exit code 137 (interrupted by signal 9: SIGKILL)) 1
python, pandas, numpy
asked by
Mustafa Mahmood
on 07:02PM - 25 Apr 21 UTC
I couldn't figure out how to properly load the data using a TensorFlow dataset, or any other method.
I know the data is really imbalanced; I'll try several things to overcome the imbalance
(minority class sampling, data augmentation, minority oversampling, and a weighted loss like binary cross-entropy).
Any help will be appreciated, thanks in advance. |
st206076 | See if this works for you:
TensorFlow
tf.data: Build TensorFlow input pipelines | TensorFlow Core 4
Earlier this summer I implemented a stratified sampler with tf.data that you could refer to as well:
github.com
sayakpaul/PAWS-TF/blob/main/utils/labeled_loader.py 2
from . import multicrop_loader, config
import tensorflow as tf
import numpy as np
import os
GLOBAL_SCALE = [0.75, 1.0]
AUTO = tf.data.AUTOTUNE
(X_TRAIN, Y_TRAIN), (_, _) = tf.keras.datasets.cifar10.load_data()
def onehot_encode(labels, label_smoothing=0.1):
"""
One-hot encode labels with label smoothing.
:param labels: (batch_size, )
return: one-hot encoded labels with optional label smoothing
"""
labels = tf.one_hot(labels, depth=10)
# Reference: https://t.ly/CSYO)
labels *= 1.0 - label_smoothing
(file preview truncated)
The script is a bit involved so please feel free to ask questions as needed. |
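As a rough sketch of the lazy-loading idea for this exact case (assumptions: the CSV has the SN and state columns from the original post, and tf.py_function wraps the per-file np.load so nothing beyond the current batch is held in memory):
```python
import os
import numpy as np
import pandas as pd
import tensorflow as tf

nodules_csv = pd.read_csv(os.path.expanduser("~/cropped_nodules_2.csv"))
base = os.path.expanduser("~/cropped_nodules/")
paths = [os.path.join(base, str(sn) + ".npy") for sn in nodules_csv.SN]
labels = nodules_csv.state.values.astype("int32")

def load_npy(path):
    # Runs as plain Python per element; only one file is in memory at a time.
    return np.load(path.numpy().decode()).astype("float32")

def parse(path, label):
    volume = tf.py_function(load_npy, [path], tf.float32)
    volume.set_shape([19, 19, 19])  # known crop size from the post
    return volume, label

ds = tf.data.Dataset.from_tensor_slices((paths, labels))
ds = ds.map(parse, num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.batch(32).prefetch(tf.data.AUTOTUNE)
```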
st206077 | Closely related to:
https://discuss.tensorflow.org/t/model-checkpointing-best-practices-when-using-train-step
When I am using this the callback is unable to keep track of the metric values correctly:
(screenshot of the callback's metric warnings omitted)
I notice something similar in the SimSiam example I linked in my previous post as well. Am I missing something? |
st206078 | Are these metric outputs both before or after the weight update step?
Because if not, these printed values are not the same. |
st206079 | Also, have you implemented def metrics(self) or called reset_states() if you are using a custom model with a custom train loop?
TensorFlow
Customize what happens in Model.fit | TensorFlow Core 2 |
st206080 | Refer to this example: Self-supervised contrastive learning with SimSiam 9. It’s all laid out there. |
st206081 | @Bhack you are actually right. I probably need to implement a tracker for this to work. I will do that and update here. Sorry. |
st206082 | Dear everyone:
I want to know which versions of TensorFlow support the class_weight argument? It seems that recent versions of TensorFlow have restricted the use of class_weight.
Thanks
best wishes |
st206083 | @jiachen_luo, I think it is not restricted and you can still use it in the latest version. Please take a look at this 7 |
st206084 | Thanks. I tried to follow the tutorial, but it still reported an error as follows: ValueError: class_weight not supported for 3+ dimensional targets.
The code is class_weight={0: 4, 1: 15, 2: 15, 3: 3, 4: 1, 5: 6, 6: 3}, and the error is ValueError: class_weight not supported for 3+ dimensional targets.
def train_model(self):
    checkpoint = ModelCheckpoint(self.PATH, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
    if self.modality == "audio":
        model = self.get_audio_model()
        model.compile(optimizer='adadelta', loss='categorical_crossentropy', sample_weight_mode='temporal')
    elif self.modality == "text":
        model = self.get_text_model()
        model.compile(optimizer='adadelta', loss='categorical_crossentropy', sample_weight_mode='temporal')
    elif self.modality == "bimodal":
        model = self.get_bimodal_model()
        model.compile(optimizer='adam', loss='categorical_crossentropy', sample_weight_mode='temporal')
    early_stopping = EarlyStopping(monitor='val_loss', patience=10)
    model.fit(self.train_x, self.train_y,
              epochs=self.epochs,
              batch_size=self.batch_size,
              sample_weight=self.train_mask,
              shuffle=True,
              class_weight=class_weight,
              callbacks=[early_stopping, checkpoint],
              validation_data=(self.val_x, self.val_y, self.val_mask))
    self.test_model()
Could you give me more detailed guidance to solve this problem?
Thanks.
Best wishes. |
st206085 | Yes, this is known. For that you need to use sample_weight instead. The following tutorial demos one use-case:
TensorFlow
Image segmentation | TensorFlow Core 1
Immense thanks to @markdaoust for providing this section. |
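A sketch of that tutorial's trick adapted to the weights from this thread (assumption: y holds integer class ids per timestep; if it is one-hot, take an argmax first):
```python
import tensorflow as tf

class_weights = tf.constant([4.0, 15.0, 15.0, 3.0, 1.0, 6.0, 3.0])

def add_sample_weights(x, y):
    # tf.gather turns the per-class weight table into a per-element weight map
    # with the same shape as y, which model.fit accepts as sample weights.
    weights = tf.gather(class_weights, indices=tf.cast(y, tf.int32))
    return x, y, weights

# train_ds = train_ds.map(add_sample_weights) before calling model.fit(train_ds)
```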
st206086 | We already had a thread at
`class_weight` not supported for 3+ dimensional targets General Discussion
Dear everyone:
I’m new to tensorflow. The coding as follows:
def train_model(self):
checkpoint = ModelCheckpoint(self.PATH, monitor=‘val_loss’, verbose=1, save_best_only=True, mode=‘auto’)
if self.modality == “audio”:
model = self.get_audio_model()
model.compile(optimizer=‘adadelta’, loss=‘categorical_crossentropy’, sample_weight_mode=‘temporal’)
elif self.modality == “text”:
model = self.get_text_model()
model.compile(optimizer=‘adadelta’, loss=‘categorical_crossentropy’, sample_weigh… |
st206087 | Dear everyone:
I’m new to tensorflow. The coding as follows:
def train_model(self):
    checkpoint = ModelCheckpoint(self.PATH, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
    if self.modality == "audio":
        model = self.get_audio_model()
        model.compile(optimizer='adadelta', loss='categorical_crossentropy', sample_weight_mode='temporal')
    elif self.modality == "text":
        model = self.get_text_model()
        model.compile(optimizer='adadelta', loss='categorical_crossentropy', sample_weight_mode='temporal')
    elif self.modality == "bimodal":
        model = self.get_bimodal_model()
        model.compile(optimizer='adam', loss='categorical_crossentropy', sample_weight_mode='temporal')
    early_stopping = EarlyStopping(monitor='val_loss', patience=10)
    model.fit(self.train_x, self.train_y,
              epochs=self.epochs,
              batch_size=self.batch_size,
              sample_weight=self.train_mask,
              class_weight={0: 4.0, 1: 15.0, 2: 15.0, 3: 3.0, 4: 1.0, 5: 6.0, 6: 3.0},
              shuffle=True,
              callbacks=[early_stopping, checkpoint],
              validation_data=(self.val_x, self.val_y, self.val_mask))
    self.test_model()
To be honest, the class_weight = {0: 4.0, 1: 15.0, 2: 15.0, 3: 3.0, 4: 1.0, 5: 6.0, 6: 3.0} was added by myself to adjust the class weights. However, it reported the error: ValueError: class_weight not supported for 3+ dimensional targets.
The full error is as follows:
ValueError Traceback (most recent call last)
~\baseline.py in
288 model.test_model()
289 else:
→ 290 model.train_model()
~\baseline.py in train_model(self)
219
220 early_stopping = EarlyStopping(monitor=‘val_loss’, patience=10)
→ 221 model.fit(self.train_x, self.train_y,
222 epochs=self.epochs,
223 batch_size=self.batch_size,
F:\Anaconda\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1106 training_utils.RespectCompiledTrainableState(self):
1107 # Creates a tf.data.Dataset and handles batch and epoch iteration.
→ 1108 data_handler = data_adapter.get_data_handler(
1109 x=x,
1110 y=y,
F:\Anaconda\lib\site-packages\keras\engine\data_adapter.py in get_data_handler(*args, **kwargs)
1346 if getattr(kwargs[“model”], “_cluster_coordinator”, None):
1347 return _ClusterCoordinatorDataHandler(*args, **kwargs)
→ 1348 return DataHandler(*args, **kwargs)
1349
1350
F:\Anaconda\lib\site-packages\keras\engine\data_adapter.py in init(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution, distribute)
1156 self._insufficient_data = False
1157
→ 1158 self._configure_dataset_and_inferred_steps(strategy, x, steps_per_epoch,
1159 class_weight, distribute)
1160
F:\Anaconda\lib\site-packages\keras\engine\data_adapter.py in _configure_dataset_and_inferred_steps(failed resolving arguments)
1168 dataset = self._adapter.get_dataset()
1169 if class_weight:
→ 1170 dataset = dataset.map(_make_class_weight_map_fn(class_weight))
1171 self._inferred_steps = self._infer_steps(steps_per_epoch, dataset)
1172
F:\Anaconda\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in map(self, map_func, num_parallel_calls, deterministic)
1923 warnings.warn(“The deterministic argument has no effect unless the "
1924 "num_parallel_calls argument is specified.”)
→ 1925 return MapDataset(self, map_func, preserve_cardinality=True)
1926 else:
1927 return ParallelMapDataset(
F:\Anaconda\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in init(self, input_dataset, map_func, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
4481 self._use_inter_op_parallelism = use_inter_op_parallelism
4482 self._preserve_cardinality = preserve_cardinality
→ 4483 self._map_func = StructuredFunctionWrapper(
4484 map_func,
4485 self._transformation_name(),
F:\Anaconda\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in init(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
3710 resource_tracker = tracking.ResourceTracker()
3711 with tracking.resource_tracker_scope(resource_tracker):
→ 3712 self._function = fn_factory()
3713 # There is no graph to add in eager mode.
3714 add_to_graph &= not context.executing_eagerly()
F:\Anaconda\lib\site-packages\tensorflow\python\eager\function.py in get_concrete_function(self, *args, **kwargs)
3132 or tf.Tensor or tf.TensorSpec.
3133 “”"
→ 3134 graph_function = self._get_concrete_function_garbage_collected(
3135 *args, **kwargs)
3136 graph_function._garbage_collector.release() # pylint: disable=protected-access
F:\Anaconda\lib\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
3098 args, kwargs = None, None
3099 with self._lock:
→ 3100 graph_function, _ = self._maybe_define_function(args, kwargs)
3101 seen_names = set()
3102 captured = object_identity.ObjectIdentitySet(
F:\Anaconda\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
3442
3443 self._function_cache.missed.add(call_context_key)
→ 3444 graph_function = self._create_graph_function(args, kwargs)
3445 self._function_cache.primary[cache_key] = graph_function
3446
F:\Anaconda\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3277 arg_names = base_arg_names + missing_arg_names
3278 graph_function = ConcreteFunction(
→ 3279 func_graph_module.func_graph_from_py_func(
3280 self._name,
3281 self._python_function,
F:\Anaconda\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
997 _, original_func = tf_decorator.unwrap(python_func)
998
→ 999 func_outputs = python_func(*func_args, **func_kwargs)
1000
1001 # invariant: func_outputs contains only Tensors, CompositeTensors,
F:\Anaconda\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in wrapped_fn(*args)
3685 attributes=defun_kwargs)
3686 def wrapped_fn(*args): # pylint: disable=missing-docstring
→ 3687 ret = wrapper_helper(*args)
3688 ret = structure.to_tensor_list(self._output_structure, ret)
3689 return [ops.convert_to_tensor(t) for t in ret]
F:\Anaconda\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in wrapper_helper(*args)
3615 if not _should_unpack(nested_args):
3616 nested_args = (nested_args,)
→ 3617 ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
3618 if _should_pack(ret):
3619 ret = tuple(ret)
F:\Anaconda\lib\site-packages\tensorflow\python\autograph\impl\api.py in wrapper(*args, **kwargs)
693 except Exception as e: # pylint:disable=broad-except
694 if hasattr(e, ‘ag_error_metadata’):
→ 695 raise e.ag_error_metadata.to_exception(e)
696 else:
697 raise
ValueError: in user code:
F:\Anaconda\lib\site-packages\keras\engine\data_adapter.py:1385 _class_weights_map_fn *
raise ValueError("`class_weight` not supported for "
ValueError: `class_weight` not supported for 3+ dimensional targets.
How to fix it? Thanks
Best wishes.
jiachen |
st206088 | The long story is
github.com/tensorflow/tensorflow
Adding a utility to penalize majority class pixels in the Segmentation tutorial 55
opened
Apr 23, 2021
closed
May 4, 2021
sayakpaul
stat:awaiting tensorflower
type:docs-bug
type:feature
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
The TensorFlow docs are open source! To get involved, read the documentation
contributor guide: https://www.tensorflow.org/community/contribute/docs
## URL(s) with the issue:
Please provide a link to the documentation entry, for example:
https://www.tensorflow.org/tutorials/images/segmentation
## Description of issue (what needs changing):
It's not really an issue, a suggestion rather.
### Clear description
Semantic segmentation datasets can be highly imbalanced, meaning that pixels of particular classes can be present inside images far more often than those of other classes. Since segmentation problems can be treated as per-pixel classification problems, we can deal with the imbalance by weighting the loss function to account for it. It's a simple and elegant way to deal with this problem. Other solutions include ensuring that a batch of samples (during training) always contains some fixed proportion of positive classes.
However, TensorFlow does not yet support the `class_weight` argument in `model.fit()` for targets that are 3D (for segmentation problems, we are essentially predicting a map of shape `[batch_size, height, width, nb_channels]`). One way to get around this problem is to use `sample_weight` instead. But then again, it's not very clear how to do that properly, particularly with `tf.data` pipelines.
Multiple folks have tried several hacks to get around this problem but it keeps coming back (see [here](https://github.com/keras-team/keras/issues/3653)). Therefore, I think the tutorial in question is a perfect opportunity to demonstrate the use case.
Cc: @MarkDaoust
github.com/keras-team/keras
Keras - how to use class_weight with 3D data 73
opened Sep 1, 2016 by bsafacicek
Hi,
I am using Keras to segment images into road and background pixels. As you can imagine, the percentage of road pixels is much lower than that of background pixels. Hence, I want to use class_weight={0:0.05, 1:0.95} while fitting the model so that the CNN won't predict every pixel as background. But when I do this I get the following error:
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 597, in fit
sample_weight=sample_weight)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1035, in fit
batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 973, in _standardize_user_data
in zip(y, sample_weights, class_weights, self.sample_weight_modes)]
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 387, in standardize_weights
raise Exception('class_weight not supported for '
Exception: class_weight not supported for 3+ dimensional targets.
My training labels are in this form: (number_of_training_samples=10000, number_of_pixels_in_patch=16384, number_of_classes=2). How can I weight the classes in Keras?
Thanks in advance.
/cc @Sayak_Paul @markdaoust
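To make the sample_weight route concrete, here is a minimal sketch assuming a tf.data pipeline train_ds of (image, mask) pairs, where the mask holds integer class ids with shape [height, width, 1] and model is your segmentation model; the weight values are placeholders:

import tensorflow as tf

# Hypothetical per-class weights; replace with values suited to your data.
class_weights = tf.constant([2.0, 2.0, 1.0])
class_weights = class_weights / tf.reduce_sum(class_weights)

def add_sample_weights(image, label):
    # Index the weight vector with the class id at each pixel, producing
    # a per-pixel weight map with the same shape as the label.
    sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32))
    return image, label, sample_weights

model.fit(train_ds.map(add_sample_weights).batch(16), epochs=5)

Since model.fit accepts (inputs, targets, sample_weights) triples from a dataset, class_weight can be dropped entirely. |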
st206089 | I tried to follow the tutorial, but it still reports the same error. Passing class_weight={0: 4, 1: 15, 2: 15, 3: 3, 4: 1, 5: 6, 6: 3} to model.fit gives: ValueError: `class_weight` not supported for 3+ dimensional targets.
def train_model(self):
checkpoint = ModelCheckpoint(self.PATH, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
if self.modality == "audio":
model = self.get_audio_model()
model.compile(optimizer='adadelta', loss='categorical_crossentropy', sample_weight_mode='temporal')
elif self.modality == "text":
model = self.get_text_model()
model.compile(optimizer='adadelta', loss='categorical_crossentropy', sample_weight_mode='temporal')
elif self.modality == "bimodal":
model = self.get_bimodal_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', sample_weight_mode='temporal')
early_stopping = EarlyStopping(monitor='val_loss', patience=10)
model.fit(self.train_x, self.train_y,
epochs=self.epochs,
batch_size=self.batch_size,
sample_weight=self.train_mask,
shuffle=True,
class_weight=class_weight,
callbacks=[early_stopping, checkpoint],
validation_data=(self.val_x, self.val_y, self.val_mask))
self.test_model() |
st206090 | Is it possible to use the TensorFlow Matrix Compression Operator 5 inside a Keras model? If so, is there an example that shows how to do it? |
st206091 | For tf.keras.layers.RNN, there’s a reset_states method that resets the recorded state to some pre-defined input. Is there some way to extract what the current recorded state is? |
st206092 | The documentation mentions a states attribute, which looks readable. There's also a return_state argument in the constructor if you want to include the state in your graph along with your output. A quick sketch of reading states, assuming a stateful LSTM so the layer actually records state between batches (sizes are arbitrary):
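import tensorflow as tf

layer = tf.keras.layers.LSTM(8, stateful=True)
x = tf.random.normal([4, 10, 16])  # [batch, timesteps, features]
layer(x)

# For an LSTM, states is a pair of variables: [hidden_state, cell_state].
hidden_state, cell_state = layer.states
print(hidden_state.shape)  # (4, 8)

layer.reset_states()  # zeros the recorded state again

Note that layer.states holds tf.Variable objects, so read them before calling reset_states. |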
st206093 | You can also take a look at this example that shows how to retrieve the current state of an RNN layer like LSTM and pass that to initialize another:
TensorFlow
Neural machine translation with attention | Text | ... 2
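As a condensed sketch of that pattern (layer sizes made up), the first layer exposes its final state via return_state, and the second starts from it via initial_state:

import tensorflow as tf

units = 32
encoder_in = tf.random.normal([2, 7, 16])
decoder_in = tf.random.normal([2, 5, 16])

# The encoder returns its final hidden and cell state alongside its output...
enc_out, state_h, state_c = tf.keras.layers.LSTM(units, return_state=True)(encoder_in)

# ...and the decoder starts from that state instead of zeros.
dec_out = tf.keras.layers.LSTM(units, return_sequences=True)(
    decoder_in, initial_state=[state_h, state_c])

The tutorial applies the same handoff, just with trained layers instead of random inputs. |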
st206094 | All three of these seem to be very closely related, and all are used after a model is created and trained.
Explainable AI :- Shows why the model has given the prediction it did. It shows you which part of an image was more prominent in making a decision, or which feature in text is more prominent.
TensorBoard :- Provides a holistic view of metrics, loss, fairness, hyperparameter tuning, and model performance on the machine.
What-If Tool :- This combines both explainable-AI and TensorBoard techniques, but only for text classification and text regression models.
It is very difficult, especially for beginners, to judge when to use which tool. Any help in this regard? Is there an easy cheat sheet that clearly tells the difference and helps me understand when to use what? |
st206095 | All three of these seem to be closely related and are used after a model is created and trained, but they serve different purposes.
Explainable AI :- Shows why the model has given the prediction it did. Using feature attribution, it shows you which part of an image was more prominent in making a decision, or which feature in text is more prominent.
TensorBoard :- Provides a holistic view of metrics, loss, fairness, hyperparameter tuning, and model performance.
What-If Tool :- This is part of TensorBoard, but you can also use it outside of TensorBoard, in a Jupyter notebook. Here you pass it the trained model as well as the data, and together these give you a much broader visual picture of loss measures, model fairness, etc.
In summary:
Explainable AI is used to get a deeper understanding of a model beyond your normal evaluation metrics like ROC/AUC, RMSE, etc. It gives you details about which features in your trained model impact the result and to what extent. Note that here you cannot modify the values dynamically and test the model again on new values.
TensorBoard :- This is a visual tool used to get insight into your normal evaluation metrics like ROC/AUC, RMSE, etc. Apart from this, you can train your model with several hyperparameter settings, and TensorBoard gives you a visual view of the model's performance for the different hyperparameters; hence TensorBoard helps with hyperparameter tuning.
What-If Tool :- As mentioned above, it is part of TensorBoard. What it adds is that you can dynamically modify the input data and then see how the model performs, so it is interactive what-if analysis that happens in the What-If Tool.
Conclusion: All three tools are quite different from each other; they are not the same, even though they might look the same at first.
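To make the TensorBoard part concrete, a minimal sketch of logging a Keras training run (the model, data, and log directory here are placeholders):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# The callback writes the metrics that TensorBoard later visualizes.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")
model.fit(tf.random.normal([64, 4]), tf.random.normal([64, 1]),
          epochs=3, callbacks=[tb])

You would then launch the UI with tensorboard --logdir logs and browse the run there. |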
st206096 | When I run
tensorflowjs_converter --input_format=tf_hub 'tfhub.dev/google/universal-sentence-encoder-large/5' use-large
(the URL in the command starts with https:// but I can’t post URLs to this forum)
It responds
ValueError: Signature "default" does not exist in the saved model
I believe I correctly followed the directions in tensorflow/tfjs/tree/master/tfjs-converter on GitHub.
How can I proceed? |
st206097 | Ken_Kahn:
tensorflowjs_converter --input_format=tf_hub 'tfhub.dev/google/universal-sentence-encoder-large/5'
You should specify the --signature_name flag for the converter CLI, since the TF Hub module now seems to expose 'serving_default' as its signature instead of 'default':
tensorflowjs_converter --input_format=tf_hub --signature_name=serving_default 'https://tfhub.dev/google/universal-sentence-encoder-large/5' /tmp/web_model
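If in doubt, you can check which signature names a module actually exposes; a quick sketch using the TF Hub library (the call downloads and restores the SavedModel):

import tensorflow_hub as hub

loaded = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
# Prints the available signature names, e.g. ['serving_default'].
print(list(loaded.signatures.keys()))

Whatever appears there is what --signature_name should be set to. |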
st206098 | Thanks! It now says
ValueError: Unsupported Ops in the model before optimization
SegmentMean, ParallelDynamicStitch, StringJoin, SegmentSum, UnsortedSegmentSum, DynamicPartition, StaticRegexReplace
So am I right in assuming that without a huge effort I have to stick with the lite version of USE that has already been converted to TFJS? Or should I try converting the larger versions of USE to TensorFlow Lite which can be loaded into the browser? |
st206099 | In TFJS, if you find a model with unsupported ops, you can still run it in native mode with Node.js:
blog.tensorflow.org
Run a TensorFlow SavedModel in Node.js directly without conversion 2
The TensorFlow blog contains regular news from the TensorFlow team and the community, with articles on Python, TensorFlow.js, TF Lite, TFX, and more. |