st207600 | After some reading I came up with these two models @Kzyh @Bhack. I am still unsure if this is the right way to implement the architecture described above (image shared in replies, description in the intro).
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        keras.Input(shape=(128, 800, 2)),
        layers.Conv2D(16, (3, 3), activation='relu', padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(5, 4)),
        layers.Conv2D(8, kernel_size=(4, 1), strides=(4, 1), activation='relu', padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(6, 5)),
        layers.Reshape((40, 8)),
        layers.Bidirectional(layers.LSTM(8, return_sequences=True)),
        layers.Dense(7),
    ]
)
model.summary()
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        keras.Input(shape=(128, 800, 2)),
        layers.Conv2D(16, (3, 3), activation='relu', padding='same'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(5, 4)),
        layers.Conv2D(8, (4, 1), activation='relu', strides=(4, 1), padding='same'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(6, 5)),
        layers.TimeDistributed(layers.Flatten()),
        layers.Bidirectional(layers.LSTM(8, return_sequences=True)),
        layers.Dense(7),
    ]
)
model.summary()

model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adadelta(),
    metrics=["accuracy"],
)
If someone can look at the code and architecture described above and let me know in case I am missing something, that would be really helpful. |
st207601 | First model seems good. Don't know about the second one. Did you try training them? |
st207602 | Nope. I have never worked with speech input before, so I am unsure about the data pipeline for the model as well.
So as of now
I have a 52-minute long audio which I have annotated into 7 different categories by preparing a .txt file that looks like:
start-time, end-time, class
and
for feature extraction I have this code
import numpy as np
import librosa

sr = 44100
frame_length = 4096
hop_length = 1024

stream = librosa.stream('final.wav', block_length=800, frame_length=frame_length, hop_length=hop_length)

mel_specs_log_zcr = []
for y in stream:
    mel = librosa.feature.melspectrogram(y, sr=sr, center=False, n_fft=4096, hop_length=1024)
    log_mel = librosa.power_to_db(mel)
    zcr = librosa.feature.zero_crossing_rate(y, center=False, frame_length=4096, hop_length=1024)
    zcr = np.tile(zcr, (128, 1))
    mel_spec_log_zcr = np.stack((log_mel, zcr), axis=2)
    mel_specs_log_zcr.append(mel_spec_log_zcr) |
st207603 | import numpy as np
import librosa

sr = 44100
frame_length = 4096
hop_length = 1024

stream = librosa.stream('final.wav', block_length=800, frame_length=frame_length, hop_length=hop_length)

mel_specs_log_zcr = []
for y in stream:
    mel = librosa.feature.melspectrogram(y, sr=sr, center=False, n_fft=4096, hop_length=1024)
    log_mel = librosa.power_to_db(mel)
    zcr = librosa.feature.zero_crossing_rate(y, center=False, frame_length=4096, hop_length=1024)
    zcr = np.tile(zcr, (128, 1))
    mel_spec_log_zcr = np.stack((log_mel, zcr), axis=2)
    mel_specs_log_zcr.append(mel_spec_log_zcr)

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        keras.Input(shape=(128, 800, 2)),
        layers.Conv2D(16, (3, 3), activation='relu', padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(5, 4)),
        layers.Conv2D(8, kernel_size=(4, 1), strides=(4, 1), activation='relu', padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(6, 5)),
        layers.Reshape((40, 8)),
        layers.Bidirectional(layers.LSTM(8, return_sequences=True)),
        layers.Dense(7),
    ]
)
model.summary()

model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adadelta(),
    metrics=["accuracy"],
)

with open("final.txt", "r") as f:
    text = f.read()

labels = []
text = text.split("\n")
for t in text:
    line = t.split("\t")
    labels.append(line)

x_train = mel_specs_log_zcr
y_train = np.array(labels)

model.fit(x_train, y_train, batch_size=64, epochs=10)
This is what I have. But obviously this is not how the data should be fed into the CNN-LSTM model.
So I get the error:
ValueError: Data cardinality is ambiguous:
x sizes: 128, 128, 128, 128, 128, 128, 128, 128, ... (128 repeated once per spectrogram block)
y sizes: 1007
Make sure all arrays contain the same number of samples.
@Kzyh |
st207604 | @Kzyh final.txt is basically the file with annotation for producing the labels.
There are 1007 rows and every row has 3 values : start time, end time and the category.
These values are tab separated. Something like this.
(screenshot of the tab-separated annotation file omitted)
Can’t upload .txt files on the forum:( |
st207605 | I think your input and labels should be something like this:
x_train shape (batch, 128, 800, 2)
y_train shape (batch, 7)
y_train is calculated from your final.txt file using timestamps.
Let's say your first spectrogram starts at 0 and ends at 72.13. The label should look like this: [n, s1, i1, s1, i1, s1, i1]. Then you change it to class ids: [1, 2, 3, 2, 3, 2, 3].
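As a rough sketch of how that could look in code (hedged: the class names in class_to_id and the per-block start/end times are placeholders you would derive from your own stream blocks):
import numpy as np

class_to_id = {'n': 1, 's1': 2, 'i1': 3}  # hypothetical class-name-to-id mapping

def label_sequence_for_block(rows, block_start, block_end):
    # rows: the parsed [start, end, class] lines from final.txt
    # returns the ordered class ids whose annotated spans overlap this block
    ids = []
    for start, end, cls in rows:
        if float(end) > block_start and float(start) < block_end:
            ids.append(class_to_id[cls])
    return np.array(ids) |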
st207606 | Hello, I just tried implementing my own version of Scene GCNN from this paper (Holistic 3D Scene Understanding from a Single Image
with Implicit Representation by Zhang et al., arXiv: 2103.06422) using the TF2.0 framework (previous experience only in PyTorch).
When I run it, the warning I get is:
WARNING:tensorflow:Gradients do not exist for variables ['scene_gcnn/scene_gcn_conv_2/weight_rs/kernel:0', 'scene_gcnn/scene_gcn_conv_2/weight_rs/bias:0', 'scene_gcnn/scene_gcn_conv_2/weight_rd/kernel:0', 'scene_gcnn/scene_gcn_conv_2/weight_rd/bias:0'] when minimizing the loss.
Here’s my code down below:
import tensorflow as tf
from tensorflow.keras import activations, regularizers, constraints, initializers
import numpy as np

dot = tf.matmul
spdot = tf.sparse.sparse_dense_matmul

class Scene_GCNConv(tf.keras.layers.Layer):
    def __init__(self,
                 activation=lambda x: x,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 kernel_regularizer=None,
                 kernel_constraint=None,
                 bias_initializer='ones',
                 bias_regularizer=None,
                 bias_constraint=None,
                 activity_regularizer=None,
                 weight_shape=None,
                 **kwargs):
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_regularizer = regularizers.get(kernel_regularizer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.activity_regularizer = regularizers.get(activity_regularizer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.bias_constraint = constraints.get(bias_constraint)

        self.weight_shape = weight_shape
        self.weight_sd = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_sd")
        self.weight_sr = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_sr")
        self.weight_dr = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_dr")
        self.weight_rs = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_rs")
        self.weight_rd = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_rd")

        super(Scene_GCNConv, self).__init__()

    def call(self, z_o_prev, z_r_prev):
        # TODO: switched without z_o and z_r, and included all-ones adjacency matrix in initialization
        # adjacent_matrix = 1 - tf.eye(z_o_prev.shape[1])  # which is N + 1
        dim = z_o_prev.shape[1]
        adjacent_matrix = 1 - tf.eye(dim)  # which is N + 1

        z_o = self.update_object_nodes(z_o_prev, z_r_prev, adjacent_matrix)
        z_r = self.update_relationship_nodes(z_o_prev, z_r_prev, adjacent_matrix)

        output = [z_o, z_r]
        return output

    def update_object_nodes(self, object_nodes, relationship_nodes, adjacent_matrix):
        z_o = object_nodes
        z_r = relationship_nodes
        dim = adjacent_matrix.shape
        adjacent_matrix_r_compatible = tf.concat([adjacent_matrix, tf.ones([(dim[0] - 1) * dim[0], dim[1]])], axis=0)

        first_term = self.weight_sd(z_o)
        second_term = dot(adjacent_matrix_r_compatible, self.weight_sr(z_r), transpose_a=True)
        third_term = dot(adjacent_matrix_r_compatible, self.weight_dr(z_r), transpose_a=True)

        z_o = self.activation(first_term + second_term + third_term)
        return z_o

    def update_relationship_nodes(self, object_nodes, relationship_nodes, adjacent_matrix):
        z_o = object_nodes
        z_r = relationship_nodes
        dim = adjacent_matrix.shape
        adjacent_matrix_o_compatible = tf.concat([adjacent_matrix, tf.ones([dim[0], (dim[1] - 1) * dim[1]])], axis=1)

        first_term = dot(adjacent_matrix_o_compatible, self.weight_rs(z_o), transpose_a=True)
        second_term = dot(adjacent_matrix_o_compatible, self.weight_rd(z_o), transpose_a=True)

        z_r = self.activation(first_term + second_term)
        return z_r

## separate embedding transformation that should be inside an overall Scene Graph Conv Net
class Scene_GCNN(tf.keras.layers.Layer):
    def __init__(self,
                 activation=lambda x: x,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 kernel_regularizer=None,
                 kernel_constraint=None,
                 bias_initializer='ones',
                 bias_regularizer=None,
                 bias_constraint=None,
                 activity_regularizer=None,
                 weight_shape_array=None,
                 **kwargs):
        super(Scene_GCNN, self).__init__()
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_regularizer = regularizers.get(kernel_regularizer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.activity_regularizer = regularizers.get(activity_regularizer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.bias_constraint = constraints.get(bias_constraint)

        ## Initialize number of layers
        self.weight_shape_array = weight_shape_array
        self.num_iterations = len(weight_shape_array)
        self.sgcnn_layers = []
        for i, weight_shape in enumerate(self.weight_shape_array):
            self.sgcnn_layers.append(Scene_GCNConv(
                activation=self.activation,
                use_bias=self.use_bias,
                kernel_initializer=self.kernel_initializer,
                kernel_regularizer=self.kernel_regularizer,
                kernel_constraint=self.kernel_constraint,
                bias_initializer=self.bias_initializer,
                bias_regularizer=self.bias_regularizer,
                bias_constraint=self.bias_constraint,
                activity_regularizer=self.activity_regularizer,
                weight_shape=weight_shape))

        d = self.weight_shape_array[0][0]

        embed_relationship = tf.keras.models.Sequential()
        embed_relationship.add(tf.keras.Input(shape=(6,)))
        embed_relationship.add(tf.keras.layers.Dense(d, activation='relu'))
        embed_relationship.add(tf.keras.layers.Dense(d, activation=None))
        self.embed_relationship = embed_relationship

        embed_background = tf.keras.models.Sequential()
        embed_background.add(tf.keras.Input(shape=(3,)))
        embed_background.add(tf.keras.layers.Dense(d, activation='relu'))
        embed_background.add(tf.keras.layers.Dense(d, activation=None))
        self.embed_background = embed_background

        embed_slots = tf.keras.models.Sequential()
        embed_slots.add(tf.keras.Input(shape=(21,)))
        embed_slots.add(tf.keras.layers.Dense(d, activation='relu'))
        embed_slots.add(tf.keras.layers.Dense(d, activation=None))
        self.embed_slots = embed_slots

        final_embed_background = tf.keras.models.Sequential()
        final_embed_background.add(tf.keras.Input(shape=(21,)))
        final_embed_background.add(tf.keras.layers.Dense(3, activation=tf.keras.layers.LeakyReLU(alpha=0.01)))
        self.final_embed_background = final_embed_background

    # def call(self, slots, background_latent,):
    def call(self, inputs):
        slots = inputs[0]
        background_latent = inputs[1]
        # slots [B, num_obj, 21]
        # background_latent [B, 1, 3]
        background_latent = background_latent[:, None, :]

        object_nodes = self.get_object_nodes(slots, background_latent)
        relationship_nodes = self.get_relationship_nodes(slots)
        for i in range(self.num_iterations):
            object_nodes, relationship_nodes = self.sgcnn_layers[i](object_nodes, relationship_nodes)

        # object_nodes [B, num_object + 1, 21]
        slots = object_nodes[:, 0:-1, :]
        background_latent = self.final_embed_background(object_nodes[:, -1, :])

        # output = [object_nodes, relationship_nodes]
        output = [slots, background_latent]
        return output

    def get_object_nodes(self, slots=None, background_latent=None):
        # Embedding of slot
        slots_embedded = self.embed_slots(slots)
        # Embedding of background
        background_latent_embedded = self.embed_background(background_latent)
        object_nodes = tf.concat([slots_embedded, background_latent_embedded], axis=1)
        return object_nodes

    def get_relationship_nodes(self, slots):
        # Relationship nodes, between background and slots.
        # For nodes connecting two different objects, the geometry feature [20, 49] of 2D object bounding
        # boxes and the box corner coordinates of both connected objects normalized by the image height and
        # width are used as features.
        # In our example, we use x,y,z as values from each slot to get an (N+1)^2 x 2d matrix where d=(x,y,z).
        # The coordinates are flattened and concatenated in the order of source-destination,
        # which differentiates the relationships of different directions.
        # For nodes connecting objects and layouts, since the relationship is presumably
        # different from the object-object relationship, we initialize the representations with constant
        # values, leaving the job of inferring a reasonable relationship representation to SGCN.
        slots_extended = tf.concat([slots[:, :, 18:21], tf.ones([slots.shape[0], 1, 3])], axis=1)
        A = tf.repeat(slots_extended, axis=1, repeats=slots_extended.shape[1])
        # Add [B,1,latent_size] to both A and B to include layout
        B = tf.tile(slots_extended, multiples=[1, slots_extended.shape[1], 1])
        relationship_nodes = tf.concat([A, B], axis=2)
        relationship_latent_embedded = self.embed_relationship(relationship_nodes)
        relationship_nodes = relationship_latent_embedded
        return relationship_nodes

if __name__ == "__main__":
    weight_shape_array = [(64, 128), (128, 64), (64, 21)]
    scene_gcnn = Scene_GCNN(
        activation='sigmoid',
        use_bias=True,
        kernel_initializer='glorot_uniform',
        kernel_regularizer=None,
        kernel_constraint=None,
        bias_initializer='glorot_normal',
        bias_regularizer=None,
        bias_constraint=None,
        activity_regularizer=None,
        weight_shape_array=weight_shape_array)

    slots = tf.random.uniform([8, 3, 21])
    background_latent = tf.random.uniform([8, 3])
    print(background_latent)
    print(scene_gcnn([slots, background_latent]))

    # Made-up learning rate
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-7)
    is_training = True
    with tf.GradientTape() as tape:
        output = scene_gcnn([slots, background_latent])
        # Made-up losses, don't think about the metric
        losses_background = tf.reduce_sum(tf.random.uniform([8, 3]) - output[1])
        losses_foreground = tf.reduce_sum(tf.random.uniform([8, 3, 21]) - output[0])
        losses = losses_background + losses_foreground
    if is_training:
        variables = scene_gcnn.trainable_variables
        gradients = tape.gradient(losses, variables)
        optimizer.apply_gradients(zip(gradients, variables))
How do I avoid and solve this warning?
I know it's a long code sample, but I included a dummy test example which you can run directly with python (TF 2.0 required). I have tried debugging this and looking for my mistakes, but it has been an ongoing struggle to understand, with no results so far. |
st207607 | You can use:
optimizer.apply_gradients([
    (grad, var)
    for (grad, var) in zip(gradients, variables)
    if grad is not None
]) |
st207608 | To me your solution seems like a way to stop the warning from appearing, but not an explanation of why it happens and why those specific parameters are not updated. |
st207609 | I supposed you just wanted to suppress the warning.
Have you tried:
@tf.function
def call(self, z_o_prev, z_r_prev): |
st207610 | I see there are no more warnings, but I do not understand how that happens. Will the model now be updated according to the loss function, including those 4 previously warned-about parameters, or will they be skipped entirely?
According to the TF documentation, @tf.function only speeds up my model, which does not mean the parameters I want to be updated are actually updated.
I repeat: I want the parameters mentioned in the warning to actually be updated, but I don't see what I am missing. |
st207611 | Ok, I made a mistake: I didn't realize that the non-updated weights are actually the weights related to relationship_nodes (line 201 of my code) and come from the last (2nd) update iteration in the loop of the Scene_GCNN model. Since they are not used for the output (line 209), they won't be updated, as they are not even needed. Thank you Stefano for trying to help me; at least I realized that I made a different kind of mistake.
I would close this question.
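For reference, a minimal sketch of the mechanism behind the warning: a variable that never contributes to the loss gets a None gradient from the tape (the variables here are made up):
import tensorflow as tf

w1 = tf.Variable(1.0)
w2 = tf.Variable(1.0)  # never used below

with tf.GradientTape() as tape:
    loss = 3.0 * w1  # w2 is not on the path to the loss

grads = tape.gradient(loss, [w1, w2])
print(grads)  # [<tf.Tensor: 3.0>, None] -- the None entry is what triggers the warning |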
st207612 | I am using a deep neural network for a semantic segmentation task. The training data comprise 17,000 images that have been interpreted manually to generate two classes: MMD and non-MMD.
While I have good confidence in the accuracy of the interpreted MMDs (high accuracy for true positives), there is still a chance that we misclassified some MMDs as non-MMD in the training data (even though it's not very frequent). Further, the ratio of MMD to non-MMD samples is not balanced; only ~20% of the training images are covered by MMDs. As a result, the training step can get biased toward the non-MMD class and show high accuracy for identifying the non-MMDs, which cover 80% of the image. I am currently using the following parameters:
• softmax as the activation function for the last layer
• Adam as the optimizer in Keras
• soft dice as the loss function to account for the imbalanced training data
• precision (tf.keras.metrics.Precision) as the metric to find the optimum model during training.
Obviously, my objective is to focus the training on maximizing the precision for the MMD class, and punish the training for generating false positives specifically for the MMD class. However, the above configuration does not seem to produce the desired results.
My question is, what would you do differently to achieve this objective? all suggestions are welcome! |
st207613 | I suggest taking a look at:
github.com/JunMa11/SegLoss: a collection of loss functions for medical image segmentation
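For instance, one loss from that family, a class-weighted soft Dice, could look roughly like this in Keras (a sketch under stated assumptions: one-hot y_true and softmax y_pred of shape [batch, H, W, num_classes], and illustrative per-class weights; this is not code from the SegLoss repo):
import tensorflow as tf

def weighted_soft_dice(class_weights, eps=1e-6):
    w = tf.constant(class_weights, dtype=tf.float32)
    def loss(y_true, y_pred):
        axes = [0, 1, 2]  # reduce over batch and spatial dims, keep the class dim
        intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
        denom = tf.reduce_sum(y_true + y_pred, axis=axes)
        dice = (2.0 * intersection + eps) / (denom + eps)
        return tf.reduce_sum(w * (1.0 - dice)) / tf.reduce_sum(w)
    return loss

# e.g. model.compile(loss=weighted_soft_dice([1.0, 5.0]), optimizer='adam') to weight the MMD class higher |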
st207614 | Hi,
I have applied several classification methods; unfortunately, the developed models never exceed 62% accuracy.
Here I attached a comparison table of the developed models.
I’m wondering how I can improve the models’ accuracy!?
(accuracy comparison table image omitted) |
st207615 | The first question i suggest to you is:
what are your performances on the training set? |
st207616 | I have split the data into training and testing, and the confusion matrix gives these results; I am not sure if it is the right thing.
from sklearn.tree import DecisionTreeClassifier
dt=DecisionTreeClassifier()
dt.fit(X_train,y_train)
pred_dt_tr=dt.predict(X_train)
pred_dt=dt.predict(X_test)
from sklearn.metrics import confusion_matrix,classification_report,f1_score
print(confusion_matrix(y_test,pred_dt))
print(classification_report(y_test,pred_dt)) |
st207617 | What I meant is: I suppose that the table is from the test set.
So what are the models' performances on the training set?
This could be a useful starting point to understand whether:
You still have a margin to learn with the current data
You have generalization issues or you model is overfitting
Your model capacity is limited
Missing hyperparameter tuning on a validation set
Etc. |
st207618 | I just see the accuracy for training and testing for KNN. The accuracy for training is 0.996 and for testing 0.722 |
st207619 | asiddiq:
print(confusion_matrix(y_test,pred_dt))
print(classification_report(y_test,pred_dt))
Using pred_dt_tr and y_train
This forum is generally about TensorFlow, but you are using sklearn, so I suggest you use the sklearn support channels for sklearn code/projects.
I don’t know your specific learning goal and dataset but in TF you can try to explore:
blog.tensorflow.org
Introducing TensorFlow Decision Forests 4
TensorFlow Decision Forests is a collection of Decision Forest algorithms for classification, regression and ranking tasks, with the flexibility and c |
st207620 | Sorry, I apologize if I posted something not related to TensorFlow.
Thank you for replying to me |
st207621 | It is almost impossible to suggest anything without additional information.
How many classes are you trying to predict?
How many training images do you have?
Does your dataset suffer from data imbalance?
Have you already checked your data? (e.g. if it is labeled correctly) |
st207622 | asiddiq:
Hi,
I have applied several classification methods, unfortunately, the developed models never exceed 62% of accuracy.
here I attached a comparison table of the developed models.
I’m wondering how I can improve the models’ accuracy!?
I'm trying to predict confirmed and suspected cases.
It is numerical data, not images.
How can I check for data balance?
The data is labeled. |
st207623 | It would be great if you could show two graphs, accuracy and loss, as a picture; they can be drawn in one figure. That would answer some of the questions that have come up. |
st207624 | I being a researcher wouldn’t be able to go for researcher on statically base. Can any one explained on module base research. |
st207625 | Hi, new to tfjs; I just tested out the tfjs-node MobileNet pre-trained model. I would like to learn how to train custom models using existing video footage. I came across roboflow.com, which looks like it gives me what I want; however, it only offers formats in TFRecords and TensorFlow Object Detection. Any suggestions on what tools I should use? |
st207626 | What specific frameworks are you looking for? The Object Detection API is a well-developed and appreciated framework among the vision community. It comes with performance benefits and a rich repository of SOTA model implementations, many of which are TFLite-compatible.
But here's a standalone tutorial from keras.io that shows you how to train a RetinaNet from scratch:
keras.io: Object Detection with RetinaNet |
st207627 | Hello there! Thanks for posting your TensorFlow.js question, and welcome to the community. Indeed, you may be interested in this codelab that shows you how to make a custom image classifier via a form of transfer learning using MobileNet as the base: TensorFlow.js Transfer Learning Image Classifier | Google Codelabs
This is how the popular Teachable Machine works behind the scenes: it takes video and retrains a network on top of MobileNet to recognize something new, and all the retraining is done in under 30 seconds in the browser, live, if you have say 50 images of the new class or so.
If you wish to use MP4 files, then you can simply take frames from them, provided they only contain the new object you want to recognize, and use those frames as the training data.
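For the MP4 route, here is a minimal frame-extraction sketch with OpenCV (the file name and the stride of 30 are placeholders; this is not part of the codelab itself):
import os
import cv2  # pip install opencv-python

os.makedirs('frames', exist_ok=True)
cap = cv2.VideoCapture('my_clip.mp4')
frame_id = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_id % 30 == 0:  # keep roughly one frame per second for 30 fps footage
        cv2.imwrite(f'frames/frame_{saved:05d}.jpg', frame)
        saved += 1
    frame_id += 1
cap.release() |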
st207628 | Hello all,
As we all saw at Google I/O, Google has developed LaMDA for open-ended conversations. Apart from this blog, I can't find anything about LaMDA, and I am assuming that Google has not yet made the paper public.
So, does anyone know any place other than this blog where I can find more about LaMDA? |
st207629 | Hi, I'm new to TensorFlow and even Python. I'm interested in using TensorFlow 3D models to recognize things on mobile devices.
Before getting to my problem: can I use a TensorFlow 3D model on a mobile device in the first place? If so, I have a problem.
My problem is that I can't save a model that uses the semantic segmentation model, named "semantic_segmentation_model." I'm struggling to save the model by just adding the code below.
model.fit(
    x=inputs,
    callbacks=[backup_checkpoint_callback, checkpoint_callback],
    steps_per_epoch=FLAGS.num_steps_per_epoch,
    epochs=FLAGS.num_epochs,
    verbose=1 if FLAGS.run_functions_eagerly else 2)
model.close_writer()

# Added this by edom18
model.save("fileName")
This added code is in google-research/tf3d/train.py. I just added the last line after calling the fit method. Then I got an error like the one below.
ValueError: batch_size is unknown at graph construction time.
In call to configurable 'train' (<function train at 0x7f3e476aee60>)
What should I know about that? |
st207630 | Hi folks,
A few months ago, we from PyImageSearch 17 (where I work) took part in a CVPR competition 18. It concerns developing a model for accurate detection of natural scenes in mobiles. Today, I am glad to share that our entry made it to the top teams (good to know that we competed with ByteDance :D).
With respect to model size, efficiency, topline metric, etc. we think our model is decent enough. We have jotted down our solution approach in this report (contains approaches from other teams too): [2105.08819] Fast and Accurate Quantized Camera Scene Detection on Smartphones, Mobile AI 2021 Challenge: Report 10. Our code is in TensorFlow and TensorFlowLite. We might be open-sourcing it to foster further research in the TinyML space.
Happy to address any feedback. |
st207631 | I found HierarchicalCopyAllReduce is much slower than NcclAllReduce, related issues of multi-Gpus training · Issue #971 · google/automl · GitHub 12. Any ideas? |
st80000 | I am building an LSTM net for time-series prediction. I found an example on GitHub and tried to implement it myself.
I have a class to monitor the training process. The training function is as below:
def trainModel(self, epochMax=100):
    for i in range(epochMax):
        self.h = torch.zeros(layerNo, datasetNo, D_H)
        self.c = torch.zeros(layerNo, datasetNo, D_H)
        self.h = self.h.to(device)
        self.c = self.c.to(device)

        def closure():  # this is needed for the LBFGS optimizer
            # ***Why do I need these four lines?
            self.h = self.h.detach()
            self.c = self.c.detach()
            self.h = self.h.requires_grad_()
            self.c = self.c.requires_grad_()
            yPred, h_temp, c_temp = self.model(self.XTrainAll, self.h, self.c)
            self.optimizer.zero_grad()
            self.h, self.c = h_temp, c_temp
            loss = self.lossFn(yPred, self.YTrainAll)
            # print('loss:', loss.item())
            loss.backward()
            return loss

        loss = self.optimizer.step(closure)
As seen in the example code, they do not call detach(). But in my case, if these lines are missing, the error RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. appears.
I have gone through this post but I am still not clear about the difference between my code and the github code.
Also, is there a way to pass h and c into the closure() function without defining them as class variables? Python complains about undefined variables when I define them as just h and c outside of the function. Having global h at the beginning of closure() does not help.
Thanks in advance. |
st80001 | Regarding the h, c variables: if there's an assignment to h anywhere in the function, Python will treat h as a local symbol that shadows h from the outer context.
Assignment to an element of h does not trigger this behavior, so the standard workaround is to wrap the variable from the outer context into a singleton list.
myvar = 5
myvar = [myvar]

def closure():
    myvar[0] = 6

closure()
print(myvar[0])  # => 6 |
st80002 | I was looking at the famous paper MAML (Model Agnostic Meta-Learning) and they say they use Hessian-Vector products. How do they use them:
in mathematics? i.e. how do the Hessian vector products get involved in the update rule?
in the code. How do they update the parameters being learned?
Or is there a nice example of how Hessian Vector products are used in (meta) training?
cross posted:
https://www.quora.com/unanswered/What-is-an-example-of-using-the-Hessian-vector-product-in-learning-using-PyTorch
https://www.reddit.com/r/pytorch/comments/dhcn8c/what_is_an_example_of_using_hessian_vector/
What is an example of using Hessian Vector Product in Learning using Pytorch? |
st80003 | I am learning logistic regression with PyTorch, and to understand it better I am defining a custom CrossEntropyLoss as below:
def softmax(x):
    exp_x = torch.exp(x)
    sum_x = torch.sum(exp_x, dim=1, keepdim=True)
    return exp_x / sum_x

def log_softmax(x):
    return torch.exp(x) - torch.sum(torch.exp(x), dim=1, keepdim=True)

def CrossEntropyLoss(outputs, targets):
    num_examples = targets.shape[0]
    batch_size = outputs.shape[0]
    outputs = log_softmax(outputs)
    outputs = outputs[range(batch_size), targets]
    return -torch.sum(outputs) / num_examples
I also make my own logistic regression (to predict FashionMNIST) as below:
input_dim = 784  # 28x28 FashionMNIST data
output_dim = 10

w_init = np.random.normal(scale=0.05, size=(input_dim, output_dim))
w_init = torch.tensor(w_init, requires_grad=True).float()
b = torch.zeros(output_dim)

def my_model(x):
    bs = x.shape[0]
    return x.reshape(bs, input_dim) @ w_init + b
To validate my custom CrossEntropyLoss, I compared it with nn.CrossEntropyLoss from PyTorch by applying it to FashionMNIST data as below:
criterion = nn.CrossEntropyLoss()

for X, y in trn_fashion_dl:
    outputs = my_model(X)
    my_outputs = softmax(outputs)
    my_ce = CrossEntropyLoss(my_outputs, y)
    pytorch_ce = criterion(outputs, y)
    print(f'my custom cross entropy: {my_ce.item()}\npytorch cross entroopy: {pytorch_ce.item()}')
    break
My question is about the results, my_ce (my cross entropy) vs. pytorch_ce (PyTorch cross entropy), which are different:
my custom cross entropy: 9.956839561462402
pytorch cross entroopy: 2.378990888595581 |
st80005 | @alie There are two mistakes here.
You apply softmax twice - once before calling your custom loss function and inside it as well.
You are not applying log to softmax output. Inside log_softmax():
return torch.log(torch.exp(x) / torch.sum(torch.exp(x), dim=1, keepdim=True)) |
st80006 | @mailcorahul Thanks; after changing the log_softmax() function with yours, the two cross entropy beam closer but still they are not exactly the same. Is this expected or there is mistake somewhere else?
my custom cross entropy: 2.319404125213623
pytorch cross entroopy: 2.6645867824554443 |
st80007 | Those two were the mistakes in the code. Can you post your full code again here? |
st80008 | Thanks, sure here it is:
import numpy as np
import torch
import torchvision
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
import torch.nn as nn

def softmax(x):
    exp_x = torch.exp(x)
    sum_x = torch.sum(exp_x, dim=1, keepdim=True)
    return exp_x / sum_x

def log_softmax(x):
    return x - torch.logsumexp(x, dim=1, keepdim=True)

def CrossEntropyLoss(outputs, targets):
    num_examples = targets.shape[0]
    batch_size = outputs.shape[0]
    outputs = log_softmax(outputs)
    outputs = outputs[range(batch_size), targets]
    return -torch.sum(outputs) / num_examples

def my_model(x):
    bs = x.shape[0]
    return x.reshape(bs, input_dim) @ w_init + b

# FashionMNIST datasets for training/test
trans = transforms.Compose([transforms.ToTensor()])
trn_ds = datasets.FashionMNIST('./training', download=True, transform=trans)
test_ds = datasets.FashionMNIST('./test', train=False, download=True, transform=trans)

# DataLoaders for training/test
trn_dl = DataLoader(trn_ds, batch_size=64, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=64)

# parameters initialization
input_dim = 784  # 28x28 FashionMNIST data
output_dim = 10
w_init = np.random.normal(scale=0.05, size=(input_dim, output_dim))
w_init = torch.tensor(w_init, requires_grad=True).float()
b = torch.zeros(output_dim)

# pytorch CrossEntropyLoss
criterion = nn.CrossEntropyLoss()

for X, y in trn_dl:
    outputs = my_model(X)
    my_outputs = softmax(outputs)
    my_ce = CrossEntropyLoss(my_outputs, y)
    pytorch_ce = criterion(outputs, y)
    print(f'my custom cross entropy: {my_ce.item()}\npytorch cross entroopy: {pytorch_ce.item()}')
    break |
st80009 | @alie I am finding it difficult to understand the code because of its formatting, but I can see you’re applying softmax twice. You can either remove the line my_outputs = softmax(outputs) or replace the 3rd line in CrossEntropyLoss() to outputs = torch.log(outputs). |
st80010 | Thanks again, removing the softmax solves the problems; no both ce returns the exact same values
my_outputs = softmax(outputs) |
st80011 | Hi all,
Whenever I try to move my tensors or model to the GPU using either the .cuda() or .to('cuda') method, the kernel just freezes and has to be terminated to be used again.
I’ve looked through several other related issues and they are either extremely old (circa 2017) or their solutions were that they had an incompatible version of cuda running - I think none were very useful.
Here is my environment details:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 207... Off | 00000000:01:00.0 Off | N/A |
| N/A 47C P8 5W / N/A | 303MiB / 7982MiB | 5% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1768 G /usr/lib/xorg/Xorg 135MiB |
| 0 2036 G /usr/bin/gnome-shell 116MiB |
| 0 2589 G ...uest-channel-token=11750413998548151078 49MiB |
+-----------------------------------------------------------------------------+
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
And I just followed the basic installation instructions on the website:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
Any ideas? Thanks! |
st80013 | There was an issue with cuda 10.1 minor versions between anaconda’s cuda and PyTorch’s cuda differing causing excessive JIT compiles.
It is fixed and reinstalling PyTorch helps.
Best regards
Thomas
github.com/pytorch/pytorch: "Very Slow Moving Tensor to CUDA device (CUDA 10.1 with PyTorch 1.3)" (issue opened and closed Oct 12, 2019): "Moving tensors to cuda devices is super slow when using pytorch 1.3 and CUDA 10.1." |
st80014 | Hi everyone,
I am kind of a newbie to PyTorch and ML. I have been trying to check for node correlations using the svcca module from Google here.
The examples work well, and if I use random tensors converted into np arrays I'm able to get the svcca values.
I then tried to get the activations from a layer of an LSTM net by adding a self.activations variable to the model, which saves the activations of the LSTM layer output.
def forward(self, text):
    embedded = self.embedding(text)
    output, hidden = self.rnn(embedded)
    self.activations = output
    assert torch.equal(output[-1,:,:], hidden[0].squeeze(0))
    out = self.linear(hidden[0].squeeze(0))
    return out
Then during evaluation I save the activations to a global variable:
model.eval()
with torch.no_grad():
    for batch in iterator:
        predictions = model(batch.text).squeeze(1)
        activations.append(model.activations)
Then, just to test it, I take one of the activations and transpose it as required by the svcca module, and check the cca similarity with itself which should give a coefficient of 1
a = activations[1].cpu().detach().numpy()
a = a[0,:,:]
a = np.transpose(a, (1,0))
results = cca_core.get_cca_similarity(a, a, verbose=True)
What I get is this error:
File "<stdin>", line 1, in <module>
File "/home/main/Projects/svcca/cca_core.py", line 295, in get_cca_similarity
verbose=verbose)
File "/home/main/Projects/svcca/cca_core.py", line 162, in compute_ccas
u, s, v = np.linalg.svd(arr)
File "<__array_function__ internals>", line 6, in svd
File "/home/main/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1636, in svd
u, s, vh = gufunc(a, signature=signature, extobj=extobj)
ValueError: On entry to DLASCL parameter number 4 had an illegal value
I checked for NaNs and Infs and there seem to be none.
But if I make random matrices with the same shape (65, 128), I can get the cca similarity just fine. Does anyone have any idea where I might be going wrong? Is there something wrong in how I get the activations from torch tensors to np arrays?
Thanks a lot. |
st80015 | I am working with multiple files, and multiple training samples in each file. I will use ConcatDataset as described here:
DataLoaders - Multiple files, and multiple rows per column with lazy evaluation
I created one dataset for each file, and if there’s only 3000 files then it isn’t that much to hold it inside an array (object that has a reference to).
If you wrote your DataSet with linecache, then it won’t read each file into memory.
At least this is my observation after reading more files than my computer’s memory can support.
I need to have negative samples in addition to my true samples, and I need my negative samples to be randomly selected from all the training data files. So, I am wondering, would the returned batch samples just be a random consecutive chuck from a random single file, or would be batch span across multiple random indexes across all the datafiles?
In case more details are needed about what I am trying to do exactly: I am trying to train on a TPU with PyTorch XLA.
Normally for negative samples, I would just use a 2nd DataSet and DataLoader, however, I am trying to train over TPUs with Pytorch XLA (alpha was just released a few days ago https://github.com/pytorch/xla 2 ), and to do that I need to send my DataLoader to a torch_xla.distributed.data_parallel.DataParallel object, like model_parallel(train_loop_fn, train_loader) which can be seen in these example notebooks
github.com: pytorch/xla/blob/master/contrib/colab/resnet18-training-xrt-1-15.ipynb
github.com: pytorch/xla/blob/master/contrib/colab/mnist-training-xrt-1-15.ipynb
So, I am now limited to a single DataLoader, which will need to handle both the true samples, and negative samples that need to be randomly selected from all my files. |
st80016 | Currently I have index of shape batch_size * candidate_size and desired_candidate of shape batch_size * top_5.
For example, if batch_size == 64 and candidate_size == 15:
>> index.size()
torch.Size([64, 15])
>> desired_candidate.size()
torch.Size([64, 5])
>> index[0]
tensor([104, 171, 182, 3, 56, 178, 6, 6, 4, 21, 30, 182, 27, 39, 56], device='cuda:0', dtype=torch.int32)
>> desired_candidate[0]
tensor([171, 4, 182, 102, 61], device='cuda:0')
I'd like to mask out the entries of index that also appear in desired_candidate, replacing them with -1, so the desired result is
>> index_[0]
tensor([104, -1, -1, 3, 56, 178, 6, 6, -1, 21, 30, -1, 27, 39, 56], device='cuda:0', dtype=torch.int32)
for each batch. But I can’t find out how to do this.
If someone would know how to implement this, I’d appreciate it.
Thanks. |
st80018 | Hi,
It might depend a bit on the memory limitations you have. But the following should work (the sizes are very small for printing purposes):
import torch
b = 2
c = 3
index = torch.rand(b, c).mul(5).long()
desired_candidate = torch.rand(b, 5).mul(5).long()
print(index)
print(desired_candidate)
expanded_size = (b, c, 5)
expanded_index = index.unsqueeze(2).expand(expanded_size)
expanded_desired_candidate = desired_candidate.unsqueeze(1).expand(expanded_size)
mask = expanded_index.eq(expanded_desired_candidate).any(-1)
index[mask] = -1
print(index)
Let me know if it fits your needs ! |
st80019 | Hi guys,
x = torch.tensor([[1,2], [3, 4], [5, 6]])
(x<2).numpy().sum()
I’m able to do the above but is there a way to do it just using torch?
Thanks. |
st80020 | The sum() method exists in torch. Simply removing the .numpy() will make it work.
Do you see an error when you do that?
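For example:
x = torch.tensor([[1, 2], [3, 4], [5, 6]])
print((x < 2).sum())  # tensor(1) |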
st80021 | import torch
x = torch.arange(25).view((5, 5))
y = torch.tensor([[3, 4, 2, 2], [0, 4, 1, 1], [2, 4, 3, 1]])  # shape: (3, 4)

result = torch.zeros(3, 4)
for i in range(1, 4):
    current_y = y[:, i]  # shape: (3,)
    prev_y = y[:, i - 1]  # shape: (3,)
    result[:, i] = x[prev_y, current_y]

print(result)
Here is the running result:
tensor([[ 0., 19., 22., 12.],
        [ 0.,  4., 21.,  6.],
        [ 0., 14., 23., 16.]])
I think this procedure can be executed in parallel, but I don’t know how. Can anyone help me? |
st80023 | This should work:
result2 = torch.zeros(3, 4)
result2[:, 1:] = x[y[:, :-1], y[:, 1:]]
print((result==result2).all())
> tensor(True) |
st80024 | Hi,
I need to use an old version of pytorch (0.1.12) for running an author’s implementation. Can I install it from source with CUDA 10.1?
I cloned the pytorch repository, and checked out v0.1.12. I then ran the following
conda install numpy pyyaml mkl setuptools cmake gcc cffi
where I got an error saying that package ‘gcc’ wasn’t found.
I tried ignoring this and running the following
conda install numpy pyyaml mkl setuptools cmake cffi
conda install -c pytorch magma-cuda100
export CMAKE_PREFIX_PATH=~/anaconda3/
python setup.py install
But this installs pytorch without CUDA/CUDnn. What can I do to install pytorch v0.1.12 with my current version of CUDA (which is on a server where I do not have root access)?
Any help would be appreciated. Thanks. |
st80025 | Hi,
Have you tried installing the cuda packages in conda? That should make sure that it is detected during installation from source. |
st80026 | Hi, thanks for the response.
I later just tried to compile it with an older version of CUDA (7.5). Realized I needed to install just the CUDA toolkit at a non-root location and it works with the Nvidia driver I have (418.67). Haven’t tried it with cuda 10.1 toolkit yet, but for now, I can just run with the cuda7.5 |
st80027 | I have a network that has two independent neurons at the output.
Each neuron has a tanh activation function.
I compute an error for the first neuron and for the second.
If I call backward() twice, I get an error.
How do I do this correctly?
As the loss function, I use L1Loss. |
st80028 | I also cannot figure out how to choose the right loss function.
I have two neurons. They work independently and give values from -1 to 1.
For example, one neuron gives 0.3, and the correct answer is -0.4. The network error is 0.7. I have to reduce the gradient. But if I call the L1Loss function, then I will get 0.3 - (-0.4) = 0.7. In this case, my gradients will be increased. How do I tell the network to reduce the gradient?
Although maybe I'm wrong and the network will do everything right…
The first question remains: how do I calculate the gradients for two neurons (loss1.backward(), loss2.backward())? |
st80029 | you can do for example (loss1 + loss2).backward() or use torch.autograd.grad function |
st80030 | Every time loss.backward() is called, the previous computational graph is released.
Thus, if you want to use the graph again, just call loss1.backward(retain_graph=True) to prevent the graph from being released.
And remember to reset the gradients with optimizer.zero_grad() before calling loss.backward().
loss.backward() will compute the gradients,
and optimizer.step() will apply the gradients and update the tensors.
Since you have two losses, you might need to be more careful about when to reset the grad and update.
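To make both suggestions concrete, here is a minimal self-contained sketch (the losses are made up): summing the losses lets a single backward pass compute the correct gradient for each parameter, so retain_graph is not needed.
import torch

w = torch.randn(2, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss1 = (w[0] - 0.2) ** 2  # error of the first output neuron
loss2 = (w[1] + 0.5) ** 2  # error of the second output neuron

opt.zero_grad()
(loss1 + loss2).backward()  # one backward pass; each weight gets its own gradient
opt.step() |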
st80031 | In the example you stated, after the forward pass you get the error 0.7. When you do a backward pass, the network computes the gradients such that your error is reduced. I am not sure why you say the gradients will increase in your example.
Also like smth said you don’t need to do backward twice, you can basically add those losses and directly compute backward on the total loss. |
st80032 | Assume the network output 0.7 on the first neuron, the correct answer is 0.2, so the error is 0.5. The second neuron produced 0.1, the correct answer is 0.5, so the error is -0.4.
If you do (loss1 + loss2).backward() -> (0.5 - 0.4 = 0.1).backward(), how does the network know that for the first neuron it needs to decrease by 0.5, and for the second to increase by 0.4? |
st80033 | At the moment I did it like this:
optimizer.zero_grad()
loss1.backward(retain_graph=True)
loss2.backward()
optimizer.step()
Or maybe this would be the right way?
optimizer.zero_grad()
loss1.backward(retain_graph=True)
optimizer.step()
optimizer.zero_grad()
loss2.backward()
optimizer.step() |
st80034 | You sum the absolute values of the errors, so your total loss would be 0.5 + 0.4. To decrease the total loss it needs to decrease the individual losses; thus the gradients are updated such that both losses decrease simultaneously.
If the above is the scenario of your problem, then you can add the absolute values of the losses and do backward at once. |
st80035 | The weights for the first neuron need to be reduced by 0.5, and the weights for the second one increased by 0.4. If the errors are 0.5 and 0.4, does backward() do the right thing? |
st80036 | Yes. The error would be zero only if the first decreases and the second increases. And the model's goal is always to achieve zero loss. |
st80037 | I think you do not understand me correctly.
If the output were one neuron and the error was 0.5, then backward() would do everything correctly.
But I need backward() to change each neuron correctly: first reduce the weights for the first neuron while the second neuron does not change, then change the weights for the second neuron. Is that possible? |
st80038 | I cannot understand: are the errors 0.4 and -0.4 the same?
Indeed, in the first case the gradient needs to be reduced, and in the second it should be increased?
But if I do
abs(loss)
then the network will not change the gradient correctly… |
st80039 | Bug
I am trying to use DCGAN with DCGAN TUTORIAL: https://github.com/pytorch/tutorials/blob/master/beginner_source/dcgan_faces_tutorial.py
But an error occurs: Process finished with exit code -1073741819 (0xC0000005) while loss.backward() is running.
To Reproduce
My problem is I can only train it on CPU. It raises an error when I try to train it on GPU. And my CUDA is available.
Here is the error:
Process finished with exit code -1073741819 (0xC0000005)
the code is here: https://github.com/IkeYang/AndrewNg_ML_practise/blob/master/GAN
Environment
My environment is Python 3.6.9, Windows 10, torch 1.2.0, CUDA 9.2.
Please help solve this, thanks a lot. |
st80040 | Duplicate of Please Help me solve this error Process finished with exit code -1073741819 (0xC0000005) 120 |
st80041 | def forward(self, input_seq, encoder_outputs, hidden=None):
    outputs, hidden = self.gru(input_seq, hidden)
    outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]
    attn_weights = self.attn(outputs, encoder_outputs)
    context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
    print(context.size())
    context = context.squeeze(1)
    new_outputs = outputs
    new_outputs = new_outputs.squeeze(0)
    print(outputs.size())
    print(new_outputs.size())
    print(context.size())
    concat_input = torch.cat((new_outputs, context), 1)
    concat_output = torch.tanh(self.concat(concat_input))
    outputs = self.out(concat_output)
    return outputs, hidden
The output:
torch.Size([5, 1, 50])
torch.Size([5, 5, 50])
torch.Size([5, 5, 50])
torch.Size([5, 50])
the context tensor is squeezed but the output tensor is not squeezed (size of new_output should be 5,50 and not 5,5,50). Why is this happening? |
st80043 | Hi,
Squeeze only works if the size of a given dimension is 1.
If you want to remove a dimension of size > 1, then you need to use a function to do that reduction, like sum or max or mean.
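A quick illustration of the difference:
import torch

t = torch.zeros(5, 5, 50)
print(t.squeeze(0).shape)   # torch.Size([5, 5, 50]) -- dim 0 has size 5, so nothing happens
print(t.mean(dim=0).shape)  # torch.Size([5, 50])    -- a reduction actually removes the dim |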
st80044 | Hi, I am trying to use the same code as mentioned in one of the posts on this discussion forum for visualizing feature maps, but I keep getting this error: "hook_result = hook(self, input, result)
TypeError: 'NoneType' object is not callable"
Below is the code:
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
        return hook

model.dc.register_forward_hook(get_activation('dc'))

dataiter = iter(trainloader)
images, labels = dataiter.next()
imshow(images[0], labels[0])
print(images[0].unsqueeze_(0).shape)
output = model(images[0].unsqueeze_(0))
print(output.shape)

act = activation['dc'].squeeze()
fig, ax = plt.subplots(act.size(0))
for idx in range(act.size(0)):
    ax[idx].imshow(act[idx])
Model
class NormalCNN(nn.Module):
    def __init__(self, args, classes):
        super(NormalCNN, self).__init__()
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.args = args
        self.class_num = 10
        self.classes = classes

        resnet18 = models.resnet18(pretrained=True)
        self.backbone = nn.Sequential(*list(resnet18.children())[0:5])
        self.dc = nn.Conv2d(64, 1, kernel_size=1, stride=1, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 10))
    ### ----------------------------------------------

    def forward(self, imgs):
        v = self.backbone(imgs)
        v_out = self.dc(v)
        v_out = v_out / 8
        out = nn.Upsample(size=(8, 8), mode='bilinear')(v_out)
        out = out.view(-1, 1 * 8 * 8)
        cls_scores = self.mlp(out)
        return cls_scores  # Dim: [batch_size, 10] |
st80045 | You have a small indentation error in your hook method:
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook
The return statement should be called from get_activation. |
st80046 | Hi,
In my implementation, I have the following piece:
dec = decoder_output.clone().squeeze()
for i in self.state_history:
    dec[i] = -100000
topv, topi = dec.topk(1)
action = topi.squeeze().detach()
decoder_output is the output of F.log_softmax.
My question is: is dec recorded for automatic differentiation, and does it affect the differentiation of decoder_output and the rest of the graph? What is the best way to prevent it from being tracked by automatic differentiation?
Also, is accessing dec[i] inside a for loop possible if dec is a torch.cuda tensor? |
st80047 | dl_noob:
is dec recorded for automatic differentiation and does it affect the differentiation for decoder_output and rest of the graph?
The assignment should be recorded and the inplace operation could yield an error.
You could detach dec from decoder_output, if you don’t want to track this assignment.
dl_noob:
Also, is accessing dec[i] inside a for loop possible if dec is a torch.cuda tensor?
Yes, that's possible.
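Applied to the snippet above, that would look roughly like this (a sketch: it simply cuts dec off from the graph before the in-place writes):
dec = decoder_output.detach().clone().squeeze()  # no longer tracked by autograd
for i in self.state_history:
    dec[i] = -100000  # in-place writes are now safe and won't affect backprop
topv, topi = dec.topk(1)
action = topi.squeeze()  # the extra detach() is no longer needed here |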
st80048 | in nn.Transformer the method “generate_square_subsequent_mask” outputs a square matrix with the first column with all 0, second column with -inf and all 0, and so on.
if we are working column wise (ie the input is SEQ_LEN, BATCH_SIZE, E_DIM) shouldn’t it be transposed? |
st80049 | import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# Define network
class Network(nn.Module):
    def __init__(self, input_dim):
        super(Network, self).__init__()
        self.first_layer = nn.Linear(input_dim, 1)

    def forward(self, x):
        out = self.first_layer(x)
        out = F.relu(out)
        return out

# Check for cuda device
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)

# Create dataset
x = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
# Create tensor from numpy array
x = torch.from_numpy(x)
y = np.array([[1], [0], [0], [0]], dtype=float)
y = torch.from_numpy(y)

# Initialize network
net = Network(2)
net.to(device)

# Setup criterion and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)

# Print network parameters
for parameter in net.parameters():
    print(parameter)

for i in range(100):
    print('Epoch:[{}\{}]'.format(i+1, 100))
    for input, target in zip(x, y):
        # Clear gradients
        optimizer.zero_grad()
        # Send input and expected output to the device
        input = input.to(device)
        target = target.to(device)
        output = net(input)
        # Loss
        loss = criterion(output, target)
        # Backpropagate
        loss.backward()
        # Update weights
        optimizer.step()
I have a problem with this simple network. I tried using a DataLoader and TensorDataset but still get the same error as below.
Traceback (most recent call last):
File "help.py", line 60, in <module>
output = net(input)
File "/home/hyperscypion/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "help.py", line 16, in forward
out = self.first_layer(x)
File "/home/hyperscypion/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/hyperscypion/.local/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/hyperscypion/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 1371, in linear
output = input.matmul(weight.t())
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' |
st80051 | Your inputs are float64 (Double) tensors, but the network expects a float32 tensor. You should convert the tensor to float with .float() before putting it into the network. Also, you haven't really given us enough information about your setup, but this is probably because you are creating your inputs with numpy, which uses float64 by default. You could use dtype=np.float32 or astype(np.float32) if you are using numpy to create your inputs.
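Concretely, either of these would fix the mismatch (a minimal sketch based on the code above):
# cast at creation time...
x = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=np.float32)
x = torch.from_numpy(x)
# ...or cast right before the forward pass:
output = net(input.float())
loss = criterion(output, target.float()) |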
st80052 | Wow I didn’t think about that! Putting .float() after input and target solve the problem, thanks. |
st80053 | We’ve been working on a tool tsanley to enable finding subtle shape errors in your deep learning code quickly and cheaply. The key idea is to label tensor variables with their expected shapes (e.g., x : 'b,t,d' = ...) using optional types in Python 3.x and let tsanley perform shape validity checks at runtime automatically. Works with small and big tensor programs.
repository: https://github.com/ofnote/tsanley 26
examples: models (Resnet, GraphNNs, Transformers)
Quick example:
def foo(x):
    x: 'b,t,d'  # expected shape of x is (B, T, D)
    y: 'b,d' = x.mean(dim=0) * 2  # error!
    z: 'b,d' = x.mean(dim=1)  # ok
    return y, z
Function foo contains tensor variables labeled with their named shapes using a shorthand notation. It has a subtle shape error in the assignment to y: we expect the shape of y to be (B,D), however mean got rid of the first, and not the second, dimension. pytorch won’t flag this as an error: instead, we will get a weird shape inconsistency error somewhere downstream.
tsanley finds such unexpected bugs quickly at runtime:
Update at line 37: actual shape of y = t,d
>> FAILED shape check at line 37
expected: (b:10, d:1024), actual: (100, 1024)
Update at line 38: actual shape of z = b,d
>> shape check succeeded at line 38
Writing these named shape annotations manually can also get tedious. tsanley can auto-annotate the tensor variables in your (or someone else’s) code, if the code is executable. This is especially useful when trying to dig deep into or adapt an existing code / library for your project.
The tool builds upon the tsalib 1 library, which introduced a shorthand notation for labeling tensor variables with their named shapes, irrespective of the backend tensor library used.
We would love feedback on tsanley 26 and hope it is useful for your coding/debugging workflow. |
st80054 | How can I do a random crop using the functional API?
https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.RandomCrop |
st80055 | Have a look at the implementation to get a good idea of how the random cropping is applied internally.
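A minimal sketch of doing it by hand with the functional API (this mirrors what RandomCrop.get_params does internally; img is assumed to be a PIL image):
import random
import torchvision.transforms.functional as TF

def my_random_crop(img, out_h, out_w):
    w, h = img.size  # PIL images report (width, height)
    top = random.randint(0, h - out_h)
    left = random.randint(0, w - out_w)
    return TF.crop(img, top, left, out_h, out_w) |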
st80056 | I am taking MNIST data and performing some processing on it.
Instead of doing this processing every time the image is loaded, I want to just save it as a new dataset so that I can just directly read it the next time.
What is the proper way of saving a dataset? I can’t seem to find any examples. |
st80057 | Your approach to store the data as tensors seems like a good idea!
If you save the data as tensors and load them in your next run, you could pass them into a TensorDataset. |
st80058 | How about using pickle library?
https://docs.python.org/3/library/pickle.html 566 |
st80059 | Thanks a lot, can you share some example?
What should be done after calling:
torch.utils.data.TensorDataset(data)
Out[14]: <torch.utils.data.dataset.TensorDataset at 0x1d6c4522ef0> |
st80060 | Once you’ve loaded the tensors and created a TensorDataset, you could pass it to a DataLoader and start the training. Have a look at the Data loading tutorial 1.1k for more information. |
st80061 | You could just load the tensors again and create the Dataset in the same way in another script. |
st80062 | I encountered a strange behaviour that I don't really understand. When I want to normalize a Variable, I run into a zero division error during the backward pass when the standard deviation is zero.
For example:
a = Variable(torch.ones(1), requires_grad=True)
b = (a - a.mean())/(a.std() + 1e-4)
b.backward()
print(a.grad) # gives nan
I tracked down this problem and it even occurs if I just divide by the std with adding of some high epsilon
a = Variable(torch.ones(1), requires_grad=True)
b = 1/(a.std() + 1)
b.backward()
print(a.grad) # gives nan
But the epsilon should actually prevent the division by zero of the derivative right?
So what is happening here? And how can I normalize without running into this error? |
st80063 | Solved by SimonW in post #2
Well, sqrt(x)'s derivative at x=0 is undefined. |
st80064 | sorry, could you share a link where I can read about that. Since, I thought we can find a derivative of square root. |
st80065 | The sqrt(x) is not defined for values smaller or equal zero (at least for real numbered values). Doing this operation with such values results in nan. Therefore the derivative does only exist for x > 0. |
st80066 | Hi, how could you get rid of this issue? I need to use std, is there any solution to avoid nan in the backward pass? |
st80067 | Hi everyone,
We aim to measure the execution time in GPU mode. We take MNIST as an example. The code is as follows.
for epoch in range(1, args.epochs + 1):
    # start time
    torch.cuda.synchronize()
    since = int(round(time.time()*1000))
    train(args, model, device, train_loader, optimizer, epoch)
    torch.cuda.synchronize()
    time_elapsed = int(round(time.time()*1000)) - since
    print('training time elapsed {}ms'.format(time_elapsed))

    # start time
    torch.cuda.synchronize()
    tsince = int(round(time.time()*1000))
    test(args, model, device, test_loader)
    torch.cuda.synchronize()
    ttime_elapsed = int(round(time.time()*1000)) - tsince
    print('test time elapsed {}ms'.format(ttime_elapsed))
The result is as follows.
training time elapsed 13325ms
testing time elapsed 2115ms
However, we find that there is no difference between training time and inference time when we calculate the average time per sample (total time / num_samples), that is, 13325/60000 vs. 2115/10000. We are confused by this. |