st98068
Solved by ptrblck in post #2 You could use forward hooks to get the desired activation. Here is a small example for a dummy network.
st98069
You could use forward hooks to get the desired activation. Here is a small example for a dummy network.
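A minimal sketch of that pattern (the model, layer choice and key name here are just placeholders for illustration):

import torch
import torch.nn as nn

# Dummy network whose intermediate activation we want to capture
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 5),
)

activations = {}

def get_activation(name):
    # Returns a hook that stores the module's output under `name`
    def hook(module, input, output):
        activations[name] = output.detach()
    return hook

# Register the hook on the layer of interest (here the first Linear)
handle = model[0].register_forward_hook(get_activation('fc1'))

x = torch.randn(4, 10)
out = model(x)
print(activations['fc1'].shape)  # torch.Size([4, 20])

handle.remove()  # remove the hook once it is no longer needed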
st98070
I have 2 PyTorch modules like this:

class Module1(nn.Module):
    def __init__(self, opt):
        super(Module1, self).__init__()
        self.num_features = opt.num_features
        self.output_size = opt.output_size
        self.w_t = torch.nn.Parameter(data=torch.Tensor(self.num_features, self.output_size), requires_grad=True)
        self.w_t.data.uniform_(-1, 1)
        self.w_r = torch.nn.Parameter(data=torch.Tensor(self.num_features, self.output_size), requires_grad=True)
        self.w_r.data.uniform_(-1, 1)
        self.w_l = torch.nn.Parameter(data=torch.Tensor(self.num_features, self.output_size), requires_grad=True)
        self.w_l.data.uniform_(-1, 1)
        self.b_conv = torch.nn.Parameter(data=torch.Tensor(self.output_size), requires_grad=True)
        self.b_conv.data.uniform_(-1, 1)

    def forward(self, param_1, param_2):
        return some_function(param_1, param_2, self.w_t, self.w_r, self.w_l, self.b_conv)
    .........

class Module2(nn.Module):
    def __init__(self, opt):
        super(Module2, self).__init__()
        self.module_1 = Module1(opt)

    def forward(self, param_1, param_2):
        result = self.module_1(param_1, param_2)
        return result

x_1 = torch.randn(N, D_in)
x_2 = torch.randn(N, D_in)
y = torch.randn(N, D_out)
module_2 = Module2(opt)
learning_rate = 1e-4
optimizer = torch.optim.Adam(module_2.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = module_2(x_1, x_2)
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this case, Module1 is a sub-module of Module2, and Module2 is the main module that is called in the main training process. Due to my specific task, I need to define 4 weights w_t, w_r, w_l and b_conv manually in Module1. When I check whether the gradient affects any of these 4 weights by printing self.w_t in the forward pass of Module1, I found that loss.backward() does not seem to update them. So my question is: how can I backpropagate the loss correctly in this case? If loss.backward() is fine here, I guess I need to change the place where I define these parameters; I'm not sure what good practice is.
st98071
Hi, I think the problem is that the model that you give to your optimizer is not the one you use in your training loop. So even though the gradients are computed, the parameters are never updated.
st98072
Hi, Right, that's exactly the problem that I'm suspecting. Do you have any suggestions to solve this problem?
st98073
Change

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
module_2 = Module2(opt)

to

module_2 = Module2(opt)
optimizer = torch.optim.Adam(module_2.parameters(), lr=learning_rate)
st98074
Ah, I made a quick example to show what I did and made a mistake in it; definitely the optimizer should receive module_2.parameters() in the real code. I also fixed the code in the original example.
st98075
I’m going to reimplement the following paper: http://keg.cs.tsinghua.edu.cn/jietang/publications/CIKM17-Ding-et-al-BayDNN-Friend-Recommendation.pdf The architecture looks like this: [architecture figure omitted] I'm looking for any available starting-point source code in PyTorch. Please let me know if you know of something similar. Thanks.
st98076
My data is:

x => [1,0,0,0,0,0,1,0,0,2…]
y => [1,0]

My model is like below:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = nn.Linear(input_len, 200)
        self.l2 = nn.Linear(200, 200)
        self.l3 = nn.Linear(200, 2)

    def forward(self, x):
        x = F.relu(self.l1(x))
        x = F.relu(self.l2(x))
        return self.l3(x)

model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001)

and I try to test my model like this:

output = model(X_train[0])
print(output, Y_train[0])
loss = criterion(output, Y_train[0])

but there is an error. Output:

tensor([0.5214, 0.4851], grad_fn=<SigmoidBackward>) tensor([1., 0.], grad_fn=<SelectBackward>)
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

How can I handle this error?
st98077
Hi, All models and loss functions assume that the first dimension is the batch size. So the first input to cross entropy should be batch_size x nb_classes and the second batch_size (and contain values between 0 and nb_classes-1).
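As a small illustration of the expected shapes (dummy values, not the poster's actual data), this would mean adding a batch dimension to the input and using class-index targets rather than one-hot vectors:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch_size, nb_classes = 4, 2
logits = torch.randn(batch_size, nb_classes)   # model output: batch_size x nb_classes
targets = torch.tensor([1, 0, 0, 1])           # shape: batch_size, values in [0, nb_classes-1]

loss = criterion(logits, targets)
print(loss.item())

For a single sample, one would likely need X_train[0].unsqueeze(0) to get the leading batch dimension.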
st98078
Hi, I was wondering if there are any problems with using different GPUs with DataParallel, for example a 1080 Ti and a Titan Xp? Or does it just work at the rate of the slowest GPU?
st98079
It will split the work equally across all GPUs, so yes, it will effectively be limited to the rate of the slower one, and you won't be able to use more memory than the smaller card has. It should still be almost twice as fast as using only one of them in this case, given that the performance of the two cards is similar.
st98080
Consider a nxm matrix A and a n-dimensional vector x whose entry is an integer from 0 to m-1. I’d like to mask A in a way such that A[i][j] is masked iff j > x[i]. I’m seeking for a method that utilizes GPU and PyTorch. I don’t think torch.gather is applicable here.
st98081
Solved by tom in post #2 How about mask = (A < torch.arange(0, A.size(1), device=A.device, dtype=A.dtype).unsqueeze(0)) masked_A = torch.where(mask, A, torch.zeros(1,1, dtype=A.dtype, device=A.device)) where the arange will give you a vector of j and unsqueezing makes is broadcasteable to As shape, the comparison gives …
st98082
How about

mask = (A < torch.arange(0, A.size(1), device=A.device, dtype=A.dtype).unsqueeze(0))
masked_A = torch.where(mask, A, torch.zeros(1, 1, dtype=A.dtype, device=A.device))

where the arange will give you a vector of j, unsqueezing makes it broadcastable to A's shape, the comparison gives a matrix of the same shape as A with the desired mask, and torch.where selects the masked inputs and zero otherwise. The reason to use where instead of mask.float() * A is to also mask away NaNs. Best regards Thomas [Edit: Depending on whether masked means "stays if the condition is set" or "is set to zero", you might use <= in place of <.]
st98083
Thank you for your answer. My question may not have been clear, so I’d like to clarify that. I said about the vector x “entry is an integer from 0 to m-1,” but that doesn’t necessarily mean x[i]=i. I meant that each entry of x can be any integer from 0 to m-1 (multiple entries may have the same value). Since your answer is not dependent on such x, I believe my question was misleading. I’m sorry.
st98084
The arange produces the j; it doesn't depend on i, but is broadcast along the first dimension.
st98085
Oh, I think I understand what you meant. I guess you mistakenly put A instead of x in the first line, and I believe it should be like the following: mask = (x < torch.arange(0, A.size(1), device=A.device, dtype=A.dtype).unsqueeze(0)) Anyway, that solved my problem. Thank you very much!
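Putting the corrected version together, a minimal sketch might look like this (interpreting "masked" as "set to zero"; the sizes are arbitrary):

import torch

n, m = 4, 6
A = torch.randn(n, m)
x = torch.randint(0, m, (n,))   # per-row thresholds, values in [0, m-1]

# row vector of column indices j, broadcast against x
j = torch.arange(m, device=A.device).unsqueeze(0)    # shape (1, m)
mask = j > x.unsqueeze(1)                            # True where j > x[i], shape (n, m)

# zero out the masked entries (also masks away possible NaNs)
masked_A = torch.where(mask, torch.zeros(1, 1, dtype=A.dtype, device=A.device), A)
print(masked_A)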
st98086
I find that the resnet18 ONNX model exported from PyTorch differs between 0.4.0 and 0.4.1. I use https://github.com/lutzroeder/netron to visualize the resnet18.onnx model. [screenshots of the 0.4.0 and 0.4.1 graphs omitted] The last few layers after average_pooling are different. I am not sure what the problem is. Thanks.
st98087
I’m quite concerned about how to free GPU memory when OOM error occurs. It’s quite easy for Theano, but I don’t know how for Pytorch. This is a quite serious problem because when OOM occurs in deployment environment, we can’t kill the process and start it again.
st98088
You could try to use the method FairSeq is using. Depending on the OOM error, you could empty the cache or just skip the batch, if it's too large. Would that work for you?
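A minimal sketch of that kind of pattern (skip the batch on OOM and free the cache; the tiny model, optimizer and loader below are just placeholders, and a CUDA device is assumed):

import torch
import torch.nn as nn

# Dummy setup just to make the sketch self-contained
model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(4)]

for data, target in loader:
    data, target = data.cuda(), target.cuda()
    try:
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()
    except RuntimeError as e:
        if 'out of memory' in str(e):
            print('| WARNING: ran out of memory, skipping batch')
            optimizer.zero_grad()       # drop references to the partial graph
            torch.cuda.empty_cache()    # release cached blocks back to the allocator
        else:
            raise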
st98089
Hey all. I’m trying to create a weighted sampler to do balanced sampling on my training set, and I created a sampler based off of the response here (Is there a better way to split data and deal with an unbalanced dataset? 5). Therefore, my code to generate a weighted sampler is very similar: def get_weighted_sampler(dataset): sampler = None # Create weight array for each training sample sorted_label_counts = dataset.label_counts.sort_index() label_weights = sum(sorted_label_counts.iloc[:]) / np.array(sorted_label_counts.iloc[:]) sampling_weights = [] for i in range(len(dataset)): print(i) _, image_label = camera_catalogue_training[i] sampling_weights.append(label_weights[image_label]) sampler = torch.utils.data.sampler.WeightedRandomSampler(sampling_weights , len(sampling_weights)) print(len(sampling_weights)) return sampler I then tried to use this sampler with a DataLoader as follows: training_sampler = get_weighted_sampler(camera_catalogue_training) training_loader = torch.utils.data.DataLoader(camera_catalogue_training, batch_size=8, shuffle=True) validation_loader = torch.utils.data.DataLoader(camera_catalogue_validation, batch_size=8, shuffle=True) test_loader = torch.utils.data.DataLoader(camera_catalogue_test, batch_size=8, shuffle=True) I didn’t get any errors with initializing the DataLoader itself, but when I try to iterate over a batch of the DataLoader, I get a “TypeError: len() of unsized object” error. I believe this has to do specifically with this weighted sampler I’m trying to work with, because when I remove sampler and just use a normal DataLoader, I’m able to iterate and examine the contents of a batch perfectly fine. Any ideas?
st98090
I’m not sure you can pass a list as the weights. Could you create a tensor from sampling_weights and try it again? I've created dummy code here.
st98091
Thanks for the reply! So I tried converting sampling_weights to a tensor as follows: sampling_weights = torch.FloatTensor(sampling_weights) This got me the same error as before. I also tried doing sampling_weights = torch.from_numpy(sampling_weights) and I get a different error: TypeError: expected np.ndarray (got list) Do you think I should take a different approach entirely to generating my sampling weights?
st98092
What is in sampling_weights? Is it a list of numpy arrays or pd.Series? Could you check the dtype of one element? Since the first approach is also throwing the same error, I guess the data is something unexpected. Make sure sampling_weights is a tensor containing the weights before passing it to the Sampler.
st98093
Checking dtype of an element in sampling_weights tells me that it’s of format float64. I also tried converting to a tensor as follows: sampling_weights2 = torch.from_numpy(np.array(sampling_weights)) And when I run dtype of an element in that, I get torch.float64. I then tried passing in this tensor, and I get the same original error. For further context, here’s more of the error (not sure if it might be of any help or not): for i, data in enumerate(training_loader): File "C:\Users\ckwij\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 314, in __next__ batch = self.collate_fn([self.dataset[i] for i in indices]) File "C:\Users\ckwij\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 314, in <listcomp> batch = self.collate_fn([self.dataset[i] for i in indices]) File "c:/Users/ckwij/Documents/--redacted--/--redacted--/Code/PyTorch/pytorch_data.py", line 53, in __getitem__ image_name = data_folder / self.labels_frame.iloc[idx, 0] File "C:\Users\ckwij\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1472, in __getitem__ return self._getitem_tuple(key) File "C:\Users\ckwij\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 2013, in _getitem_tuple self._has_valid_tuple(tup) File "C:\Users\ckwij\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 222, in _has_valid_tuple self._validate_key(k, i) File "C:\Users\ckwij\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1967, in _validate_key if len(arr) and (arr.max() >= l or arr.min() < -l): TypeError: len() of unsized object Thanks for taking the time out to help me!
st98094
Thanks for the stack trace! It seems pandas is throwing this error. Could you post your __getitem__ and some content of self.labels_frame?
st98095
So the way I have my data structured is that I have all my images in a folder with file paths set up as follows (the CSV files contain a small subset of image names and their corresponding class labels, since I wanted to initially start off testing and debugging with a small subset of the data before using the whole thing, which is 100s of thousands of images and slows everything to a crawl when trying to process): data_folder = Path('C:/Users/ckwij/Downloads/camera_catalogue/all_combined/') training_data = data_folder / 'training_subset.csv' validation_data = data_folder / 'validation_subset.csv' test_data = data_folder / 'test_subset.csv' The training_data csv is what gets read in by Pandas as labels_frame, and a sample of the data looks as follows: Lastly, my __getitem__ function looks as follows: def __getitem__(self, idx): image_name = data_folder / self.labels_frame.iloc[idx, 0] image = Image.open(image_name) image_label = self.labels_frame.iloc[idx, 1] if self.transform: image = self.transform(image) return image, image_label
st98096
Thanks for the information. It should generally work. I guess something is still wrong with your pd.DataFrame. Could you just create the Dataset and try to call dataset.labels_frame.iloc[0, 0]. If that’s throwing the same error, try to load the .csv offline, i.e. without the Dataset and debug it. Alternatively, you could upload a small snippet of one .csv and I could take a look.
st98097
Tried calling dataset.labels_frame.iloc[0, 0] (where dataset was replaced with my training dataset), and it works fine (the same holds for dataset.labels_frame.iloc[0, 1]). Here's a small subset of the training data CSV: https://ufile.io/v5elq This is a pretty perplexing issue, because if I just take my custom sampler out of the equation, the DataLoader seems to work fine.
st98098
I used your sample .csv data and could create a Dataset and a WeightedRandomSampler. Could you post your Dataset code completely? I’m not sure, how dataset.label_counts was calculated etc., so that I couldn’t debug your get_weighted_sampler method.
st98099
Definitely! Let me know if you need any other details. And thanks again for working with me on figuring this out. Here’s the entire code for my custom Dataset class: # Create custom Dataset class class CameraCatalogueDataset(Dataset): """ Camera Catalogue dataset. """ def __init__(self, csv_file, data_folder, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. data_folder (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.labels_frame = pd.read_csv(csv_file) self.data_folder = data_folder self.transform = transform self.labels = self.labels_frame.Label.unique() self.label_counts = self.labels_frame.Label.value_counts() self.num_classes = len(self.labels) def __len__(self): return len(self.labels_frame) def __getitem__(self, idx): image_name = data_folder / self.labels_frame.iloc[idx, 0] image = Image.open(image_name) image_label = self.labels_frame.iloc[idx, 1] if self.transform: image = self.transform(image) return image, image_label
st98100
I had to change camera_catalogue_training[i] to dataset.labels_frame.Label.iloc[i] in get_weighted_sampler and image = Image.open(image_name) to image = torch.tensor([len(image_name)]) in __getitem__. Also, I created fake continuous labels, as the sample target labels had missing values, so that the code wouldn’t work: Name,Label 5443970_0.jpeg,0 4441645_0.jpeg,0 9705709_0.jpeg,0 9989229_0.jpeg,0 9769189_0.jpeg,0 4445197_0.jpeg,0 4432030_0.jpeg,0 4443722_0.jpeg,2 4515753_0.jpeg,3 5440101_0.jpeg,0 4454669_0.jpeg,1 5424361_0.jpeg,2 4512630_0.jpeg,0 4510856_0.jpeg,0 4469947_0.jpeg,0 4523697_0.jpeg,1 9329894_0.jpeg,0 4514251_0.jpeg,1 4445912_0.jpeg,3 Using these changes, the sampler works: dataset = CameraCatalogueDataset(path, '/') sampler = get_weighted_sampler(dataset) loader = DataLoader( dataset, sampler=sampler, batch_size=8) for data, target in loader: print(data, target) If you remove the sampler, you’ll see that the batches are imbalanced.
st98101
So at this point, let me just try posting all the code, since I’m still getting the same error with the fixes. Maybe I’m missing some small difference. # Create custom Dataset class class CameraCatalogueDataset(Dataset): """ Camera Catalogue dataset. """ def __init__(self, csv_file, data_folder, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. data_folder (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.labels_frame = pd.read_csv(csv_file) self.data_folder = data_folder self.transform = transform self.labels = self.labels_frame.Label.unique() self.label_counts = self.labels_frame.Label.value_counts() self.num_classes = len(self.labels) def __len__(self): return len(self.labels_frame) def __getitem__(self, idx): image_name = data_folder / self.labels_frame.iloc[idx, 0] image = torch.tensor([len(image_name)]) image_label = self.labels_frame.iloc[idx, 1] if self.transform: image = self.transform(image) return image, image_label # Function for getting weighted sampler for sampling Training set def get_weighted_sampler(dataset): sampler = None # Create weight array for each training sample sorted_label_counts = dataset.label_counts.sort_index() label_weights = sum(sorted_label_counts.iloc[:]) / np.array(sorted_label_counts.iloc[:]) sampling_weights = [] for i in range(len(dataset)): image_label = dataset.labels_frame.Label.iloc[i] sampling_weights.append(label_weights[image_label]) sampler = torch.utils.data.sampler.WeightedRandomSampler(sampling_weights , len(sampling_weights)) return sampler # Set up file paths data_folder = Path('C:/Users/ckwij/Downloads/camera_catalogue/all_combined/') training_data = data_folder / 'training_subset.csv' #validation_data = data_folder / 'validation_subset.csv' #test_data = data_folder / 'test_subset.csv' # Set up transform for dataset image_size = 224 dataset_transform = transforms.Compose([ transforms.Resize(image_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) # Initialize dataset and its splits camera_catalogue_training = CameraCatalogueDataset(csv_file=training_data, data_folder=data_folder, transform=dataset_transform) #camera_catalogue_validation = CameraCatalogueDataset(csv_file=validation_data, data_folder=data_folder, transform=dataset_transform) #camera_catalogue_test = CameraCatalogueDataset(csv_file=test_data, data_folder=data_folder, transform=dataset_transform) # Create data samplers and loaders training_sampler = get_weighted_sampler(camera_catalogue_training) training_loader = DataLoader(camera_catalogue_training, batch_size=8, sampler=training_sampler) #validation_loader = DataLoader(camera_catalogue_validation, batch_size=8, shuffle=True) #test_loader = DataLoader(camera_catalogue_test, batch_size=8, shuffle=True) for data, target in training_loader: print(data, target)
st98102
I tried your code using some dummy images and it’s working. As this issue is most likely unrelated to PyTorch, let’s move the discussion to private messages and post the final solution here.
st98103
I’ve been looking through the ATen library and want to be able to call functions like cudnn_convolution or cudnn_convolution_backward_weight, but they don't appear to get included with #include <ATen/ATen.h> or #include <torch/torch.h>. These functions (and others) are in the native directory. Another one I'm trying to use is from GridSampler.cu, which uses enumerators defined elsewhere in the native package. What do I include in my *.cpp file to access these functions? Are these functions exposed? I am writing a custom layer as described in the mixed CPP/CUDA tutorial.
st98104
You shouldn’t use the native functions directly. Please use the at::xxx or tensor.xxx bindings. For example, at::grid_sampler(input, grid). Hmm is that enum not exposed? If so you should submit a feature request.
st98105
Thanks for the reply! Could you please link me to the file/doc that lays out the exposed functions? For example, the only thing I could find on the GitHub repo for grid sampling was what I’ve linked to. Where can I see that at::grid_sampler(input, grid) is part of the library? Or at::grid_sampler_backward() for that matter? Also, are there conv_2d calls I can make, or conv_2d_grad_inputs(), conv_2d_grad_weights(), and conv_2d_grad_bias()? PyTorch is a great library, but it can be quite frustrating to search through the source code with all the code gen and abstraction. I’ve spent days just to get to the point I’m at now. Is it going to be cleaned up (or at least better documented) as part of the 1.0 release?
st98106
Is there a tutorial of how to use the new c10 DistributedDataParallel? I haven’t found an end to end example of how to use it.
st98107
I trained my model using arbitrary initial seed for randomness. I have printed the seed, (it is a large number, 13248873089935215612). Now as I want to reproduce the training results, I want to set manual seed via torch.manual_seed(13248873089935215612), but this throws me error in torch/random.py at line 35 when executing default_generator.manual_seed(seed) default_generator.manual_seed(seed) RuntimeError: Overflow when unpacking long How can I set torch random seed for such large initial seed?
st98108
Solved by tom in post #2 The seed is a signed 64 bit integer on the C++ side, so the maximum is (1<<63)-1. You could use “13248873089935215612 & ((1<<63)-1)” if you wanted to. I tend to be cautious about trying to be clever with random seeds, I’m not certain that your seed is any better than, say, 42. Best regards Thom…
st98109
mfajcik: 13248873089935215612 The seed is a signed 64 bit integer on the C++ side, so the maximum is (1<<63)-1. You could use “13248873089935215612 & ((1<<63)-1)” if you wanted to. I tend to be cautious about trying to be clever with random seeds; I'm not certain that your seed is any better than, say, 42. Best regards Thomas
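In code, that suggestion amounts to folding the large seed into the signed 64-bit range before passing it on:

import torch

seed = 13248873089935215612
# torch.manual_seed expects a value that fits in a signed 64-bit integer,
# so mask the large seed into that range first.
torch.manual_seed(seed & ((1 << 63) - 1))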
st98110
Last time when I am using ‘python setup.py install’, I was told that either NCCL 2+ is needed. So I git clone nccl with the branch v2.3.7-1, which says lacking CMakeLists.txt. Instead I the following at last build, but still NCCL is built in. Could somebody give a helping hand, thanks indeed. CMakeLists.txt: CMAKE_MINIMUM_REQUIRED(VERSION 2.8 FATAL_ERROR) CMAKE_POLICY(VERSION 2.8) IF(NOT CUDA_FOUND) FIND_PACKAGE(CUDA 7.0 REQUIRED) ENDIF() include("${CMAKE_UTILS_PATH}") torch_cuda_get_nvcc_gencode_flag(NVCC_GENCODE) string(REPLACE “-gencode;” “-gencode=” NVCC_GENCODE “${NVCC_GENCODE}”) message(STATUS “Set NVCC_GENCODE for building NCCL: ${NVCC_GENCODE}”) ADD_CUSTOM_COMMAND( WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/lib/libnccl.so COMMAND env CUDA_HOME=${CUDA_TOOLKIT_ROOT_DIR} NVCC=${CUDA_NVCC_EXECUTABLE} BUILDDIR=${CMAKE_CURRENT_BINARY_DIR} NVCC_GENCODE="${NVCC_GENCODE}" make -j${NUM_JOBS} ) ADD_CUSTOM_TARGET(nccl ALL DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/lib/libnccl.so) INSTALL(FILES ${CMAKE_CURRENT_BINARY_DIR}/include/nccl.h DESTINATION “include”) Following are the building outputs related to NCCL: Building wheel torch-1.0.0a0 – Set NVCC_GENCODE for building NCCL: -gencode=arch=compute_30,code=sm_30;-gencode=arch=compute_35,code=sm_35;-gencode=arch=compute_50,code=sm_50;-gencode=arch=compute_52,code=sm_52;-gencode=arch=compute_60,code=sm_60;-gencode=arch=compute_61,code=sm_61;-gencode=arch=compute_70,code=sm_70;-gencode=arch=compute_70,code=compute_70 – Build files have been written to: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/third_party/build/nccl [100%] Built target nccl – Found NCCL: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include – Determining NCCL version from the header file: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include/nccl.h – Found NCCL (include: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include, library: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/libnccl.so) – Include NCCL operators – USE_NCCL : ON – USE_SYSTEM_NCCL : OFF [ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/nccl/cuda_nccl_gpu.cc.o [ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/nccl/cuda_nccl_op_gpu.cc.o – Installing: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/python3.6/site-packages/caffe2/contrib/nccl/init.py – Installing: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/python3.6/site-packages/caffe2/contrib/nccl/nccl_ops_test.py – Determining NCCL version from the header file: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include/nccl.h – Found NCCL (include: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include, library: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/libnccl.so) – NCCL_LIBRARIES: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/libnccl.so – NCCL_INCLUDE_DIRS: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include – Found NCCL, but the NCCL version is either not 2+ or not determinable, will not compile with NCCL distributed backend – Determining NCCL version from the header file: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include/nccl.h – Found NCCL (include: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/include, library: 
/home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/libnccl.so) – NCCL_LIBRARIES: /home/simon/Desktop/pytorch-scripts-1.0rc1/pytorch/torch/lib/tmp_install/lib/libnccl.so – Found NCCL, but the NCCL version is either not 2+ or not determinable, will not compile with NCCL distributed backend copying torch/cuda/nccl.py -> build/lib.linux-x86_64-3.6/torch/cuda copying torch/lib/libnccl.so.1 -> build/lib.linux-x86_64-3.6/torch/lib copying torch/lib/libnccl.so.1.3.5 -> build/lib.linux-x86_64-3.6/torch/lib copying torch/lib/libnccl.so -> build/lib.linux-x86_64-3.6/torch/lib copying torch/lib/include/torch/csrc/cuda/python_nccl.h -> build/lib.linux-x86_64-3.6/torch/lib/include/torch/csrc/cuda copying torch/lib/include/torch/csrc/cuda/nccl.h -> build/lib.linux-x86_64-3.6/torch/lib/include/torch/csrc/cuda – Building NCCL library
st98111
In the training phase, I have the true labels and the corresponding scores. I can use sklearn.metrics.precision_recall_curve(labels, scores) to calculate the precision and recall rates. But how can I plot and view them? I also tried tensorboardX; there are two methods: add_pr_curve(), which requires the input to be the predicted probability, not the scores, and add_pr_curve_raw(), which requires many parameters, such as positive_num, negative_num, precision, recall, and it is difficult for me to calculate these parameter values. So, what can I do to view the PR curve during training? Is there a good way to implement it? Thanks in advance.
st98112
You could use Visdom to plot your curve. Since you already have the necessary numpy arrays representing the precision and recall, you could try to plot the line using viz.line. I'm not sure which arguments tensorboardX expects for the PR curve. Let me know if visdom would be an alternative or if you would like to stick to tensorboardX.
st98113
Thank you very much for your reply. My code is like this:

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.step(recall, precision, color='r', alpha=0.99, where='post')
ax.fill_between(recall, precision, alpha=0.2, color='b', step='post')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('2-class Precision-Recall curve: AP={0:0.2f}'.format(average_precision))
writer.add_figure('epoch_pr', fig, epoch)
plt.close(fig)

I used the add_figure method to save and view the PR curve.
st98114
I want to view the pr curve in each epoch. So, I want to save all them and view them.
st98115
If you would like to use matplotlib and just visualize the figures, one way would be to save them as svg data and send them to visdom. Using this approach you wouldn't need to recreate the figures in visdom. Here is a small example:

import re
import io
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve
from visdom import Visdom

labels = np.random.randint(0, 2, (10,))
scores = np.random.uniform(0, 1, (10,))
precision, recall, thresholds = precision_recall_curve(labels, scores)

fig, ax = plt.subplots()
ax.step(recall, precision, color='r', alpha=0.99, where='post')
ax.fill_between(recall, precision, alpha=0.2, color='b', step='post')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])

viz = Visdom()
imgdata = io.StringIO()
fig.savefig(imgdata, format='svg')
svg_str = imgdata.getvalue()

# Scale the figure
svg_str = re.sub('width=".*pt"', 'width="100%"', svg_str)
svg_str = re.sub('height=".*pt"', 'height="100%"', svg_str)
viz.svg(svg_str)
st98116
Good morning, I am implementing some functions in C to speed up part of my code; however, when trying to use TH functions I have a few problems. I am trying to create a 2D tensor using:

THFloatTensor * IoU = THFloatTensor_newWithSize2d(nb1, nb2);

which doesn't raise any errors. But when compiling, the functions THTensor_get2d and THTensor_set2d give me warnings, and during execution the set function gives the error: undefined symbol: THTensor_set2d. But those functions seem to be well defined in https://github.com/pytorch/pytorch/blob/master/aten/src/TH/generic/THTensor.h Any idea? Also, when working with CPU tensors, should I include #include <TH/TH.h> to be able to use THTensor_get2d and THTensor_set2d? I am a bit lost about how to manipulate and create tensors in C. I was creating them in Python before and manipulating them in C as 1D arrays.
st98117
I have already quite some functions implemented in C to interface some Cuda code with my python code, Isn’t there any way for me to make this work using types such as THCudaTensor or THFloatTensor ? Especially as this is a very easy way to use data created in python directly in C and Cuda. UPDATE : I am for now using the tensor as a float * to access its elements and create it in python directly.
st98118
If I use the following code for sorting batches, will I face issues while running on single/multiple GPUs? If so, could you please explain why. [screenshot of the code omitted]
st98119
Hi, I was interested in using the multiprocessing module. The given example is this one. import torch.multiprocessing as mp from model import MyModel def train(model): # Construct data_loader, optimizer, etc. for data, labels in data_loader: optimizer.zero_grad() loss_fn(model(data), labels).backward() optimizer.step() # This will update the shared parameters if __name__ == '__main__': num_processes = 4 model = MyModel() # NOTE: this is required for the ``fork`` method to work model.share_memory() processes = [] for rank in range(num_processes): p = mp.Process(target=train, args=(model,)) p.start() processes.append(p) for p in processes: p.join() So as far as I can see it opens 4 processes for the train function. Each train function contains a dataloader and so on. I was wondering how the multiprocessing module deals with the GPU memory. Does it control the memory available at the time of opening each process? or It just try to open them all and if there is not enough memory throws ‘out of memory’ error. If the 2nd case happens, is there a way to check if there is enough memory available? I guess this can be achieved using torch.cuda.mem_allocated(), but I would like to understand how the "caching memory allocator " works. When an allocator is created? if I call two different processes on the same GPU, there will be memory conflicts? is there a way to share an allocator to avoid that? Thanks in advance
st98120
Is there a major difference between Adam and SparseAdam implementation? I’m using the SparseAdam for optimizing the embedding layer in my model, and I noticed that the model requires fewer epochs to converge if I instead used the Adam optimizer to optimize the embedding layer with sparse gradients disabled.
st98121
Yes there is. The doc for SparseAdam directly says In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters.
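A small sketch of how the two are typically combined (SparseAdam for a sparse embedding, Adam for the dense parameters); the sizes here are arbitrary:

import torch
import torch.nn as nn

embedding = nn.Embedding(1000, 32, sparse=True)   # produces sparse gradients
head = nn.Linear(32, 2)

sparse_opt = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)
dense_opt = torch.optim.Adam(head.parameters(), lr=1e-3)

tokens = torch.randint(0, 1000, (8,))
target = torch.randint(0, 2, (8,))

loss = nn.functional.cross_entropy(head(embedding(tokens)), target)

sparse_opt.zero_grad()
dense_opt.zero_grad()
loss.backward()
sparse_opt.step()   # only the embedding rows that appeared in `tokens` are updated
dense_opt.step()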
st98122
I got an error when calling backward(): “the derivative for _th_bernoulli is not implemented”
st98123
you can’t backprop through sampling functions (unless they are reparametrized). Even in 0.4, it probably returned all zero gradient and that was a bug if it was the case.
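To illustrate the difference, torch.distributions offers rsample() for reparameterized sampling, which supports backprop, while sample() does not. This only works for distributions that admit a reparameterization (e.g. Normal); Bernoulli samples are discrete and cannot be handled this way. A small sketch:

import torch
from torch.distributions import Normal

mu = torch.zeros(3, requires_grad=True)
sigma = torch.ones(3, requires_grad=True)
dist = Normal(mu, sigma)

x = dist.rsample()        # reparameterized: mu + sigma * eps, differentiable
loss = (x ** 2).sum()
loss.backward()
print(mu.grad)            # gradients flow back to the distribution parameters

y = dist.sample()         # plain sampling: no gradient path to mu/sigma
print(y.requires_grad)    # False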
st98124
For example:

   x      y
1  [1,2]  [3,4]
2  [5,6]  [7,8]

and I want to get this:

   z
1  [1,2,3,4]
2  [5,6,7,8]

Is there some idea to resolve this problem? Thank you, and sorry for my poor English… T^T
st98125
Solved by jaeyung1001 in post #3 Oh… your answer is also correct. but If the csv file contain “list” type data I should read the file like below: df = pd.read_csv(“train_after.csv”, converters={“data”: literal_eval, “label”: literal_eval}) Thank you:)
st98126
Good morning, You could use torch.cat((x, y), 1) as in

import torch
import numpy as np

a = [[1,2],[5,6]]
b = [[3,4],[7,8]]
x = torch.from_numpy(np.array(a))
y = torch.from_numpy(np.array(b))
u = torch.cat((x, y), 1)
print(u)
st98127
Oh… your answer is also correct, but if the csv file contains "list"-type data I should read the file like below:

df = pd.read_csv("train_after.csv", converters={"data": literal_eval, "label": literal_eval})

Thank you :)
st98128
I’m trying to run many small models in parallel. I noticed with my first implementation (just calling each of the models in a loop), that GPU utilization was very low, about 25%. So I did some research and it sounds like what I want are cuda streams. So I initialized two streams and broke my models into two groups and I ran the models now wrapped in a with block. The code ran fine, but GPU utilization remained low at about 25%. Am I misunderstanding the purpose or function of streams?
st98129
Hi, There are things to be careful about when using streams (note that I'm not a specialist at all!), but if you do any op on the default stream, it will sync all streams. So you need to make sure no such op is done. The NVIDIA Visual Profiler is a great tool to see what runs where and what runs in parallel: https://developer.nvidia.com/nvidia-visual-profiler Also, for very small problems, it's possible that you are limited by how fast you can ask for stuff to be done on the GPU, meaning that you are actually bound by the CPU code. To improve this, you would need to use different processes for each model.
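For reference, a minimal sketch of launching work on two non-default streams (whether you actually see overlap depends on kernel sizes and on avoiding ops on the default stream in between):

import torch

assert torch.cuda.is_available()
device = torch.device('cuda')

a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()

torch.cuda.synchronize()  # make sure the inputs are ready before switching streams

with torch.cuda.stream(s1):
    out1 = a.mm(a)
with torch.cuda.stream(s2):
    out2 = b.mm(b)

torch.cuda.synchronize()  # wait for both streams before using the results
print(out1.shape, out2.shape)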
st98130
I did try with torch.multiprocessing, but then python allocates a large amount of memory which seems to be roughly proportional to the number of processes torch.multiprocessing starts.
st98131
Yes that is expected, CUDA sharing between processes is quite tricky. I don't think we support it.
st98132
Really? I started 6 processes and ~12 gigs of RAM was allocated. In any case, is there a way to use threading so that I can use multiple processors while still only having a single process?
st98133
From this link: https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/4%20-%20Convolutional%20Sentiment%20Analysis.ipynb you will see they use this code to generate the iterators:

import torch
from torchtext import data
from torchtext import datasets
import random

SEED = 1234
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)

TEXT = data.Field(tokenize='spacy')
LABEL = data.LabelField(tensor_type=torch.FloatTensor)

train, test = datasets.IMDB.splits(TEXT, LABEL)
train, valid = train.split(random_state=random.seed(SEED))

BATCH_SIZE = 64
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train, valid, test),
    batch_size=BATCH_SIZE,
    sort_key=lambda x: len(x.text),
    repeat=False)

Is it possible to create my iterator with the torchtext function?
st98134
W is a parameter tensor, and A = W[[1,1]] via the indexing operator, so the two elements of A are copied from the same source. Are the gradients of A[0] and A[1] computed independently? If yes, the computation is doubled.
st98135
Yes the gradients for A[0] and A[1] will be done independently and the gradient for W[1] will be the sum of the two. This is the gradient of the function you implement.
st98136
Sounds sad, and thanks for the reply. For A = W[[1,1]], could I make A[0], A[1] and W[1] share the same gradient and gradient function, or even share the same storage?
st98137
I’m not sure what you mean by that. Could you write a code sample and what you expect to find in the .grad fields?
st98138
Here is pseudocode for input (X, Y):

W = nn.Parameter()
index = [1,1,1,2,3]
output = X * W[index]
loss = function(output, Y)
loss.backward()

In the above case, parameter W[1] has three copies whose gradients will be computed independently, as you say. I hope the gradient of W[1] is computed just once in loss.backward(). W[1]'s copies shouldn't need to compute their gradients; they could share the same gradient with W[1]. Otherwise the backward gets slow.
st98140
The gradient of W[1] is computed once, but it will contain the sum of the gradients for each of the places where it is used. Say W contains 3 values and O contains [W[1], W[1], W[1], W[2], W[3]], and your loss is sum(O). Then when you call backward on this loss, the gradient of O wrt this loss is [1, 1, 1, 1, 1], and the gradient of W wrt this loss is [3, 1, 1]. So yes, the gradient of W[1] is computed once.
st98141
Thanks for the clear explanation. I understand now that the gradient of W[1] is computed once. For O, which contains [W[1], W[1], W[1], W[2], W[3]], the gradients of O[0], O[1], O[2] are computed independently, although they have the same value. I think that computation is wasted. Could the gradients of O[0], O[1], O[2], W[1] wrt the loss be the same? I hope that just one of the gradients of O[0], O[1], O[2], W[1] is computed and the others share it.
st98143
They are the same in my example but could be different. With the same O, if my loss is: loss = 2*O[0] + 3*O[1] + 4*O[2] + 5*O[3] + 6*O[4] Then the gradients of O wrt the loss would be [2, 3, 4, 5, 6] and the gradient of W [9, 5, 6].
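This is easy to verify with a few lines (using 0-based indexing, so the unused W[0] simply gets a zero gradient):

import torch

W = torch.tensor([0.0, 1.0, 2.0, 3.0], requires_grad=True)
index = torch.tensor([1, 1, 1, 2, 3])

O = W[index]
loss = (torch.tensor([2.0, 3.0, 4.0, 5.0, 6.0]) * O).sum()
loss.backward()

print(W.grad)   # tensor([0., 9., 5., 6.]) -- W[1] receives the summed gradient 2+3+4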
st98144
I wanted to try out different sets of optimizer hyperparameters for each element of a tensor. I tried the following but I am getting a non-leaf tensor error, possibly because I am indexing the tensor: y = torch.ones(3) y.requires_grad_() print(y) opt2 = torch.optim.SGD([{'params':[y[0]],'lr':0.1},{'params':[y[1]],'lr':1},{'params':[y[2]],'lr':10}]) loss2 = y.sum() opt2.zero_grad() loss2.backward() opt2.step() print(y) ValueError Traceback (most recent call last) in () 2 y.requires_grad_() 3 print(y) ----> 4 opt2 = torch.optim.SGD([{‘params’:[y[0]],‘lr’:0.1},{‘params’:[y[1]],‘lr’:1},{‘params’:[y[2]],‘lr’:10}]) 5 loss2 = y.sum() 6 opt2.zero_grad() /usr/local/lib/python3.6/dist-packages/torch/optim/sgd.py in init(self, params, lr, momentum, dampening, weight_decay, nesterov) 62 if nesterov and (momentum <= 0 or dampening != 0): 63 raise ValueError(“Nesterov momentum requires a momentum and zero dampening”) —> 64 super(SGD, self).init(params, defaults) 65 66 def setstate(self, state): /usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in init(self, params, defaults) 41 42 for param_group in param_groups: —> 43 self.add_param_group(param_group) 44 45 def getstate(self): /usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in add_param_group(self, param_group) 191 "but one of the params is " + torch.typename(param)) 192 if not param.is_leaf: –> 193 raise ValueError(“can’t optimize a non-leaf Tensor”) 194 195 for name, default in self.defaults.items(): ValueError: can’t optimize a non-leaf Tensor
st98145
Hi, When you do y[0], the Tensor you get is not a leaf tensor anymore. Remember a leaf Tensor is one that you created with required_grad=True (and so is not the result of an operation). You can only optimize leaf Tensors. If you want to use builint optimizers, you will need to create one Tensor for every parameter and combine them during the forward pass: # Create like this y0 = torch.ones(1, requires_grad=True) y1 = torch.ones(1, requires_grad=True) y2 = torch.ones(1, requires_grad=True) opt2 = torch.optim.SGD([{'params':[y0],'lr':0.1},{'params':[y1],'lr':1},{'params':[y2],'lr':10}]) # During the forward pass: y = torch.cat([y0, y1, y2], 0) # The rest of your forward
st98146
i tried the snippet i am pasting and i think i will have to execute a stack or cat after each optimizer.step call. In the snippet below the changes in the leaf tensors dont seem to be communicated to the cat/stack, while changes to the stack/cat are communicated to the leaf: a = torch.randn(1,2) b = torch.randn(1,2) c = torch.cat([a,b],0) print(a) a.data = a.data+1 print(a) print('*'*50) print(c) print('\n\n') c.data[0,:] = c.data[0,:] *10 print(c,a) tensor([[ 0.5121, -0.8800]]) tensor([[1.5121, 0.1200]]) ************************************************** tensor([[ 0.5121, -0.8800], [-0.6470, 0.2288]]) tensor([[ 5.1207, -8.7999], [-0.6470, 0.2288]]) tensor([[1.5121, 0.1200]])
st98147
Hi, As I said, you will need to run cat for each forward pass. The changes are never “communicated” to the input of cat or stack, these are out of place operations. The weird behavior you see when you print a in the end is just because you changed a first.
st98148
I have a lot of images with .gif and .oct-stream extension. Since the ImageFolder will ignore those files, I use the DatasetFolder and provide my img_extension and loader as suggested by other forks on this forum. I create a dataloader and try to iterate through it. Unfortunately, it got stuck somewhere forever. The folder structure is the following. I have attached all images except ‘01.octet-stream’ as this forum does not support it. # debug/0/01.octet-stream # debug/0/02.jpeg # debug/0/03.gif # debug/1/01.octet-stream # debug/1/02.jpeg # debug/1/03.gif Here is my code, you can run them directly on notebook. from __future__ import print_function, division import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import torchvision from torchvision import datasets, models, transforms from sklearn.utils.class_weight import compute_class_weight import matplotlib.pyplot as plt import torch.nn.functional as F import time import os import copy from torchvision.datasets import ImageFolder plt.ion() data_transforms = transforms.Compose([transforms.Resize(300), transforms.CenterCrop(299), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) img_extensions = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.gif', '.octet-stream'] def my_loader(path): from torchvision import get_image_backend from PIL import Image def my_pil_loader(path): print ("loading {}".format(path)) with open(path, 'rb') as f: img = Image.open(f) return img.convert('RGB') if get_image_backend() == 'accimage': print('{} uses accimage'.format(path)) try: return accimage_loader(path) except IOError: print('{} accimage loading fail, using PIL'.format(path)) return my_pil_loader(path) else: print('{} uses PIL'.format(path)) return my_pil_loader(path) my_loader('./debug/0/03.gif') data_dir = './debug/' batch_size = 32 image_datasets = datasets.DatasetFolder(data_dir, my_loader, img_extensions, data_transforms) dataloaders = torch.utils.data.DataLoader(image_datasets, batch_size=batch_size, shuffle=True, num_workers=4) dataset_sizes = len(image_datasets) print(dataset_sizes) Everything works fine so far. However, when I try to iterate through the dataloader and run the following code, the program got stuck forever! It seems the code runs into dead loop somewhere even before loading images as I do not see any print information during loading images. What’s wrong with implementation? If I replace the DatasetFolder with ‘ImageFolder’ and get rid of the customized loader and extension, everything works fine. Very wired… index = 0 for inputs, labels in dataloaders: print(index) print('inputs') print(inputs.size()) print('labels') print(labels.size())
st98149
Is your Dataset working without the DataLoader, i.e. do you get a valid image using this code: data = image_dataset[0] If so, could you try to use num_workers=0 in your DataLoader and try it again? I would like to narrow down the possible error source first.
st98150
Hi Patrick image_datasets[0] gives ./debug/0\01.octet-stream uses PIL loading ./debug/0\01.octet-stream (tensor([[[ 0.2967, 0.0912, 0.1254, ..., -0.3369, -0.2513, -0.2856], [ 0.2111, 0.2282, 0.1768, ..., -0.4054, -0.3369, -0.3198], [ 0.1939, 0.2796, 0.2282, ..., -0.5596, -0.4568, -0.2684], ..., [-1.5870, -1.5014, -1.5699, ..., -1.6555, -1.8097, -1.8439], [-1.6727, -1.5699, -1.5699, ..., -1.9809, -1.9295, -1.7754], [-1.6384, -1.5185, -1.5528, ..., -1.7069, -1.6898, -1.6042]], [[ 0.5728, 0.6604, 0.7304, ..., -0.0924, -0.1099, -0.0749], [ 0.7304, 0.6254, 0.6954, ..., -0.1975, -0.1450, -0.1099], [ 0.7304, 0.7129, 0.7479, ..., -0.3025, -0.1800, -0.0924], ..., [-1.1253, -1.1429, -1.1429, ..., -1.6331, -1.5630, -1.5980], [-1.0728, -1.1779, -1.2654, ..., -1.4755, -1.5455, -1.5980], [-1.1604, -1.1429, -1.1604, ..., -1.5980, -1.6155, -1.6155]], [[ 0.7054, 0.6356, 0.6531, ..., 0.0605, 0.0431, 0.0256], [ 0.7576, 0.7402, 0.7228, ..., 0.0605, 0.0431, -0.0092], [ 0.7576, 0.7925, 0.7925, ..., -0.0267, -0.0615, -0.0441], ..., [-0.7064, -0.6715, -0.6541, ..., -1.2990, -1.2467, -1.2467], [-0.7936, -0.6541, -0.6193, ..., -1.3513, -1.3164, -1.1073], [-0.8284, -0.7064, -0.7064, ..., -1.2816, -1.2990, -1.2119]]]), 0) The issue seems to be in the dataloader iterator. dataloader_iter = dataloaders.__iter__() dataloader_iter.pin_memory gives False It got stuck when I run dataloader_iter.data_queue.get() If I set the num_workers = 0, I got the following error --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-26-34fa306d18e2> in <module>() ----> 1 dataloader_iter._get_batch() ~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py in _get_batch(self) 307 raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout)) 308 else: --> 309 return self.data_queue.get() 310 311 def __next__(self): AttributeError: '_DataLoaderIter' object has no attribute 'data_queue
st98151
Is the code working, if you just set num_workers=0 and get the samples in a loop?
st98152
Yes, it works! That’s amazing. Why does setting num_workers = 0 resolve the problem, while it runs into a dead loop if num_workers != 0?
st98153
BTW, I am running those codes within jupyter notebook on Windows with Pytorch 0.4.1 and python 3.6.5
st98154
Great to hear, it’s working now! Multiprocessing is implemented a bit differently on Windows, e.g. it uses spawn instead of fork. That means that you should guard your code with: if __name__=='__main__': main() You can read more about these differences in the Windows FAQ. I'm not sure how Jupyter notebooks should be handled on Windows machines, but I guess it might be related to the multiprocessing part. Could you try to export your notebook as a Python script, add the guard, and try to run it again using more workers?
st98155
Hi Patrick Yes, it indeed works when I rewrite the code with if protection in the following way. Thanks! My question is why the issue occurs only when I use the customized loader in the ‘datasetfolder’. If I use the ImageFolders with the default loader, this is no dead loop even without the ‘if’ protection. # coding: utf-8 # In[ ]: from __future__ import print_function, division import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import torchvision from torchvision import datasets, models, transforms from sklearn.utils.class_weight import compute_class_weight import matplotlib.pyplot as plt import torch.nn.functional as F import time import os import copy import sys from torchvision.datasets import ImageFolder plt.ion() # In[ ]: def my_loader(path): from torchvision import get_image_backend from PIL import Image def my_pil_loader(path): print ("loading {}".format(path)) try: with open(path, 'rb') as f: img = Image.open(f) return img.convert('RGB') except: print('fail to load {} using PIL'.format(img)) if get_image_backend() == 'accimage': print('loading {} uses accimage'.format(path)) try: return accimage_loader(path) except IOError: print('fail to load {} using accimage, instead using PIL'.format(path)) return my_pil_loader(path) else: print('{} uses PIL'.format(path)) return my_pil_loader(path) # In[ ]: def main(): data_transforms = transforms.Compose([transforms.Resize(300), transforms.CenterCrop(299), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) img_extensions = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.gif', '.octet-stream'] data_dir = './debug/' batch_size = 32 image_datasets = datasets.DatasetFolder(data_dir, my_loader, img_extensions, data_transforms) dataloaders = torch.utils.data.DataLoader(image_datasets, batch_size=batch_size, shuffle=True, num_workers=4) index = 0 for inputs, labels in dataloaders: print(index) print('inputs') print(inputs.size()) print('labels') print(labels.size()) # In[ ]: if __name__ == '__main__': main()
st98156
I’m not really sure, what makes the DataLoader hang, but it’s good to hear it’s working now! Your custom loader looks more or less like the default pil_loader besides the try block. However, as Windows doesn’t use fork to create new workers, it might often be problematic, if you don’t use a “main guard”, as the whole module-level code will be executed in all child processes.
st98157
Hi every friends, I am stuck on training LSTM model. The input of LSTM has the size format of (seq_len, batch_size, input_len ), it is (5, 8, 2048). The following code will show more details of my model. def __init__(self): self.layer_size = 1 self.batch_size = 6 self.hidden_size = 512 self.tmestep = 3 self.hidden = self.init_hidden_lstm() self.lstm = nn.LSTM(2048, self.hidden_size, self.layer_size,batch_first=False) self.drop = nn.Dropout(0.2) self.fc1 = nn.Linear(2560, 512) self.fc2 = nn.Linear(512, 2) self.init_weight() def init_hidden_lstm(self): return (torch.randn(self.layer_size, self.batch_size,self.hidden_size, requires_grad=True).cuda(), torch.randn(self.layer_size, self.batch_size, self.hidden_size, requires_grad=True).cuda()) def init_weight(self): for name, param in self.lstm.named_parameters(): if 'bias' in name: nn.init.constant(param, 0.5) nn.init.constant(param[256:512], 0.5) elif 'weight' in name: nn.init.uniform_(param) def forward(self,in1,in2,in3,in4,in5): # in1,in2,in3,in4,in5: extracted features from CNN network self.hidden = repackage_hidden(self.hidden) in = torch.stack([in1, in2, in3,in4,in5], dim=0) out, self.hidden = self.lstm(in,self.hidden) out = out.permute(1,0,2) out= out.contiguous().view(self.batch_size,-1) o = self.fc1(out) o = self.drop(o) o = self.fc2(o) def repackage_hidden(h): if isinstance(h, torch.Tensor): return h.detach() else: return tuple(repackage_hidden(v) for v in h) Currently, the training accuracy fluctuates between 0.55 and 0.48 from the beginning until 60th epoch. I tried LSTM with 2,4,8 layers with different learning_rate(1, 0.1, 0.01), but the similar case happens. If you can guess the reason, please reply. I am appreciate your help!
st98158
I’m trying to parameterise a full covariance matrix; I've done this by declaring a tensor variable and multiplying it by its transpose. PyTorch is throwing an error though when I try to call torch.potrf on it: RuntimeError: Lapack Error in potrf : the leading minor of order 8 is not positive definite at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorLapack.c:617 I've checked though, and it does actually seem to be a positive semidefinite matrix, and np.linalg.cholesky works. I've uploaded the matrix here.
st98159
Is there more detailed documentation about the built-in RNN modules? It is difficult to interpret even the expected dimensions of input/hidden/output based on the docs, and the examples are too idiomatic to describe the basic architecture of the RNN modules. Even a book suggestion. Thanks!
st98160
Hello, I am dealing with memory fragment when using dataloader. Following is my __getitem__ function: def __getitem__(self, idx): # Get token token = self.tokens[idx] # Get drawing drawing = self.drawings[idx] stroke = stroke_data(drawing).astype(np.float32) # stroke = np.expand_dims(stroke, axis=0) stroke = np.transpose(stroke, (1, 0)).astype(np.float32) drawing = eval(drawing) label = self.labels[idx] # Plot the image img, img_gray = drawing_to_image(drawing, IMG_SIZE, IMG_SIZE) if self.transform: img = self.transform(image=img)["image"] img = np.transpose(img, (2, 0, 1)).astype(np.float32) # img = np.expand_dims(img, 0).astype(np.float32) img_gray = self.transform(image=img_gray)["image"] img_gray = np.expand_dims(img_gray, 0).astype(np.float32) return { "image": img, "image_gray": img_gray, "stroke": stroke, "token": token, "targets": label } Environment: Ubuntu 16.04 - AMD Ryzen 2700 CUDA 9.0, CuDNN 7.0. I am running by the following options: num_workers=4 shuffle=True batch_size=128 The problem is: My RAM increases every batch and it is released when an epoch ends. I tried following: set num_workers=0 Change drawing = self.drawings[idx] to drawing = self.drawings[0] Using gc.collect every min-batch or every 500 mini batch (1) and (3) can help, but it makes my model running too slow. I could not figure it out. Do you have any solutions for it? Thank you.
st98161
import random import torch from torch.multiprocessing import Process class DynamicNet(torch.nn.Module): def __init__(self, D_in, H, D_out): super(DynamicNet, self).__init__() self.input_linear = torch.nn.Linear(D_in, H) self.middle_linear = torch.nn.Linear(H, H) self.output_linear = torch.nn.Linear(H, D_out) def forward(self, x): h_relu = self.input_linear(x).clamp(min=0) for _ in range(5): h_relu = self.middle_linear(h_relu).clamp(min=0) y_pred = self.output_linear(h_relu) return y_pred N, D_in, H, D_out = 64, 1000, 100, 10 def p1(): x1 = torch.randn(N, D_in).cuda() model1 = DynamicNet(D_in, H, D_out).cuda() while True: y_pred1 = model1(x1) def p2(): x2 = torch.randn(N, D_in).cuda() model2 = DynamicNet(D_in, H, D_out).cuda() t = 0 while True: y_pred2 = model2(x2) print("Step {}".format(t)) t+=1 p1 = Process(target=p1, args=()) p2 = Process(target=p2, args=()) p1.start() p2.start() p1.join() p2.join() This allocates over 3 gigs of RAM in main memory and only about 1 gig in VRAM. Why should this be the case, especially when everything is pushed to the cuda device. EDIT: For the sake of citation, the DynamicNet class is modified from this :https://jhui.github.io/2018/02/09/PyTorch-neural-networks/ 1
st98162
Hi PyTorchers! I’m trying a 'per-pixel L1 loss' for my image synthesis network. It would be very helpful if you could check whether this is correct. First, set up the L1 loss per-pixel:

pixel_loss = torch.nn.L1Loss(size_average=False, reduce=False)

And here's a snippet of my training code:

result_image = myModel(…)  # result_image has a shape of (Nx1x256x256) where N is batch size.
loss = pixel_loss(result_image, answer_image)
loss.backward(loss)

So… is this correct? And if I want to change the 'per-pixel' loss to a scalar loss, would it be like this?

scalar_loss = torch.nn.L1Loss(size_average=True, reduce=True)
result_image = myModel(…)
loss = scalar_loss(result_image, answer_image)
loss.backward()

Any advice would be helpful. Thanks
st98163
Hi Pytorcher! I was checking the grad value of the bias of the last conv filter. Here is my code:

myLoss = abs(pred - answer).mean()  # 1x1x256x256 -> scalar value
print(myLoss.item())
optimizer.zero_grad()
myLoss.backward(retain_graph=True)
torch.cuda.synchronize()
print(last_layers_bias.grad)

I expected myLoss.item() and last_layers_bias.grad to have the same value, but the two values are very different — is this a bug?
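To look at the two numbers side by side in isolation, here is a small standalone setup I would use (random data and a single hypothetical conv layer, so the values themselves are meaningless — the point is only how the two quantities are obtained):

import torch

layer = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
pred = layer(torch.randn(1, 3, 256, 256))
answer = torch.randn(1, 1, 256, 256)

loss = (pred - answer).abs().mean()
loss.backward()

print(loss.item())       # the mean absolute error, a scalar
print(layer.bias.grad)   # d(loss)/d(bias), a different quantity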
st98164
Here is the tutorial code(the source code link 5): # -*- coding: utf-8 -*- from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import torch import torch.nn as nn import torch.nn.functional as F import re import os import unicodedata import numpy as np device = torch.device("cpu") MAX_LENGTH = 10 # Maximum sentence length # Default word tokens PAD_token = 0 # Used for padding short sentences SOS_token = 1 # Start-of-sentence token EOS_token = 2 # End-of-sentence token class Voc: def __init__(self, name): self.name = name self.trimmed = False self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count SOS, EOS, PAD def addSentence(self, sentence): for word in sentence.split(' '): self.addWord(word) def addWord(self, word): if word not in self.word2index: self.word2index[word] = self.num_words self.word2count[word] = 1 self.index2word[self.num_words] = word self.num_words += 1 else: self.word2count[word] += 1 # Remove words below a certain count threshold def trim(self, min_count): if self.trimmed: return self.trimmed = True keep_words = [] for k, v in self.word2count.items(): if v >= min_count: keep_words.append(k) print('keep_words {} / {} = {:.4f}'.format( len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index) )) # Reinitialize dictionaries self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count default tokens for word in keep_words: self.addWord(word) # Lowercase and remove non-letter characters def normalizeString(s): s = s.lower() s = re.sub(r"([.!?])", r" \1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) return s # Takes string sentence, returns sentence of word indexes def indexesFromSentence(voc, sentence): return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token] class EncoderRNN(nn.Module): def __init__(self, hidden_size, embedding, n_layers=1, dropout=0): super(EncoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size self.embedding = embedding # Initialize GRU; the input_size and hidden_size params are both set to 'hidden_size' # because our input size is a word embedding with number of features == hidden_size self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout), bidirectional=True) def forward(self, input_seq, input_lengths, hidden=None): # Convert word indexes to embeddings embedded = self.embedding(input_seq) # Pack padded batch of sequences for RNN module packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) # Forward pass through GRU outputs, hidden = self.gru(packed, hidden) # Unpack padding outputs, _ = torch.nn.utils.rnn.pad_packed_sequence(outputs) # Sum bidirectional GRU outputs outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:] # Return output and final hidden state return outputs, hidden ###################################################################### # Define Decoder’s Attention Module # --------------------------------- # # Next, we’ll define our attention module (``Attn``). Note that this # module will be used as a submodule in our decoder model. Luong et # al. consider various “score functions”, which take the current decoder # RNN output and the entire encoder output, and return attention # “energies”. 
This attention energies tensor is the same size as the # encoder output, and the two are ultimately multiplied, resulting in a # weighted tensor whose largest values represent the most important parts # of the query sentence at a particular time-step of decoding. # # Luong attention layer class Attn(torch.nn.Module): def __init__(self, method, hidden_size): super(Attn, self).__init__() self.method = method if self.method not in ['dot', 'general', 'concat']: raise ValueError(self.method, "is not an appropriate attention method.") self.hidden_size = hidden_size if self.method == 'general': self.attn = torch.nn.Linear(self.hidden_size, hidden_size) elif self.method == 'concat': self.attn = torch.nn.Linear(self.hidden_size * 2, hidden_size) self.v = torch.nn.Parameter(torch.FloatTensor(hidden_size)) def dot_score(self, hidden, encoder_output): return torch.sum(hidden * encoder_output, dim=2) def general_score(self, hidden, encoder_output): energy = self.attn(encoder_output) return torch.sum(hidden * energy, dim=2) def concat_score(self, hidden, encoder_output): energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1), encoder_output), 2)).tanh() return torch.sum(self.v * energy, dim=2) def forward(self, hidden, encoder_outputs): # Calculate the attention weights (energies) based on the given method if self.method == 'general': attn_energies = self.general_score(hidden, encoder_outputs) elif self.method == 'concat': attn_energies = self.concat_score(hidden, encoder_outputs) elif self.method == 'dot': attn_energies = self.dot_score(hidden, encoder_outputs) # Transpose max_length and batch_size dimensions attn_energies = attn_energies.t() # Return the softmax normalized probability scores (with added dimension) return F.softmax(attn_energies, dim=1).unsqueeze(1) ###################################################################### # Define Decoder # -------------- # # Similarly to the ``EncoderRNN``, we use the ``torch.nn.GRU`` module for # our decoder’s RNN. This time, however, we use a unidirectional GRU. It # is important to note that unlike the encoder, we will feed the decoder # RNN one word at a time. We start by getting the embedding of the current # word and applying a # `dropout <https://pytorch.org/docs/stable/nn.html?highlight=dropout#torch.nn.Dropout>`__. # Next, we forward the embedding and the last hidden state to the GRU and # obtain a current GRU output and hidden state. We then use our ``Attn`` # module as a layer to obtain the attention weights, which we multiply by # the encoder’s output to obtain our attended encoder output. We use this # attended encoder output as our ``context`` tensor, which represents a # weighted sum indicating what parts of the encoder’s output to pay # attention to. From here, we use a linear layer and softmax normalization # to select the next word in the output sequence. # # Hybrid Frontend Notes: # ~~~~~~~~~~~~~~~~~~~~~~ # # Similarly to the ``EncoderRNN``, this module does not contain any # data-dependent control flow. Therefore, we can once again use # **tracing** to convert this model to Torch Script after it is # initialized and its parameters are loaded. 
# class LuongAttnDecoderRNN(nn.Module): def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1): super(LuongAttnDecoderRNN, self).__init__() # Keep for reference self.attn_model = attn_model self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.dropout = dropout # Define layers self.embedding = embedding self.embedding_dropout = nn.Dropout(dropout) self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout)) self.concat = nn.Linear(hidden_size * 2, hidden_size) self.out = nn.Linear(hidden_size, output_size) self.attn = Attn(attn_model, hidden_size) def forward(self, input_step, last_hidden, encoder_outputs): # Note: we run this one step (word) at a time # Get embedding of current input word embedded = self.embedding(input_step) embedded = self.embedding_dropout(embedded) # Forward through unidirectional GRU rnn_output, hidden = self.gru(embedded, last_hidden) # Calculate attention weights from the current GRU output attn_weights = self.attn(rnn_output, encoder_outputs) # Multiply attention weights to encoder outputs to get new "weighted sum" context vector context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # Concatenate weighted context vector and GRU output using Luong eq. 5 rnn_output = rnn_output.squeeze(0) context = context.squeeze(1) concat_input = torch.cat((rnn_output, context), 1) concat_output = torch.tanh(self.concat(concat_input)) # Predict next word using Luong eq. 6 output = self.out(concat_output) output = F.softmax(output, dim=1) # Return output and final hidden state return output, hidden ###################################################################### # Define Evaluation # ----------------- # # Greedy Search Decoder # ~~~~~~~~~~~~~~~~~~~~~ # # As in the chatbot tutorial, we use a ``GreedySearchDecoder`` module to # facilitate the actual decoding process. This module has the trained # encoder and decoder models as attributes, and drives the process of # encoding an input sentence (a vector of word indexes), and iteratively # decoding an output response sequence one word (word index) at a time. # # Encoding the input sequence is straightforward: simply forward the # entire sequence tensor and its corresponding lengths vector to the # ``encoder``. It is important to note that this module only deals with # one input sequence at a time, **NOT** batches of sequences. Therefore, # when the constant **1** is used for declaring tensor sizes, this # corresponds to a batch size of 1. To decode a given decoder output, we # must iteratively run forward passes through our decoder model, which # outputs softmax scores corresponding to the probability of each word # being the correct next word in the decoded sequence. We initialize the # ``decoder_input`` to a tensor containing an *SOS_token*. After each pass # through the ``decoder``, we *greedily* append the word with the highest # softmax probability to the ``decoded_words`` list. We also use this word # as the ``decoder_input`` for the next iteration. The decoding process # terminates either if the ``decoded_words`` list has reached a length of # *MAX_LENGTH* or if the predicted word is the *EOS_token*. # # Hybrid Frontend Notes: # ~~~~~~~~~~~~~~~~~~~~~~ # # The ``forward`` method of this module involves iterating over the range # of :math:`[0, max\_length)` when decoding an output sequence one word at # a time. Because of this, we should use **scripting** to convert this # module to Torch Script. 
Unlike with our encoder and decoder models, # which we can trace, we must make some necessary changes to the # ``GreedySearchDecoder`` module in order to initialize an object without # error. In other words, we must ensure that our module adheres to the # rules of the scripting mechanism, and does not utilize any language # features outside of the subset of Python that Torch Script includes. # # To get an idea of some manipulations that may be required, we will go # over the diffs between the ``GreedySearchDecoder`` implementation from # the chatbot tutorial and the implementation that we use in the cell # below. Note that the lines highlighted in red are lines removed from the # original implementation and the lines highlighted in green are new. # # .. figure:: /_static/img/chatbot/diff.png # :align: center # :alt: diff # # Changes: # ^^^^^^^^ # # - ``nn.Module`` -> ``torch.jit.ScriptModule`` # # - In order to use PyTorch’s scripting mechanism on a module, that # module must inherit from the ``torch.jit.ScriptModule``. # # # - Added ``decoder_n_layers`` to the constructor arguments # # - This change stems from the fact that the encoder and decoder # models that we pass to this module will be a child of # ``TracedModule`` (not ``Module``). Therefore, we cannot access the # decoder’s number of layers with ``decoder.n_layers``. Instead, we # plan for this, and pass this value in during module construction. # # # - Store away new attributes as constants # # - In the original implementation, we were free to use variables from # the surrounding (global) scope in our ``GreedySearchDecoder``\ ’s # ``forward`` method. However, now that we are using scripting, we # do not have this freedom, as the assumption with scripting is that # we cannot necessarily hold on to Python objects, especially when # exporting. An easy solution to this is to store these values from # the global scope as attributes to the module in the constructor, # and add them to a special list called ``__constants__`` so that # they can be used as literal values when constructing the graph in # the ``forward`` method. An example of this usage is on NEW line # 19, where instead of using the ``device`` and ``SOS_token`` global # values, we use our constant attributes ``self._device`` and # ``self._SOS_token``. # # # - Add the ``torch.jit.script_method`` decorator to the ``forward`` # method # # - Adding this decorator lets the JIT compiler know that the function # that it is decorating should be scripted. # # # - Enforce types of ``forward`` method arguments # # - By default, all parameters to a Torch Script function are assumed # to be Tensor. If we need to pass an argument of a different type, # we can use function type annotations as introduced in `PEP # 3107 <https://www.python.org/dev/peps/pep-3107/>`__. In addition, # it is possible to declare arguments of different types using # MyPy-style type annotations (see # `doc <https://pytorch.org/docs/master/jit.html#types>`__). # # # - Change initialization of ``decoder_input`` # # - In the original implementation, we initialized our # ``decoder_input`` tensor with ``torch.LongTensor([[SOS_token]])``. # When scripting, we are not allowed to initialize tensors in a # literal fashion like this. Instead, we can initialize our tensor # with an explicit torch function such as ``torch.ones``. In this # case, we can easily replicate the scalar ``decoder_input`` tensor # by multiplying 1 by our SOS_token value stored in the constant # ``self._SOS_token``. 
# class GreedySearchDecoder(torch.jit.ScriptModule): def __init__(self, encoder, decoder, decoder_n_layers): super(GreedySearchDecoder, self).__init__() self.encoder = encoder self.decoder = decoder self._device = device self._SOS_token = SOS_token self._decoder_n_layers = decoder_n_layers __constants__ = ['_device', '_SOS_token', '_decoder_n_layers'] # @torch.jit.script_method def forward(self, input_seq : torch.Tensor, input_length : torch.Tensor, max_length : int): # Forward input through encoder model encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length) # Prepare encoder's final hidden layer to be first hidden input to the decoder decoder_hidden = encoder_hidden[:self._decoder_n_layers] # Initialize decoder input with SOS_token decoder_input = torch.ones(1, 1, device=self._device, dtype=torch.long) * self._SOS_token # Initialize tensors to append decoded words to all_tokens = torch.zeros([0], device=self._device, dtype=torch.long) all_scores = torch.zeros([0], device=self._device) # Iteratively decode one word token at a time for _ in range(max_length): # Forward pass through decoder decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs) # Obtain most likely word token and its softmax score decoder_scores, decoder_input = torch.max(decoder_output, dim=1) # Record token and score all_tokens = torch.cat((all_tokens, decoder_input), dim=0) all_scores = torch.cat((all_scores, decoder_scores), dim=0) # Prepare current token to be next decoder input (add a dimension) decoder_input = torch.unsqueeze(decoder_input, 0) # Return collections of word tokens and scores return all_tokens, all_scores ###################################################################### # Evaluating an Input # ~~~~~~~~~~~~~~~~~~~ # # Next, we define some functions for evaluating an input. The ``evaluate`` # function takes a normalized string sentence, processes it to a tensor of # its corresponding word indexes (with batch size of 1), and passes this # tensor to a ``GreedySearchDecoder`` instance called ``searcher`` to # handle the encoding/decoding process. The searcher returns the output # word index vector and a scores tensor corresponding to the softmax # scores for each decoded word token. The final step is to convert each # word index back to its string representation using ``voc.index2word``. # # We also define two functions for evaluating an input sentence. The # ``evaluateInput`` function prompts a user for an input, and evaluates # it. It will continue to ask for another input until the user enters ‘q’ # or ‘quit’. # # The ``evaluateExample`` function simply takes a string input sentence as # an argument, normalizes it, evaluates it, and prints the response. 
# def evaluate(encoder, decoder, searcher, voc, sentence, max_length=MAX_LENGTH): ### Format input sentence as a batch # words -> indexes indexes_batch = [indexesFromSentence(voc, sentence)] # Create lengths tensor lengths = torch.tensor([len(indexes) for indexes in indexes_batch]) # Transpose dimensions of batch to match models' expectations input_batch = torch.LongTensor(indexes_batch).transpose(0, 1) # Use appropriate device input_batch = input_batch.to(device) lengths = lengths.to(device) # Decode sentence with searcher tokens, scores = searcher(input_batch, lengths, max_length) # indexes -> words decoded_words = [voc.index2word[token.item()] for token in tokens] return decoded_words # Evaluate inputs from user input (stdin) def evaluateInput(encoder, decoder, searcher, voc): input_sentence = '' while(1): try: # Get input sentence input_sentence = input('> ') # Check if it is quit case if input_sentence == 'q' or input_sentence == 'quit': break # Normalize sentence input_sentence = normalizeString(input_sentence) # Evaluate sentence output_words = evaluate(encoder, decoder, searcher, voc, input_sentence) # Format and print response sentence output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')] print('Bot:', ' '.join(output_words)) except KeyError: print("Error: Encountered unknown word.") # Normalize input sentence and call evaluate() def evaluateExample(sentence, encoder, decoder, searcher, voc): print("> " + sentence) # Normalize sentence input_sentence = normalizeString(sentence) # Evaluate sentence output_words = evaluate(encoder, decoder, searcher, voc, input_sentence) output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')] print('Bot:', ' '.join(output_words)) ###################################################################### # Load Pretrained Parameters # -------------------------- # # Ok, its time to load our model! # # Use hosted model # ~~~~~~~~~~~~~~~~ # # To load the hosted model: # # 1) Download the model `here <https://download.pytorch.org/models/tutorials/4000_checkpoint.tar>`__. # # 2) Set the ``loadFilename`` variable to the path to the downloaded # checkpoint file. # # 3) Leave the ``checkpoint = torch.load(loadFilename)`` line uncommented, # as the hosted model was trained on CPU. # # Use your own model # ~~~~~~~~~~~~~~~~~~ # # To load your own pre-trained model: # # 1) Set the ``loadFilename`` variable to the path to the checkpoint file # that you wish to load. Note that if you followed the convention for # saving the model from the chatbot tutorial, this may involve changing # the ``model_name``, ``encoder_n_layers``, ``decoder_n_layers``, # ``hidden_size``, and ``checkpoint_iter`` (as these values are used in # the model path). # # 2) If you trained the model on a CPU, make sure that you are opening the # checkpoint with the ``checkpoint = torch.load(loadFilename)`` line. # If you trained the model on a GPU and are running this tutorial on a # CPU, uncomment the # ``checkpoint = torch.load(loadFilename, map_location=torch.device('cpu'))`` # line. # # Hybrid Frontend Notes: # ~~~~~~~~~~~~~~~~~~~~~~ # # Notice that we initialize and load parameters into our encoder and # decoder models as usual. Also, we must call ``.to(device)`` to set the # device options of the models and ``.eval()`` to set the dropout layers # to test mode **before** we trace the models. ``TracedModule`` objects do # not inherit the ``to`` or ``eval`` methods. 
# save_dir = os.path.join("data", "save") corpus_name = "cornell movie-dialogs corpus" # Configure models model_name = 'cb_model' attn_model = 'dot' #attn_model = 'general' #attn_model = 'concat' hidden_size = 500 encoder_n_layers = 2 decoder_n_layers = 2 dropout = 0.1 batch_size = 64 # If you're loading your own model # Set checkpoint to load from checkpoint_iter = 4000 # loadFilename = os.path.join(save_dir, model_name, corpus_name, # '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size), # '{}_checkpoint.tar'.format(checkpoint_iter)) # If you're loading the hosted model loadFilename = '4000_checkpoint.tar' # Load model # Force CPU device options (to match tensors in this tutorial) checkpoint = torch.load(loadFilename, map_location=torch.device('cpu')) encoder_sd = checkpoint['en'] decoder_sd = checkpoint['de'] encoder_optimizer_sd = checkpoint['en_opt'] decoder_optimizer_sd = checkpoint['de_opt'] embedding_sd = checkpoint['embedding'] voc = Voc(corpus_name) voc.__dict__ = checkpoint['voc_dict'] print('Building encoder and decoder ...') # Initialize word embeddings embedding = nn.Embedding(voc.num_words, hidden_size) embedding.load_state_dict(embedding_sd) # Initialize encoder & decoder models encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout) decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout) # Load trained model params encoder.load_state_dict(encoder_sd) decoder.load_state_dict(decoder_sd) # Use appropriate device encoder = encoder.to(device) decoder = decoder.to(device) # Set dropout layers to eval mode encoder.eval() decoder.eval() print('Models built and ready to go!') ###################################################################### # Convert Model to Torch Script # ----------------------------- # # Encoder # ~~~~~~~ # # As previously mentioned, to convert the encoder model to Torch Script, # we use **tracing**. Tracing any module requires running an example input # through the model’s ``forward`` method and trace the computational graph # that the data encounters. The encoder model takes an input sequence and # a corresponding lengths tensor. Therefore, we create an example input # sequence tensor ``test_seq``, which is of appropriate size (MAX_LENGTH, # 1), contains numbers in the appropriate range # :math:`[0, voc.num\_words)`, and is of the appropriate type (int64). We # also create a ``test_seq_length`` scalar which realistically contains # the value corresponding to how many words are in the ``test_seq``. The # next step is to use the ``torch.jit.trace`` function to trace the model. # Notice that the first argument we pass is the module that we want to # trace, and the second is a tuple of arguments to the module’s # ``forward`` method. # # Decoder # ~~~~~~~ # # We perform the same process for tracing the decoder as we did for the # encoder. Notice that we call forward on a set of random inputs to the # traced_encoder to get the output that we need for the decoder. This is # not required, as we could also simply manufacture a tensor of the # correct shape, type, and value range. This method is possible because in # our case we do not have any constraints on the values of the tensors # because we do not have any operations that could fault on out-of-range # inputs. # # GreedySearchDecoder # ~~~~~~~~~~~~~~~~~~~ # # Recall that we scripted our searcher module due to the presence of # data-dependent control flow. 
In the case of scripting, we do the conversion work up front by adding the decorator and making sure the implementation complies with scripting rules. We initialize the scripted searcher the same way that we would initialize an un-scripted variant.

### Convert encoder model
# Create artificial inputs
test_seq = torch.LongTensor(MAX_LENGTH, 1).random_(0, voc.num_words)
test_seq_length = torch.LongTensor([test_seq.size()[0]])
# Trace the model
traced_encoder = torch.jit.trace(encoder, (test_seq, test_seq_length))

### Convert decoder model
# Create and generate artificial inputs
#######################################################
# The following line raises the error in the PyCharm console!
test_encoder_outputs, test_encoder_hidden = traced_encoder(test_seq, test_seq_length)
#######################################################
test_decoder_hidden = test_encoder_hidden[:decoder.n_layers]
test_decoder_input = torch.LongTensor(1, 1).random_(0, voc.num_words)
# Trace the model
traced_decoder = torch.jit.trace(decoder, (test_decoder_input, test_decoder_hidden, test_encoder_outputs))

### Initialize searcher module
scripted_searcher = GreedySearchDecoder(traced_encoder, traced_decoder, decoder.n_layers)


######################################################################
# Print Graphs
# ------------
#
# Now that our models are in Torch Script form, we can print the graphs of
# each to ensure that we captured the computational graph appropriately.
# Since our ``scripted_searcher`` contains our ``traced_encoder`` and
# ``traced_decoder``, these graphs will print inline.
#

print('scripted_searcher graph:\n', scripted_searcher.graph)


######################################################################
# Run Evaluation
# --------------
#
# Finally, we will run evaluation of the chatbot model using the Torch
# Script models. If converted correctly, the models will behave exactly as
# they would in their eager-mode representation.
#
# By default, we evaluate a few common query sentences. If you want to
# chat with the bot yourself, uncomment the ``evaluateInput`` line and
# give it a spin.
#

# Evaluate examples
sentences = ["hello", "what's up?", "who are you?", "where am I?", "where are you from?"]
for s in sentences:
    evaluateExample(s, traced_encoder, traced_decoder, scripted_searcher, voc)

# Evaluate your input
#evaluateInput(traced_encoder, traced_decoder, scripted_searcher, voc)


######################################################################
# Save Model
# ----------
#
# Now that we have successfully converted our model to Torch Script, we
# will serialize it for use in a non-Python deployment environment. To do
# this, we can simply save our ``scripted_searcher`` module, as this is
# the user-facing interface for running inference against the chatbot
# model. When saving a Script module, use script_module.save(PATH) instead
# of torch.save(model, PATH).
#

scripted_searcher.save("scripted_chatbot.pth")

Here is the error (the traceback was posted as two screenshots of the PyCharm console, not reproduced here):

[screenshot: image.png, 879×651, 77.7 KB]
[screenshot: image.png, 817×318, 35.8 KB]

I’m new to PyTorch, and I have googled but not found a similar case. Can anyone help me? Thanks!
st98165
Are you sure you are using 0.4.1? torch.jit was added in the preview of 1.0 or at least after 0.4.1 as far as I know. I’ve tried the tutorial on my machine and it’s working. Could you install the 1.0 preview and try it again? You will find the install instructions here.
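A quick sanity check of which build you actually end up with (nothing specific to this tutorial):

import torch

print(torch.__version__)       # should report a 1.0 preview / nightly build
print(hasattr(torch, 'jit'))   # torch.jit is what the tutorial relies on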
st98166
Oh, I see! When I switch to the 1.0 preview, it really works. Thank you a lot. I’m fairly new to PyTorch. I found that PyTorch 0.4.1 is the latest stable version, and as far as I know, people prefer to choose the stable one to develop or do research. But as above, some tutorial code is written for the 1.0 preview. So which one should I choose to learn PyTorch, stable 0.4.1 or the 1.0 preview? Can you give me some advice? Thank you!
st98167
Well, I personally would have a look at the 1.0 preview, as there are some awesome new features. Especially if you are learning PyTorch and aren’t writing production code, I would play around with the nightly build. It’s also really easy to install, so you don’t even have to build from source. That’s just my biased opinion, and I have to admit I use a lot of conda environments for the stable release, the nightly build, and several builds from source.