st82368 | Does anyone build robots with deep learning powered by PyTorch?
I have a conflict situation now:
libtorch is built with gcc 4.9, so it uses the old ABI, which cannot link against programs built with the new ABI.
ROS does not support the old ABI.
To make libtorch link, I have to disable CXX11_ABI,
but once I disable it, ROS cannot link to its libs.
If you don’t believe it, add this line to your ROS package and your ROS package will fail to build and link:
add_compile_options(-std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0)
Could any expert help me out? |
st82369 | Hi Jin,
I think it would be a better idea to keep these topics together, as they are apparently all related to libtorch and ROS:
topic2
topic3
Otherwise different users might answer the same aspect multiple times (or try to debug it).
Would you mind collecting all this information in this topic? |
st82370 | We now have new ABI binaries for libtorch. They can be found on
http://pytorch.org, or at https://github.com/pytorch/pytorch/issues/17492#issuecomment-524692441. |
st82371 | Hi all,
I would like to report a build issue when linking an application that uses both libtorch and boost.
tensor_test.cpp:
#include <boost/program_options/options_description.hpp>
int main() {
boost::program_options::options_description d("");
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(torch_example)
set(Torch_DIR "${CMAKE_SOURCE_DIR}/libtorch/share/cmake/Torch")
set(Caffe2_DIR "${CMAKE_SOURCE_DIR}/libtorch/share/cmake/Caffe2")
find_package(Torch REQUIRED)
message(STATUS ${TORCH_LIBRARIES})
find_package(Boost COMPONENTS program_options REQUIRED)
add_executable(main tensor_test.cpp)
target_link_libraries(main Boost::program_options ${TORCH_LIBRARIES})
set_property(TARGET main PROPERTY CXX_STANDARD 11)
This is the link error I get:
Scanning dependencies of target main
[ 50%] Building CXX object CMakeFiles/main.dir/tensor_test.cpp.o
[100%] Linking CXX executable main
CMakeFiles/main.dir/tensor_test.cpp.o: In function `main':
tensor_test.cpp:(.text+0x6b): undefined reference to `boost::program_options::options_description::options_description(std::string const&, unsigned int, unsigned int)'
When removing the ${TORCH_LIBRARIES} variable from target_link_libraries linking works correctly.
The same issue arises with other shared libraries.
I guess this is a similar issue to the one reported here:
Compile libtorch with pcl cause segmentation fault
I am trying to compile libtorch with the PCL library. Both libraries work fine when compiled separately, but a segmentation fault occurs when compiling them together.
cmake file:
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(custom_ops)
SET(CMAKE_BUILD_TYPE "Debug")
list(APPEND CMAKE_PREFIX_PATH ${CMAKE_CURRENT_SOURCE_DIR}/libtorch)
find_package(Torch REQUIRED)
find_package(PCL 1.8 REQUIRED COMPONENTS common io visualization)
include_directories(
${PCL_INCLUDE_DIRS}
)
…
I am on Ubuntu18.04 and I am using libtorch 1.0.0.dev20181022, boost 1.65.1 |
st82372 | Hi,
I just used your two files and that compiles fine for me on Ubuntu 14.04.
Maybe try including them by two different calls?
target_link_libraries(main Boost::program_options)
target_link_libraries(main ${TORCH_LIBRARIES}) |
st82373 | Hi,
thank you for trying it out.
The problem seems related to the Ubuntu version.
I confirm that the build works with 14.04, but it fails on Ubuntu 16.04 and 18.04.
In particular, I moved the object file from the working 14.04 machine to the 16.04 machine and the link stage failed.
I have used g++ 5.5.0 20171010 on both.
(The suggestion of duplicating the target_link_libraries function does not make a difference) |
st82374 | Just for the record,
I have worked around the problem by compiling libtorch locally on my machine from
source. |
st82375 | Does anybody know how to fix this problem? I’m finding the same thing happens with another proprietary library. |
st82376 | Could this be caused by the nightly build coming with an absolute file path to /usr/local/cuda/lib64/libculibos.a in Caffe2Targets.cmake? |
st82377 | I think this can be related to the C++11 ABI as well; see this comment I posted: Issues linking with libtorch (C++11 ABI?). I’m facing similar issues with other libraries. The problem seems to be that libtorch adds the definition _GLIBCXX_USE_CXX11_ABI=0 to the compilation, which forces GCC to use the old (pre-C++11) ABI. This is incompatible with your Boost dependency, hence the error you’re getting, because the std::string layout changed between the old and new ABI. |
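For a quick check of which ABI a given PyTorch/libtorch binary was built with, a minimal sketch (assuming torch.compiled_with_cxx11_abi() is available in your version):

import torch

# True means the binary was built with -D_GLIBCXX_USE_CXX11_ABI=1 (new ABI);
# False means the old pre-C++11 ABI, which clashes with Boost/ROS/OpenCV builds
# that use the new ABI.
print(torch.compiled_with_cxx11_abi())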
st82378 | Hi, there is a similar problem…
When libtorch is built together with OpenCV:
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(simnet)
find_package(Torch REQUIRED)
find_package(OpenCV 4 REQUIRED)
message(STATUS "Pytorch status:")
message(STATUS " libraries: ${TORCH_LIBRARIES}")
message(STATUS "OpenCV library status:")
message(STATUS " version: ${OpenCV_VERSION}")
message(STATUS " libraries: ${OpenCV_LIBS}")
message(STATUS " include path: ${OpenCV_INCLUDE_DIRS}")
add_executable(simnet main.cpp)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
target_link_libraries(simnet "${TORCH_LIBRARIES}")
target_link_libraries(simnet "${OpenCV_LIBS}")
This gives rise to ‘undefined reference to `cv::imread(std::string const&, int)’’.
When I removed ‘target_link_libraries(simnet “${TORCH_LIBRARIES}”)’, the build succeeded. |
st82379 | Hi, I have the same problem with OpenCV and libtorch.
I’ve also tried this piece of code: https://github.com/tobiascz/MNIST_Pytorch_python_and_capi
But it doesn’t work either…
Has anyone found the solution?
Thanks |
st82380 | I was able to fix it by building from source instead of trying to use the nightly builds. |
st82381 | Thanks for the reply, KevNull.
I’ve just built PyTorch from source and am now looking into how to link it with my code. |
st82382 | We now have new ABI binaries for libtorch. They can be found on
http://pytorch.org, or at https://github.com/pytorch/pytorch/issues/17492#issuecomment-524692441. |
st82383 | I’m at a loss. I just upgraded from pytorch 1.0 to pytorch 1.2 and see huge slowdowns in training (50%-70%). What could be causing this discrepancy? I’m building my docker container off of official nvidia/cuda:10.0 image and haven’t changed anything except upgrading pytorch.
The command for upgrading was:
conda install -c pytorch pytorch=1.2.0=py3.7_cuda10.0.130_cudnn7.6.2_0 torchvision
What am I missing? |
st82384 | Hmm… not easily, unfortunately. I’m using a large custom dataset for segmentation with SGD and BCE + Dice so nothing too crazy there. The model I’m using is here 3 I’ll try another model just to see if that’s where the issue lies. |
st82385 | Confirming that that particular model is training much slower in pytorch 1.2. Any ideas what specifically in it could be causing the slowdown? At first glance there doesn’t seem to be anything incredibly different about it. |
st82386 | Could you try to profile specific parts of the model and try to isolate a submodule?
I assume we could use some random dummy data for profiling, if the slowdown is created in the model? |
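For illustration, a minimal per-submodule timing sketch with dummy data (the backbone name and input shape below are assumptions, not taken from the original model):

import time
import torch

def time_module(module, x, n_iters=50):
    # Warm up, then time forward passes; synchronize so GPU kernels are included.
    for _ in range(5):
        module(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iters):
        module(x)
    torch.cuda.synchronize()
    return (time.time() - start) / n_iters

# Example usage with an assumed submodule and dummy input:
# backbone = model.features.cuda().eval()
# dummy = torch.randn(8, 3, 224, 224, device='cuda')
# print(time_module(backbone, dummy))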
st82387 | Ok, I think I narrowed it down to the basic ResNeXt101_64x4d class, since using a different backbone (e.g. densenet) does not produce the slowdown. I’m guessing the slowdown is in the features. Still bewildered as to why that would cause such a drastic change in performance though. |
st82388 | Hi, why do we get negative weights with any initialization type, default or even Xavier? And how bad would it be, in terms of performance, to initialize the weights from a distribution between 0 and 1? |
st82389 | I am going to use 2 GPUs to do data parallel training, and the model has batch normalization. I am wondering how pytorch handle BN with 2 GPUs. Does each GPU estimate the mean and variance separately? Suppose at test time, I will only use one GPU, then which mean and variance will pytorch use? |
st82390 | According to the PyTorch documentation, batch norm is performed over the mini-batch, i.e. per GPU. |
st82391 | See https://pytorch.org/docs/stable/nn.html#torch.nn.SyncBatchNorm; with DistributedDataParallel and SyncBatchNorm, batch norm statistics can be computed across multiple GPUs. |
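A minimal sketch of converting an existing model’s BatchNorm layers so the statistics are synchronized across processes (assuming the model is then wrapped in DistributedDataParallel; the ResNet here is just a placeholder):

import torch
import torchvision

model = torchvision.models.resnet50()
# Replace every BatchNorm layer with SyncBatchNorm so mean/var are computed
# across all processes in the DDP group instead of per GPU.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
# model = torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[local_rank])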
st82392 | I’m having a model which I want to run in parallel on multiple gpus. The code works well, but I’m having a problem. I’m doing something that requires an input and an adjacency matrix (which is static). I’ve tried to save the adjacency matrix under the model but it only saves it on the first model only. I’ve tried to pass it as a parameter in the forward function but it splits it in half. Any ideas how I could do this? |
st82393 | I’m using a custom Dataset where the path to the images is created from a Pandas DataFrame. Out of curiosity I added a print(idx) to the __getitem__ function and I noticed that it’s called twice (it prints two different indices) if I use 1 worker. If I use multiple workers, it’s called even more times. The batch size is 1, though.
Am I missing something? Shouldn’t I get just one image? Moreover, it returns just one image, independently of the number of workers (as it should be). |
st82394 | Solved by ptrblck in post #4
Each worker will create a batch and call into your Dataset's __getitem__.
For num_workers=0, the main thread will be used to create the batch. For num_workers=1 you will use another additional process to fetch the next batch. |
st82395 | It’s rather difficult to understand what it does without having the Pandas DataFrame (which I cannot share, I guess). But here’s the class:
class Data(Dataset):
def __init__(self, mode, df, img_dir, site, transform):
self.mode = mode
self.df = df
self.img_dir = img_dir
self.site = site
self.transform = transform
def path_channel(self, channel, idx):
experiment = self.df.loc[self.df.index[idx], 'experiment']
plate = self.df.loc[self.df.index[idx], 'plate']
well = self.df.loc[self.df.index[idx], 'well']
path = os.path.join(self.img_dir, experiment, f'Plate{plate}',
f'{well}_s{self.site}_w{channel}.png')
return path
def __getitem__(self, idx):
print(idx) # With 1 process and batch size 1, printed twice (different items)
# Iterate over channels of one image (from file)
all_channels = [np.array(Image.open(self.path_channel(ch, idx)),
dtype=np.float32) for ch in range(1, 7)]
img = np.stack([ch for ch in all_channels], axis=2)
if self.mode == 'train':
label = self.df.loc[self.df.index[idx], 'label'].astype('int32')
return img, label
elif self.mode == 'test':
return img
def __len__(self):
return self.df.shape[0] |
st82396 | Each worker will create a batch and call into your Dataset's __getitem__.
For num_workers=0, the main thread will be used to create the batch. For num_workers=1 you will use another additional process to fetch the next batch. |
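To see this behaviour directly, a small sketch with a toy Dataset (the exact number of prefetched indices you see printed depends on the DataLoader version and its prefetching):

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        print(f'__getitem__ called with idx={idx}')
        return torch.tensor(idx)

if __name__ == '__main__':
    loader = DataLoader(ToyDataset(), batch_size=1, num_workers=1, shuffle=False)
    for batch in loader:
        # The worker prefetches batches ahead of time, so __getitem__ can be
        # called for the next index before the current batch is consumed here.
        pass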
st82397 | Hi, I am new to PyTorch. I was wondering why we should use the same epoch loop for train and test
for epoch in range(1, args.epochs + 1):
model.train_(train_loader, epoch)
model.test_(test_loader, epoch)
and how is that different from using two loops? Thank you.
for epoch in range(1, args.epochs + 1):
model.train_(train_loader, epoch)
for epoch in range(1, args.epochs + 1):
model.test_(test_loader, epoch) |
st82398 | Solved by mmisiur in post #4
In validation phase we care mostly about the general performance of the model rather than the loss. During training phase tracking training loss is very useful to see how the model actually process the data, but what is most important loss is strictly tied to backpropagation and updating weights, th… |
st82399 | Usually you would like to validate your model after each training epoch to get a signal about your model’s ability to generalize, i.e. how high the validation accuracy is (or how low the validation loss gets). Using these validation metrics you could apply e.g. early stopping in order to stop the training once your model starts to overfit.
If you train for some epochs and try to validate (or test) your model afterwards, you will just get the final validation metrics. Running the validation (or test) for several epochs sequentially doesn’t make sense, as the metrics won’t change and you’ll end up with the same values. |
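A minimal sketch of that pattern with patience-based early stopping (the train_/test_ methods follow the snippet above, and the assumption that test_ returns a validation accuracy is mine):

import torch

def fit(model, train_loader, val_loader, num_epochs, patience=5):
    best_acc, bad_epochs = 0.0, 0
    for epoch in range(1, num_epochs + 1):
        model.train_(train_loader, epoch)           # training pass
        val_acc = model.test_(val_loader, epoch)    # assumed to return accuracy
        if val_acc > best_acc:
            best_acc, bad_epochs = val_acc, 0
            torch.save(model.state_dict(), 'best_model.pt')  # keep best checkpoint
        else:
            bad_epochs += 1
            if bad_epochs >= patience:              # stop once the model starts to overfit
                print(f'Early stopping at epoch {epoch}')
                break
    return best_acc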
st82400 | Thank you for your reply, but for the validation why do we focus only on the best accuracy score to save the model? What about the loss? Also, some implementations use
for data, target in test_loader:
instead of
for batch_idx, (data, target) in enumerate(test_loader):
why ?
Thank you |
st82401 | In the validation phase we care mostly about the general performance of the model rather than the loss. During the training phase, tracking the training loss is very useful to see how the model actually processes the data, but most importantly, the loss is strictly tied to backpropagation and updating the weights, so calculating the loss is obligatory for training, but not for validation.
But sometimes, if you want to know better how the model behaves on the validation data, it is also useful to keep track of the test loss.
In the code you provided the only difference is that in
for data, target in test_loader:
you simply load batch (data and target labels) and don’t keep track of the index of the batch, but in
for batch_idx, (data, target) in enumerate(test_loader):
you load the batch but also the batch index, which is sometimes useful for keeping track of progress or whatnot. You can check why the author chose to keep track of batch indices simply by looking at where they are used later in the code.
Hope I helped |
st82402 | I’m using DataLoader to read from a custom Dataset object based on numpy memmap.
As long as I read the data without shuffling, everything works fine, but as soon as I set shuffle=True, the runtime crashes.
I tried implementing the shuffling mechanism in the Dataset class by using a permutation vector and setting shuffle=False in the DataLoader but the issue persists.
I also noticed that, when shuffling, the __getitem__() function of the Dataset object is called n times, where n is the batch_size.
Here’s the Dataset code:
class CustomDataset(Dataset):
num_pattern = 60112
base_folder = 'dataset'
def __init__(self, root):
self.root = os.path.expanduser(root)
self.output_ = np.memmap('{0}/output'.format(root), 'int64', 'r', shape=(60112, 62))
self.out_len = np.memmap('{0}/output-lengths'.format(root), 'int32', 'r', shape=(60112))
self.input_ = np.memmap('{0}/input'.format(root), 'float32', 'r', shape=(60112, 512, 1024))
self.in_len = np.memmap('{0}/input-lengths'.format(root), 'int32', 'r', shape=(60112))
def __len__(self):
return self.num_pattern
def __getitem__(self, index):
return (self.in_len[index], torch.from_numpy(self.input_[index])), (self.out_len[index], torch.from_numpy(self.output_[index]))
if __name__ == '__main__':
dataset = CustomDataset(root='/content/')
data_loader = data.DataLoader(dataset, batch_size=32, shuffle=False, num_workers=1)
for i, data in enumerate(data_loader, 0):
# training
The error stack is the following:
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _try_get_batch(self, timeout)
510 try:
--> 511 data = self.data_queue.get(timeout=timeout)
512 return (True, data)
9 frames
/usr/lib/python3.6/multiprocessing/queues.py in get(self, block, timeout)
103 timeout = deadline - time.monotonic()
--> 104 if not self._poll(timeout):
105 raise Empty
/usr/lib/python3.6/multiprocessing/connection.py in poll(self, timeout)
256 self._check_readable()
--> 257 return self._poll(timeout)
258
/usr/lib/python3.6/multiprocessing/connection.py in _poll(self, timeout)
413 def _poll(self, timeout):
--> 414 r = wait([self], timeout)
415 return bool(r)
/usr/lib/python3.6/multiprocessing/connection.py in wait(object_list, timeout)
910 while True:
--> 911 ready = selector.select(timeout)
912 if ready:
/usr/lib/python3.6/selectors.py in select(self, timeout)
375 try:
--> 376 fd_event_list = self._poll.poll(timeout)
377 except InterruptedError:
/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/signal_handling.py in handler(signum, frame)
62 # Python can still get and update the process status successfully.
---> 63 _error_if_any_worker_fails()
64 if previous_handler is not None:
RuntimeError: DataLoader worker (pid 3978) is killed by signal: Bus error.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-8-b407a8532808> in <module>()
5 data_loader = data.DataLoader(dataset, batch_size=4, shuffle=True, num_workers=1)
6
----> 7 for i, data in enumerate(data_loader, 0):
8 print(i)
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
574 while True:
575 assert (not self.shutdown and self.batches_outstanding > 0)
--> 576 idx, batch = self._get_batch()
577 self.batches_outstanding -= 1
578 if idx != self.rcvd_idx:
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _get_batch(self)
551 else:
552 while True:
--> 553 success, data = self._try_get_batch()
554 if success:
555 return data
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _try_get_batch(self, timeout)
517 if not all(w.is_alive() for w in self.workers):
518 pids_str = ', '.join(str(w.pid) for w in self.workers if not w.is_alive())
--> 519 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
520 if isinstance(e, queue.Empty):
521 return (False, None)
RuntimeError: DataLoader worker (pid(s) 3978) exited unexpectedly
I’m running it on Colab though I don’t think it could be the problem. |
st82403 | I know it might be intuitive to others, but I get hugely confused and frustrated when it comes to shaping data for convolution, either 1D or 2D. The documentation makes it look simple, yet it always gives errors because of the kernel size or input shape. I have been trying to understand the data shaping from the link [1]. Basically, I am attempting to use Conv1d in RL; the Conv1d should accept data from 12 sensors over 25 timesteps.
The data shape is (25, 12)
I am attempting to use the below model
class DQN_Conv1d(nn.Module):
def __init__(self, input_shape, n_actions):
super(DQN_Conv1d, self).__init__()
self.conv = nn.Sequential(
nn.Conv1d(input_shape[0], 32, kernel_size=4, stride=4),
nn.ReLU(),
nn.Conv1d(32, 64, kernel_size=4, stride=2),
nn.ReLU(),
nn.Conv1d(64, 64, kernel_size=3, stride=1),
nn.ReLU(),
nn.Linear(64, 512),
nn.ReLU(),
nn.Linear(512, n_actions)
)
def forward(self, x):
return self.conv(x)
but I get this error:
RuntimeError: Calculated padded input size per channel: (1 x 3). Kernel size: (1 x 4). Kernel size can’t be greater than actual input size at c:\a\w\1\s\windows\pytorch\aten\src\thnn\generic/SpatialConvolutionMM.c:50
How should I properly shape the data of 12 sensors and 25 data points for a 1D convolution in PyTorch?
Thanks in advance
[1] https://blog.goodaudience.com/introduction-to-1d-convolutional-neural-networks-in-keras-for-time-sequences-3a7ff801a2cf |
st82404 | Hi,
You can check the shape of the input that causes the error (read the error message carefully: kernel size == 4 but input size == 3, so the input size is smaller than the kernel size). You can see it simply with print(your_input). In other words, the input needs more elements, or the kernel size needs to be smaller.
Also, you can put the word “python” right after the opening three backticks (```) when inserting your code; this makes the Python code readable with its indentation preserved. |
st82405 | Hi,
i added the below print statements to the main logic
print(env.observation_space.shape)
print(env.action_space.n)
net = dqn_model.DQN_Conv1d(env.observation_space.shape, env.action_space.n).to(device)
print(net)
And got the below in the terminal
(25, 12)
5
DQN_Conv1d(
(conv): Sequential(
(0): Conv1d(25, 32, kernel_size=(4,), stride=(4,))
(1): ReLU()
(2): Conv1d(32, 64, kernel_size=(4,), stride=(2,))
(3): ReLU()
(4): Conv1d(64, 64, kernel_size=(3,), stride=(1,))
(5): ReLU()
(6): Linear(in_features=64, out_features=512, bias=True)
(7): ReLU()
(8): Linear(in_features=512, out_features=5, bias=True)
)
)
before I get the error:
RuntimeError: Calculated padded input size per channel: (1 x 3). Kernel size: (1 x 4). Kernel size can’t be greater than actual input size at c:\a\w\1\s\windows\pytorch\aten\src\thnn\generic/SpatialConvolutionMM.c:50 |
st82406 | Assume your input shape is [N, 25, 12]; after the first Conv1d it becomes [N, 32, 3], and a length of 3 is too short for the next Conv1d with kernel_size=(4,).
Check https://pytorch.org/docs/stable/nn.html#torch.nn.Conv1d for the output shape of Conv1d and try padding or changing the stride. But I think a length of 12 is just not enough. |
st82407 | Hi @Ramzy_Karam -san,
The third print shows the inside of the model. You can see the kernel size of each convolution. Which ones take kernel size == 4? Yes, the first and second convolutions. I do not know whether observation_space is the input for your model, but let us assume so; then you can calculate the output size as follows:
"output size" = ("input size" - "kernel size")/"stride factor" + 1
So the first output size is (12 - 4)/4 + 1 = 3; thus at the second convolution your model has a 3-element input but the kernel size is 4. That matches the error message, right? So your choices are: adjust the input size for the first convolution, adjust the first kernel size, and/or adjust the stride factor. |
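A quick way to verify this is to push a dummy tensor through the layers and print the shapes; a sketch using the sizes from this thread:

import torch
import torch.nn as nn

x = torch.randn(1, 25, 12)   # (batch, in_channels=25, length=12), as in the posted model
conv1 = nn.Conv1d(25, 32, kernel_size=4, stride=4)
out1 = conv1(x)
print(out1.shape)            # torch.Size([1, 32, 3]) since (12 - 4) // 4 + 1 = 3

conv2 = nn.Conv1d(32, 64, kernel_size=4, stride=2)
# conv2(out1) would raise the error above: length 3 is smaller than kernel_size 4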
st82408 | Thanks a lot, but there are a few confusing points that I hope you can clarify.
Should “in_channels” be an int representing the number of signals, in my case 12?
Should the shape of the input be (1, number of data points, number of signals), in my case (1, 25, 12)? And does the 1 represent the batch number? |
st82409 | Firstly, you take the first element of input_shape, and the first convolution uses input_shape[0] (here 25) as its number of input channels. I do not know how you set the input for your model; it depends on your design, actually. As you mentioned in your first post, 25 is the number of time steps you want. If you did set a batch size in your code, then the input might be batched.
I do not know what the “in_channels” in your comment refers to. However, in general, a channel means one of the input planes operated on by the convolution.
I recommend that you check your model’s input first. Did you set a batch size or not? If you did, change the batch size from 25 to another number such as 3. Then check the input shape again with print: if it shows 3, then the original 25 was the batch size; if not, it is probably something else. |
st82410 | Thanks a lot for your support; it seems I messed the whole thing up.
First, I didn’t set a batch size anywhere.
Second, I took the model from a book I was following. I have now adjusted the strides to 1, which gets past that error, but I get the error below in the Linear layer:
RuntimeError: size mismatch, m1: [64 x 4], m2: [64 x 512] at c:\a\w\1\s\windows\pytorch\aten\src\th\generic/THTensorMath.cpp:940
The 12 represents the number of features, and I have more features to add. The 25 represents the number of time steps in a time series, and I can also add more.
For the model, I just used one from a book I was following on Reinforcement Learning.
So I am still confused: for a batch size of 1, should the input shape be the number of features or the timesteps? As far as I understand from the blog I linked earlier, it’s the timesteps.
Also, where should I set the batch size? In the shape of the input? |
st82411 | Ok,
First, you must check which tensor is m1 and which is m2, just to identify the objects.
Second, you must check how the matrix operation works.
Third, you must check the shapes of the tensors (m1 and m2) that are the operands of the fully-connected layer.
Through these checks you can find out where the misunderstanding is. I do not recommend skipping this three-step check; that creates more confusion. STEP by STEP is the SHORTEST PATH. |
st82412 | Thanks a lot for your help; I was able to do the first step but got confused by the other two.
Eventually, following up with this URL [1], I rewrote the model, though the model is still somewhat arbitrary.
But is this the right way to use 1D convolution for 12 channels (sensors) and 25 data points (time steps)?
The forward function accepts the shape below:
torch.Size([1, 25, 12])
The model:
class DQN_Conv1d(nn.Module):
def __init__(self, input_shape, n_actions):
super(DQN_Conv1d, self).__init__()
self.conv1 = nn.Conv1d(input_shape[0], 32, kernel_size=4, stride=1)
self.conv2 = nn.Conv1d(32, 64, kernel_size=4, stride=1)
self.conv3 = nn.Conv1d(64, 64, kernel_size=3, stride=1)
self.conv_drop = nn.Dropout(0.2)
self.fc1 = nn.Linear(256, 512)
self.fc2 = nn.Linear(512, n_actions)
def forward(self, x):
print(x)
print(x.shape)
x = F.relu(F.max_pool1d(self.conv1(x), 1))
x = F.relu(F.max_pool1d(self.conv2(x), 1))
x = F.relu(F.max_pool1d(self.conv_drop(self.conv3(x)), 1))
x = x.view(-1, 256)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = F.relu(self.fc2(x))
return F.log_softmax(x)
[1] Inferring shape via flatten operator 6 |
st82413 | Hi, @Ramzy_Karam -san,
Are you still on the issue of
RuntimeError: size mismatch, m1: [64 x 4], m2: [64 x 512] at c:\a\w\1\s\windows\pytorch\aten\src\th\generic/THTensorMath.cpp:940
If so, then you must understand the matrix operation.
The matrix-matrix multiplication used in a fully-connected layer can be written as follows:
for i in range(I):
    for j in range(J):
        c[i][j] = 0.
        for k in range(K):
            c[i][j] += a[i][k]*b[k][j]
You can see the relationship of the indices on each matrix, and you can compare it with the shapes of m1 and m2 in the error message. |
st82414 | Oh thanks, I got past all those issues using the model I wrote in the previous comment.
Now I am adjusting my algorithm for the Reinforcement Learning part, but I wanted to validate with you that I am doing something that makes sense. Is this the right way to use 1D convolution for 12 channels (sensors) and 25 data points (time steps)?
I am not asking about data cleaning, hyperparameters or model complexity; I am asking whether the right data shape for this 1D convolution problem, as fed to the forward function, is (batches, data points, channels) or (batches, channels, data points). |
st82415 | Ramzy_Karam:
batches, channels, data points
You can check these three value positions by changing their order, like a shuffle, when passing the argument.
If an error occurs, then the position is incorrect, right? (The three values should be different so the problem becomes clear.) |
st82416 | Shuffling them still works, but I want to be sure the data is aligned correctly:
if the convolution works by running a kernel over the timesteps to “summarize” them as feature extraction, am I giving it the data the way it should receive it? |
st82417 | No, what I mean is to compare and check the order of
batches, data points, channels
and
batches, channels, data points
To find which is correct, check which one produces an error; the one without an error is the one you want. |
st82418 | Hi,
I did training with:
for epoch in range(EPOCH):
for x, t in dataloader_train:
dataloader_train provides the input x as a mini-batch of images and t as a scalar label. With the code above, I think the same (x, t) sequence is loaded EPOCH times, i.e. the same sequence is repeated EPOCH times. If we used random sampling for each “epoch” iteration it would be OK, I think, but the code above does not seem appropriate.
Is my thinking correct? And if so, must I rewrite it as follows?
for epoch in range(EPOCH):
dataloader_train = load_randomly()
for x, t in dataloader_train: |
st82419 | for x, t in dataloader_train:
# this always gives you fresh x_batches and t_batches
and you can add the for loop on top of it, if you want to go through the whole training data again and again.
In short,
for epoch in range(EPOCH):
for x, t in dataloader_train:
is perfectly fine !
This is exactly how we train our models |
st82420 | Hi, @n0obcoder -san,
Thank you for your comments. So it implies that “EPOCH*BATCH_SIZE” should be equal to or less than the total number of training samples in the data set, right? |
st82421 | no its not like that. Actually the EPOCH has nothing to do with the BATCH_SIZE.
Let me put it this way…
lets say we have 2560 training examples. We choose the BATCH_SIZE of 256, so when we run
for x, t in dataloader_train:
This loop runs for 10 times (2560 examples/256 batch size)
This is one forward pass ove rall the 2560 training examples, also known as 1 EPOCH
Now if we want do repeart the same thing for 50 times, we set put it in another loop, like…
for epoch in range(50):
for x, t in dataloader_train:
where our EPOCHS = 50
I hope this makes sense
feel free to ask if you stilll have doubts
Kompai ! |
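A tiny numeric check of that relationship with a dummy dataset:

import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.randn(2560, 10), torch.zeros(2560))
loader = DataLoader(dataset, batch_size=256, shuffle=True)
print(len(loader))         # 10 batches per epoch (2560 / 256)

for epoch in range(50):    # 50 epochs, each one full pass over all 2560 examples
    for x, t in loader:
        pass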
st82422 | Hi @n0obcoder -san,
Thank you very much for your advice! Your comments made it clear to me (^ - ^).
I find this quite different from ordinary programming languages, which use sequential execution.
乾杯(kanpai)! |
st82423 | Glad to know that I could help. I am not sure about other programming languages; Python is the only programming language that I know.
乾杯(kanpai)! |
st82424 | I’ve tried this command:
conda install pytorch=0.4.1 cuda80 -c pytorch
But when I use torch it says “CUDA driver version is insufficient for CUDA runtime version”.
Then I tried to download the whl from https://download.pytorch.org/whl/cu80/torch_stable.html, but I cannot open the website. |
st82425 | ccaf2ce57149782689c0:
conda install pytorch=0.4.1 cuda80 -c pytorch
[screenshot: sss.png] |
st82426 | Have a look at the compatible driver versions and make sure you satisfy the minimum driver version. |
st82427 | Thanks!
Actually I have checked it, seems no problem with the driver version.
Other users and applications work well. I just could not install pytorch. |
st82428 | Which driver are you currently using?
nvidia-smi will show you the driver version on top. |
st82429 | Thanks for the information.
The CUDA version given by nvcc won’t be used, as PyTorch binaries ship with their own CUDA and cudnn libs.
Do you necessarily need this old PyTorch version (0.4.1) or could you try to install the latest stable version (1.2.0) with CUDA9.2 or CUDA10? |
st82430 | Thanks for your patience.
You mean “conda install pytorch torchvision cudatoolkit=9.2 -c pytorch”?
I’ve tried it, but when I use torch, it warns me that my driver version is too old.
If I type “conda install pytorch torchvision cudatoolkit=8.0 -c pytorch”, then cudatoolkit 8.0 cannot be found in this channel. |
st82431 | In that case, I would suggest to update the drivers and use the latest version.
Note that you should use CUDA10.0, if you have a Turing GPU (e.g. RTX2080). |
st82432 | Actually I cannot update the driver because the server is managed by someone else. Maybe I can suggest that they update it, but I think it will take quite some time.
Thanks for your advice anyway~ It is so kind of you :) |
st82433 | My target is pooling (max, min, avg, etc.) every sequence into one vector.
For example, given the two sentences “I love Outman. I love Superman, too.”, after encoding, the first sentence’s length is 5 (including punctuation) and the second is 6. I want to pool these two sentences into 2 vectors. And these sentences must be encoded in one pass.
A simple idea is to use the ‘scatter_’ function, but I have no idea how to apply it.
# To simplify this question, I don't use batch.
# suppose we have the sentences embedding, 5+6=11 means the sum of the lengths of the sentences, 20 means the embedding dim.
sentence_embedding = torch.Tensor(11, 20)
# I have the split flag like this:
sentence_split_flag = torch.LongTensor([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
....
# After some oprations(depends on the pooling method), I want to get:
pooled_sentence_embedding = torch.Tensor(2, 20)
And I’m sorry for my poor English; if anything is confusing, I will try my best to explain it. |
st82434 | Hi, if I understand correctly what you want to do, you could do this:
sentence_embedding = torch.Tensor(11, 20)
# We use the equivalent indices instead of flags
# sentence_split_flag = torch.LongTensor([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
# which you can also get with:
# sentence_split_idx = (sentence_split_flag == 1).nonzero().view(-1)
sentence_split_idx = torch.LongTensor([0, 5])
pooled_sentence_embedding = sentence_embedding.index_select(dim=0, index=sentence_split_idx) |
st82435 | Thanks for your reply.
I want to pool all the words of one sentence into one vector, not just take the first word.
And in sentence_split_flag, 1 marks the start of a sentence.
# the sentence segment could be represented like this :
sentence_segment = torch.LongTensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]) |
st82436 | My bad, I misunderstood.
So I’m not sure of the optimal way of doing it.
But maybe doing something like this might help you:
segments_sizes = (5, 6)
pooled_segs = []
for seg in torch.split(sentence_embedding, segments_sizes, dim=0):
pooled_segs.append(pooling_op(seg))
pooled_sentence_embedding = torch.cat(pooled_segs, dim=0) |
st82437 | Yes, it works. But I think there is a more efficient method.
Maybe I could scatter the sentences, whose shape is (11, 20), into (2, 6, 20), where a sentence shorter than 6 is padded. Then pool (2, 6, 20) down to (2, 1, 20), and then reshape to (2, 20).
So I want to know a quick way to get the scatter index from sentence_split_flag or sentence_segment. |
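For mean pooling without an explicit Python loop, one option is index_add_ over the segment-id vector; a minimal sketch based on the sizes in this thread (the choice of mean pooling is just an example):

import torch

sentence_embedding = torch.randn(11, 20)
sentence_segment = torch.LongTensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

num_sentences = int(sentence_segment.max()) + 1
pooled = torch.zeros(num_sentences, sentence_embedding.size(1))
# Sum all word vectors belonging to the same segment id.
pooled.index_add_(0, sentence_segment, sentence_embedding)
# Divide by the number of words per sentence to get the mean.
counts = torch.bincount(sentence_segment).float().unsqueeze(1)
pooled_sentence_embedding = pooled / counts   # shape (2, 20)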
st82438 | Hi,
I face a seemingly unsolvable problem and am looking for any advice here…
In my use case, I have a special model M that processes the input images in the dataloader. The model is quite huge, so it really needs GPU execution for speed.
In this case, I have two solutions:
The straightforward one: use only the main thread of the dataloader. (It works, but it is very slow.)
Run the dataloader with multiple workers and use torch.multiprocess.set_start_method("spawn") to let the child processes acquire CUDA. (15 times slower than 1…)
As execution time is critical in my case, I keep looking for a faster implementation.
I originally believed that 2 should be faster.
But in reality, it runs 15 times slower than running with the main thread only.
It seems the “spawn” backend significantly slows down child process creation, and the dataloader does not reuse the spawned child processes (I can see the GPU memory usage go up and down).
Any suggestion would be a great help!
Sincerely thanks!
P.S.
I also tried M.share_memory() to share the model M among the child processes, but it does not seem to affect the execution speed at all. |
st82439 | Yeah, I would avoid (2). Accessing the GPU from dataloader workers is the path to ruin.
You don’t have to do all your preprocessing in the dataloader. For example, you can do your file load and CPU pre-processing in the data loader, but do the GPU operations afterwards:
for sample_cpu in dataloader:
sample_gpu = preprocess_gpu(sample_cpu)
train(sample_gpu) # or whatever
In general, the way to make GPU operations fast is:
batch operations
avoid CPU-GPU synchronizations
make sure the underlying ops are efficient |
st82440 | Thanks for your reply!
But my use case (I’m trying model-based data augmentation) strictly requires that each sample be processed differently.
In this case, as I can’t process the samples batch-wise, I believe multiprocessing is the last hope for me ;( |
st82441 | I have a tensor defined as one-hot:
label = torch.from_numpy(np.asarray([0, 0, 1, 0, 0, 0 , 0])).long()
I want to randomly move the 1 to another position in the label such that the new tensor does not equal the label. Do we have any function to do this in PyTorch? For example
new_label = torch.from_numpy(np.asarray([0, 1, 0, 0, 0, 0 , 0])).long() |
st82442 | To shuffle one dimension of a tensor you could use torch.randperm(), e.g.:
>>> a = torch.eye(10)
>>> b = a[torch.randperm(10)]
>>> a
tensor([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])
>>> b
tensor([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]])
>>> |
st82443 | andreaskoepf:
b = a[torch.randperm(10)]
Thanks. After 1M iterations, will b ever equal a? I do not want the new tensor to equal the input. |
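One way to guarantee the new one-hot vector differs from the original is to re-sample the position while excluding the current one; a minimal sketch:

import torch

label = torch.tensor([0, 0, 1, 0, 0, 0, 0]).long()

old_pos = int(label.argmax())
# Pick a new position uniformly among the other indices, so new_label != label.
candidates = [i for i in range(label.numel()) if i != old_pos]
new_pos = candidates[torch.randint(len(candidates), (1,)).item()]

new_label = torch.zeros_like(label)
new_label[new_pos] = 1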
st82444 | @andreaskoepf: How does it work for my example?
This is the error I get when I use your code:
/opt/conda/conda-bld/pytorch_1565287025495/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1565287025495/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1565287025495/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1565287025495/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. |
st82445 | I have a (square) pairwise distances matrix, say n x n, and I would like to get rid of the diagonal and hence work with a n x (n - 1) matrix. What’s a fast way of doing that?
One way is a.masked_select(~torch.eye(n, dtype=bool)).view(n, n - 1) but I was curious if there’s a faster approach. |
st82446 | I do not know if it is really faster, but just for the fun of it you could try as an alternative:
a.flatten()[1:].view(n-1, n+1)[:,:-1].reshape(n, n-1) |
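A quick sanity check that this flatten/view trick agrees with the masked_select version (small random matrix assumed):

import torch

n = 5
a = torch.randn(n, n)

via_mask = a.masked_select(~torch.eye(n, dtype=torch.bool)).view(n, n - 1)
via_view = a.flatten()[1:].view(n - 1, n + 1)[:, :-1].reshape(n, n - 1)
print(torch.equal(via_mask, via_view))   # True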
st82447 | Training resnet101 for 19k steps (3 nodes), I got “Unexpected poll revent: 25 on socket: 9: Software caused connection abort”. What can I do to fix this?
thanks |
st82448 | I am facing the same issue with 2 node training on a different model: “Unexpected poll revent: 25 on socket: 90: Software caused connection abort”. I am using the torch.distributed.launch utility.
Anyone able to resolve it? Or have any idea what might be causing it? |
st82449 | Let’s say a convolutional layer takes an input 𝑋 with dimensions 5x100x100 and applies 10 filters 𝐹 of size 5x5x5, thus producing an output 𝑂 of 10 feature maps of size 96x96.
During the backpropagation the layer receives 𝑑𝐸/𝑑𝑂 of shape 10x96x96.
My question is how to compute 𝑑𝐸/𝑑𝐹 ?
According to that article (https://medium.com/@20!7csm1006/forward-and-backpropagation-in-convolutional-neural-network-4dfa96d7b37e),
𝑑𝐸/𝑑𝐹 can be calculated as convolution between 𝑋 and 𝑑𝐸/𝑑𝑂
Unfortunately, the article does not cover a case with multiple filters and multiple input channels.
Since 𝑋 has shape 5x100x100 and 𝑑𝐸/𝑑𝑂 has shape 10X96x96 the depth of 𝑋 equals to 5 and the depth of 𝑑𝐸/𝑑𝑂 equals to 10. So the depth dimension does not match. How to compute convolution in that case ?
Link to the question on Stack Exchange: https://datascience.stackexchange.com/questions/38896/an-error-with-respect-to-filter-weights-in-cnn-during-the-backpropagation?answertab=votes#tab-top
The author posted a solution to this problem as shown in the image. But this implies that the gradient of each filter is the same across its channels, which I could not reproduce with my code.
[image: 125.png]
Is the method wrong or is something wrong with my code?
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import cv2
import matplotlib.pyplot as plt
ref_tensor1=torch.from_numpy(cv2.resize(cv2.imread("./trial_5.jpg",0).astype(np.float32),dsize=(225,225)))
ref_tensor1=ref_tensor1.unsqueeze(0).unsqueeze(0)
print(ref_tensor1.shape)
image1=cv2.imread("./trial_5.jpg").astype(np.float32)
image1=cv2.resize(image1,dsize=(256,256))
image1=np.rollaxis(image1,2)
image2=cv2.imread("./trial_8.jpg").astype(np.float32)
image2=cv2.resize(image2,dsize=(256,256))
image2=np.rollaxis(image2,2)
img_tensor2=torch.from_numpy(image2).unsqueeze(0)
img_tensor1=torch.from_numpy(image1).unsqueeze(0)
img_tensors=torch.cat((img_tensor1,img_tensor2),0)
print(img_tensors.shape)
print("Input_image_shape:",img_tensors.shape)
#print(img_tensors)
class torch_model1(torch.nn.Module):
def __init__(self,ic,oc,ks):
super(torch_model1,self).__init__()
self.conv1 = torch.nn.Conv2d(in_channels=ic,out_channels=oc,kernel_size=ks,stride=1)
def forward(self,x):
x = self.conv1(x)
return (x)
###1,3###
model1=torch_model1(3,3,32)
temp=torch.randn(img_tensors.shape)
op1=model1(img_tensor1)
print(op1.shape)
#assert(op1.shape==ref_tensor1.shape)
loss=torch.abs(op1-ref_tensor1).mean()
print(loss)
print("gradient_shape:",model1.conv1.weight.shape)
print("Before backprop:",model1.conv1.weight.grad)
loss.backward()
print("after backprop:",model1.conv1.weight.grad.shape)
#print("gradients:",model1.conv1.weight.grad)
print(model1.conv1.weight.grad)
##########RESULT(1,3,x,y):changes if seed is not set and the gradient is same for all channels#############
plt.subplot(131)
plt.imshow(model1.conv1.weight.grad[0,0,:,:])
plt.subplot(132)
plt.imshow(model1.conv1.weight.grad[0,1,:,:])
plt.subplot(133)
plt.imshow(model1.conv1.weight.grad[0,2,:,:])
plt.show() |
st82450 | I don’t know if it helps, but I wrote a post on it here: http://soumith.ch/ex/pages/2014/08/07/why-rotate-weights-convolution-gradient/
It goes into the calculations in a bit more detail. |
st82451 | This post shows the calculation of the gradient with respect to the input. I need the calculation of the gradient with respect to the filter weights. |
st82452 | Hi @Srinjay_Sarkar, did you figure this out?
Adding padding, dilation, stride, and channels makes this a very mind twisting exercise! |
st82453 | @Sia_Rezaei, you might want to look at this. It has all the Python implementations for calculating the gradients of the weights, biases and inputs. But using this for a network makes it extremely slow. I used a C++ extension to call the cuDNN backprop, which is much faster. |
st82454 | @Srinjay_Sarkar thanks! Yes, I just found out about grad.py and yes, it is slow!
Can you share how you call the cuDNN backprop? Or point me in the right direction? Thanks! |
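For reference, a minimal sketch that computes dE/dF explicitly with the (slow) reference implementation in torch.nn.grad and checks it against autograd, using the shapes from the original question:

import torch
import torch.nn as nn
import torch.nn.grad   # explicit import of the reference gradient functions

x = torch.randn(1, 5, 100, 100)                      # (N, C_in, H, W)
conv = nn.Conv2d(5, 10, kernel_size=5, bias=False)   # 10 filters of size 5x5x5
out = conv(x)                                        # shape (1, 10, 96, 96)

grad_output = torch.randn_like(out)                  # stands in for dE/dO
out.backward(grad_output)

# Reference computation of dE/dF from the input and dE/dO.
grad_weight = torch.nn.grad.conv2d_weight(
    x, conv.weight.shape, grad_output, stride=1, padding=0)
print(torch.allclose(grad_weight, conv.weight.grad, atol=1e-4))   # True up to tolerance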
st82455 | Sorry for my terrible English…
I ran the code from the 60-minute tutorial, like this:
trainset = torchvision.datasets.CIFAR10(root='./datasets', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
dataiter = iter(trainloader)
And it returns an error.
File “C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py”, line 705, in runfile
execfile(filename, namespace)
File “C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py”, line 102, in execfile
exec(compile(f.read(), filename, ‘exec’), namespace)
File “C:/Users/flow_/Documents/cifarten/cf10.py”, line 36, in
dataiter = iter(trainloader)
File “C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py”, line 451, in iter
return _DataLoaderIter(self)
File “C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py”, line 239, in init
w.start()
File “C:\ProgramData\Anaconda3\lib\multiprocessing\process.py”, line 105, in start
self._popen = self._Popen(self)
File “C:\ProgramData\Anaconda3\lib\multiprocessing\context.py”, line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File “C:\ProgramData\Anaconda3\lib\multiprocessing\context.py”, line 322, in _Popen
return Popen(process_obj)
File “C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py”, line 65, in init
reduction.dump(process_obj, to_child)
File “C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py”, line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe |
st82456 | Solved by ptrblck in post #2
There is an issue regarding multi-processing on Windows machines, since apparently Windows subprocesses will import (i.e. execute) the main module at start, which will result in recursively creating subprocesses.
Try to protect your code in if __name__ == '__main__'.
Also you could check, if this … |
st82457 | There is an issue regarding multi-processing on Windows machines, since apparently Windows subprocesses will import (i.e. execute) the main module at start, which will result in recursively creating subprocesses.
Try to protect your code in if __name__ == '__main__'.
Also you could check, if this is the error by setting num_workers=0 and running it again. |
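A minimal sketch of the guarded layout on Windows (transform details are placeholders):

import torch
import torchvision
import torchvision.transforms as transforms

def main():
    transform = transforms.ToTensor()
    trainset = torchvision.datasets.CIFAR10(root='./datasets', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    images, labels = next(iter(trainloader))

if __name__ == '__main__':
    # On Windows, worker processes re-import this module at start, so everything
    # that spawns workers must live behind this guard.
    main()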
st82458 | I got a similar error on Ubuntu: module 'multiprocessing.util' has no attribute '_flush_std_streams'.
How can I fix it? |
st82459 | The method is defined in the Python lib multiprocessing, and was not introduced in PyTorch.
Could you check if you can update the multiprocessing lib or re-install it? |
st82460 | Thank you for your explanation! The problem seems to disappear if I run the same code (without if __name__ == '__main__') in a Jupyter notebook on Windows. |
st82461 | I am not sure this is a pytorch question or a python question. In
github.com
TropComplique/lda2vec-pytorch/blob/master/utils/training.py
import numpy as np
import torch
from torch.autograd import Variable
import torch.optim as optim
import math
from tqdm import tqdm
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
from .lda2vec_loss import loss, topic_embedding
# negative sampling power
BETA = 0.75
# i add some noise to the gradient
ETA = 0.4
# i believe this helps optimization.
# the idea is taken from here:
# https://arxiv.org/abs/1511.06807
# 'Adding Gradient Noise Improves Learning for Very Deep Networks'
line 127, why is it ok to call model(doc_indices, pivot_words, target_words) without specifying forward() method name? |
st82462 | Solved by SimonW in post #2
python question.
When you call model(...), you are actually calling model.__call__(...). As you can see here, the __call__ method on nn.Module eventually calls forward along with taking care of tracing and hooks. |
st82463 | Python question.
When you call model(...), you are actually calling model.__call__(...). As you can see here, the __call__ method on nn.Module eventually calls forward, along with taking care of tracing and hooks. |
st82464 | I am having the following problem when trying to use TensorBoard in PyTorch 1.2:
from torch.utils.tensorboard import SummaryWriter
Error:
ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above.
I understood that I could use TensorBoard with PyTorch 1.2 without problems!
I am using Anaconda and I have installed version 1.14 of TensorBoard, so I don’t know what the problem is:
tensorboard 1.14.0 py37hf484d3e_0 anaconda
cpuonly 1.0 0 pytorch
pytorch 1.2.0 py3.7_cpu_0 [cpuonly] pytorch
torchvision 0.4.0 py37_cpu [cpuonly] pytorch
I tried “pip uninstall tb-nightly” but it didn’t work for me. Does anyone have the same problem? I come from Keras / TF and I’m starting with PyTorch (I like PyTorch!), so maybe I’m missing something! |
st82465 | I met the same error as you. I uninstalled tensorboard and then installed tensorflow, and finally it worked. Best of luck! |
st82466 | It seems I had a conflict between the TensorBoard on my system (Ubuntu 18.04) and the one I had in Anaconda. I reinstalled everything directly with pip3 (not the Anaconda pip3!) and now it works. Thank you a lot. Problem solved! |
st82467 | Hi, I have a very big model that consists of different modules, and I can’t reduce the mini-batch size. Also, I only have one GPU. So what are my options here? On this forum I found out about gradient accumulation; will it help me? But I would prefer a normal SGD calculation. Is there any way to transfer some tensors and variables to the CPU between modules and, when I want to backpropagate, transfer them back to the GPU one by one, or do the backpropagation on the CPU?
Thanks. |
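For reference, a minimal sketch of gradient accumulation (model, criterion, optimizer and train_loader are assumed to exist); memory stays at the small per-step batch while the optimizer effectively sees a larger one:

accumulation_steps = 4          # effective batch = accumulation_steps * per-step batch size
optimizer.zero_grad()

for i, (data, target) in enumerate(train_loader):
    output = model(data.cuda())
    loss = criterion(output, target.cuda()) / accumulation_steps   # scale so the gradients average
    loss.backward()             # gradients accumulate in .grad across iterations

    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()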