st48968 | Ah I see, so does the PyTorch DataLoader generate a new list of unique indices every time an epoch finishes? Say I have a dataset of 6k unique samples: would PyTorch generate an index list of length 6k to enumerate over (so that no input/ground-truth pair is repeated within an epoch)? |
st48969 | Hmm, I believe the official DataLoader has some logic regarding indexing (like you mentioned); it may also depend on whether you set shuffle=True/False, a sampler, etc., but I always add an extra guard with a modulo operation ("%") even if it is redundant.
I recommend reading the official guide to get a better understanding of what you can do with Dataset and DataLoader, here is a link:
pytorch.org
torch.utils.data — PyTorch 1.6.0 documentation 1 |
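As an illustration of the behaviour discussed above (a minimal sketch, not taken from the thread): with shuffle=True the DataLoader uses a RandomSampler, which yields a fresh permutation of all dataset indices each epoch, so every sample is visited exactly once per epoch.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(6000))   # 6k unique samples
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(2):
    seen = []
    for (batch,) in loader:
        seen.extend(batch.tolist())
    # every index appears exactly once per epoch, in a new random order
    assert sorted(seen) == list(range(6000))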
st48970 | Hi,
I am trying to get the gradient of a vector (with length m and batch size N) with respect to another vector (with length m and batch size N). Hence, I am expecting the gradient matrix to be Nxmxm.
For example,
import torch
from torch.autograd import grad

x = torch.rand(5, 3)
x.requires_grad_(True)
x.retain_grad()
u = x * x
dudx = grad(u, x, torch.ones(x.size()[0], 3, 3), create_graph=True, retain_graph=True)
What is the mistake I am making? Many thanks in advance |
st48971 | Solved by albanD in post #2
Hi,
First, You seem to assume that N is a batch size that should be considered specially? But the autograd does not know about that. It will compute the gradient considering N as any other dimension.
Also autograd.grad() does a vector jacobian product (when grad_outputs is provided) and so will re… |
st48972 | Hi,
First, you seem to assume that N is a batch size that should be considered specially? But the autograd does not know about that. It will compute the gradient considering N as any other dimension.
Also autograd.grad() does a vector-Jacobian product (when grad_outputs is provided) and so will return something with the same size as the input.
You can use autograd.functional.jacobian() to get the full Jacobian but, as mentioned above, you will get NxmxNxm as N is considered as any other dimension.
You can do a for loop around jacobian for the N dimension if you want to consider the batch samples one at a time. |
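To make the suggestion above concrete, here is a hedged sketch (reusing the u = x*x example from the question) that calls torch.autograd.functional.jacobian once per batch sample and stacks the results into an N x m x m tensor:
import torch
from torch.autograd.functional import jacobian

N, m = 5, 3
x = torch.rand(N, m, requires_grad=True)

def f(xi):                # per-sample function, xi has shape (m,)
    return xi * xi

# one m x m Jacobian per sample, stacked along the batch dimension
dudx = torch.stack([jacobian(f, x[i]) for i in range(N)])
print(dudx.shape)         # torch.Size([5, 3, 3])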
st48973 | In DataParallel() with 2 GPUs.
If I save a checkpoint inside the forward pass, the checkpoint saves the weights from both GPUs (these weights are the same). However, if I keep the checkpoint outside of the forward pass, it saves the weights of the default device only.
Which place is better to keep the checkpoint (inside the forward pass or outside)? |
st48974 | Hi,
All these weights should have the exact same value. So I would recommend saving them outside so that you only save a single copy of them. |
st48975 | albanD:
you on
Yes, If I save checkpoint outside forward pass, checkpoint saves weights of single default device.
But, If I would like to save checkpoint inside forward, checkpoint saves weights of all GPUs (which are same) I believe it is issue in PyTorch.
If it is an issue, I would like to work on it and try to avoid duplicates weights inside the checkpoint. |
st48976 | Hi,
The DataParallel’s whole point is to run copies of your module on different GPUs with different parts of the input. So it is expected that the forward is called multiple times.
So if you do something in the forward that has side effects (like saving a file), then it is expected that this will not behave the same way as without DataParallel.
You can guard that part of the code to only run if you are running on GPU 0, but that would be quite fragile. You should not do checkpointing inside the forward in general. |
st48977 | I agree.
Is there any particular location where we should keep the checkpoint?
I am wondering whether there are any use cases where we would keep the checkpoint inside the forward pass. |
st48979 | From my experience we usually checkpoint in the main training loop, either at each epoch or after a fixed number of batches.
Checkpointing at every batch is most likely slowing down your training quite significantly. |
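A minimal sketch of that pattern (checkpointing from the main training loop rather than inside forward; the toy model, data, and save interval are placeholders):
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.DataParallel(nn.Linear(10, 2))   # toy model wrapped in DataParallel
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8)

save_every = 4   # in batches; placeholder interval
for epoch in range(2):
    for i, (inputs, targets) in enumerate(loader):
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        if i % save_every == 0:
            # model.module unwraps DataParallel so only a single copy of the weights is saved
            torch.save(model.module.state_dict(), f"ckpt_epoch{epoch}_iter{i}.pt")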
st48980 | I’m wondering how one could do something like this in PyTorch:
for example, I want to initialize 64 copies of nn.Conv2d. I tried a for loop and stored the layers like this:
self.conv_array = []
for i in range(64):
    self.conv_array.append(nn.Conv2d(1, 16, stride=2, kernel_size=2))
This apparently doesn’t work: after I print out the network following initialization, the layers won’t show up. |
st48981 | Solved by ptrblck in post #2
Use nn.ModuleList instead of a Python list to register these modules properly. |
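A minimal sketch of that fix, using the sizes from the question above:
import torch.nn as nn

class ConvStack(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each layer, so they show up in print(model) and in .parameters()
        self.conv_array = nn.ModuleList(
            [nn.Conv2d(1, 16, stride=2, kernel_size=2) for _ in range(64)]
        )

model = ConvStack()
print(model)                              # the 64 Conv2d layers are now listed
print(len(list(model.parameters())))      # 128 (weight + bias per conv)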
st48982 | I downloaded the jpg images for DCGAN pytorch tutorial (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html 16) and the ipynb cannot find the files.
I made sure the path points to the right location
# Root directory for dataset
dataroot = "/home/paperspace/Label_YT_Videos/code/img_align_celeba/"
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
yet I get the following error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-e26dc739e9e3> in <module>()
7 transforms.CenterCrop(image_size),
8 transforms.ToTensor(),
----> 9 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
10 ]))
11
~/anaconda3/envs/fastai/lib/python3.6/site-packages/torchvision/datasets/folder.py in __init__(self, root, transform, target_transform, loader)
176 super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS,
177 transform=transform,
--> 178 target_transform=target_transform)
179 self.imgs = self.samples
~/anaconda3/envs/fastai/lib/python3.6/site-packages/torchvision/datasets/folder.py in __init__(self, root, loader, extensions, transform, target_transform)
77 if len(samples) == 0:
78 raise(RuntimeError("Found 0 files in subfolders of: " + root + "\n"
---> 79 "Supported extensions are: " + ",".join(extensions)))
80
81 self.root = root
RuntimeError: Found 0 files in subfolders of: /home/paperspace/Label_YT_Videos/code/img_align_celeba/
Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif |
st48983 | From the tutorial 59:
This is an important step because we will be using the ImageFolder dataset class, which requires there to be subdirectories in the dataset’s root folder. Now, we can create the dataset, create the dataloader, set the device to run on, and finally visualize some of the training data.
try setting the dataroot as "/home/paperspace/Label_YT_Videos/code/" |
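For reference, ImageFolder treats each immediate subdirectory of root as a class folder, so a layout and call along these lines avoids the "Found 0 files" error (the paths come from the question; image_size = 64 is assumed to be the tutorial default):
import torchvision.datasets as dset
import torchvision.transforms as transforms

# expected layout:
#   /home/paperspace/Label_YT_Videos/code/    <- pass this as dataroot
#       img_align_celeba/                     <- treated as the (only) class subfolder
#           000001.jpg, 000002.jpg, ...
dataroot = "/home/paperspace/Label_YT_Videos/code/"
image_size = 64
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))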
st48984 | What if the directory I’m on has other data irrelevant to the tutorial? Honestly I’ll probably have a folder within the data folder, since the concept is a bit convoluted in my opinion. Your explanation worked tho. Thanks! |
st48985 | I have created an extra folder, too. But I also would be interested in why it is solved that way? @rchavezj’s question is also what I am wondering about…
rchavezj:
What if the directory I’m on has other data irrelevant to the tutorial? |
st48986 | Hey,
I’m trying to forecast the product demand.
(Long-Term-Demand-Forecasting…)
My raw data looks (really simplified) like this:
Date Customer_id article_id indicator1 indicator2 order_size
0 2020-03-01 D021 104 True 213.6 10
1 2020-03-02 D034 243 True 325.2 15
2 2020-03-02 D034 311 False 65.3 43
3 2020-03-02 D054 104 False 853.2 5
4 2020-03-03 D021 554 False 125.8 67
5 2020-03-03 D093 219 True 34.2 11
I want to predict the order_size (maybe separately for every article).
I have already written something that combines embeddings for the categorical data with an LSTM.
I really would like to use an RNN, but I don’t have a proper time series, since I have multiple rows for one date: there are multiple orders every day from different customers for different products.
I could just “force” it into one row per date by replacing the customer_id column with a “no. of customers that placed an order that day” column, but then I would lose important information about which customer orders which product, etc.
Any idea how I could manage this without losing information? |
st48987 | I have the following situation, I’m trying to train a Unet Learner using fastai’s Library. My data is stored as float16 tensor saved by using torch.save and loaded via a custom load function. In fastai, you create a Learner object, and then you call Learn.fit() to train your model. My memory usage is linearly going up during training to a point where I run out of memory. One interesting thing is that my memory usage is reset between epochs, and so the problem is coming from an epoch of training. Basically fastai iters through a pytorch dataloader and does its stuff on top of that.
My Learner item has a learn.data.train_dl.dl attribute which is a torch.utils.data.dataloader.DataLoader. When simply iterating through this dataloader with
> for xb, yb in learn.data.train_dl.dl:
> pass
All my memory is used in a few minutes. I read on the forums that this could be coming from an issue when using multiple workers to load the files. I tried setting num_workers to 0 but this didn’t fix my memory issue.
I would really like to be able to train my model, do you have any idea on how to fix that bug, or how to do a hack (like how to replace the for x, y in dl with something else) to go around this issue?
For more details on how I’m loading the data: I’m using a fastai custom class TensorImageList(ImageList) where I just override the open(self, fn) method with:
class TensorImageList(ImageList):
    def open(self, fn):
        return torch.load(fn, map_location='cpu').type(torch.float)
I’ve also opened a post on fastai forum discussing this issue : https://forums.fast.ai/t/learn-fit-one-cycle-makes-me-run-out-of-memory-on-cpu-while-i-train-on-gpu/45428 12 |
st48988 | If you see the increase in memory usage during the dummy DataLoader loop, the issue might be in the Dataset and in particular how you are loading/storing the data.
Could you post the code for your Dataset? The TensorImageList class doesn’t look like it corresponds to a Dataset, as the __getitem__ and __len__ methods are missing. |
st48989 | Basically i’m using the datablock api of fastai https://docs.fast.ai/data_block.html 18
My databunch is created by doing an TensorImageList.from_folder(), where TensorImageList is a subclass of ImageList. |
st48990 | I’ve created a minimal example on this repo : https://github.com/StatisticDean/fastai_memory_cpu/blob/master/test_leak.ipynb 33 |
st48991 | To access the dataset from the databunch data, one needs to do data.train_dl.dl.dataset.
The type of the dataset is LabelList which is a subclass of torch.utils.data.dataset.Dataset created by fastai in data_block.py
My databunch is created by doing
data = TensorImageList.from_folder('./data_test/', extensions='.ti').split_by_rand_pct().label_from_folder().databunch(bs=8, num_workers=0)
The __getitem__ method of the LabelList is the following:
def __getitem__(self, idxs:Union[int,np.ndarray])->'LabelList':
    idxs = try_int(idxs)
    if isinstance(idxs, Integral):
        if self.item is None: x,y = self.x[idxs],self.y[idxs]
        else: x,y = self.item, 0
        if self.tfms or self.tfmargs:
            x = x.apply_tfms(self.tfms, **self.tfmargs)
        if hasattr(self, 'tfms_y') and self.tfm_y and self.item is None:
            y = y.apply_tfms(self.tfms_y, **{**self.tfmargs_y, 'do_resolve':False})
        if y is None: y=0
        return x,y
    else: return self.new(self.x[idxs], self.y[idxs])
So basically, it calls the __getitem__ method of self.x, which in my case is a TensorImageList; its __getitem__ method is the following:
def __getitem__(self, idxs:int)->Any:
    idxs = try_int(idxs)
    if isinstance(idxs, Integral): return self.get(idxs)
    else: return self.new(self.items[idxs], inner_df=index_row(self.inner_df, idxs))
It calls self.get which is the following :
def get(self, i):
    fn = super().get(i)
    res = self.open(fn)
    self.sizes[i] = res.size
    return res
And at this point open is exactly what I overrode. |
st48992 | After some closer inspection, I noticed that the default open method returns an Image which is a class of fastai instead of returning a tensor. So I changed my
class TensorImageList(ImageList):
    def open(self, fn):
        return torch.load(fn, map_location='cpu').type(torch.float)
with
class TensorImageList(ImageList):
    def open(self, fn):
        return Image(torch.load(fn, map_location='cpu').type(torch.float))
And magic, memory is stable (at least if you just iterate through the dataloader). |
st48993 | So if you just load the data, you’ll see an increase in memory, while wrapping the tensor into an Image class solves this issue?
Could you post the class definition of Image? |
st48994 | Honestly I don’t understand where the issue is coming from either; this was just a quick fix. The error seems to be coming from the fastai side though. According to sgugger on fastai’s forum:
" Ah! I think this might be due to our `data_collate` default function, which collected the `data` inside your tensor instead of just grabbing your tensor.
Why that didn’t release memory is beyond me, but I think if you pass to the call to `DataBunch` the regular pytorch collate function (which is `torch.utils.data.dataloader.default_collate` ) you won’t have a memory leak."
The Image source code is the following :
class Image(ItemBase):
    "Support applying transforms to image data in `px`."
    def __init__(self, px:Tensor):
        self._px = px
        self._logit_px=None
        self._flow=None
        self._affine_mat=None
        self.sample_kwargs = {}

    def set_sample(self, **kwargs)->'ImageBase':
        "Set parameters that control how we `grid_sample` the image after transforms are applied."
        self.sample_kwargs = kwargs
        return self

    def clone(self):
        "Mimic the behavior of torch.clone for `Image` objects."
        return self.__class__(self.px.clone())

    @property
    def shape(self)->Tuple[int,int,int]: return self._px.shape
    @property
    def size(self)->Tuple[int,int]: return self.shape[-2:]
    @property
    def device(self)->torch.device: return self._px.device

    def __repr__(self): return f'{self.__class__.__name__} {tuple(self.shape)}'
    def _repr_png_(self): return self._repr_image_format('png')
    def _repr_jpeg_(self): return self._repr_image_format('jpeg')

    def _repr_image_format(self, format_str):
        with BytesIO() as str_buffer:
            plt.imsave(str_buffer, image2np(self.px), format=format_str)
            return str_buffer.getvalue()

    def apply_tfms(self, tfms:TfmList, do_resolve:bool=True, xtra:Optional[Dict[Callable,dict]]=None,
                   size:Optional[Union[int,TensorImageSize]]=None, resize_method:ResizeMethod=None,
                   mult:int=None, padding_mode:str='reflection', mode:str='bilinear', remove_out:bool=True)->TensorImage:
        "Apply all `tfms` to the `Image`, if `do_resolve` picks value for random args."
        if not (tfms or xtra or size): return self
        tfms = listify(tfms)
        xtra = ifnone(xtra, {})
        default_rsz = ResizeMethod.SQUISH if (size is not None and is_listy(size)) else ResizeMethod.CROP
        resize_method = ifnone(resize_method, default_rsz)
        if resize_method <= 2 and size is not None: tfms = self._maybe_add_crop_pad(tfms)
        tfms = sorted(tfms, key=lambda o: o.tfm.order)
        if do_resolve: _resolve_tfms(tfms)
        x = self.clone()
        x.set_sample(padding_mode=padding_mode, mode=mode, remove_out=remove_out)
        if size is not None:
            crop_target = _get_crop_target(size, mult=mult)
            if resize_method in (ResizeMethod.CROP,ResizeMethod.PAD):
                target = _get_resize_target(x, crop_target, do_crop=(resize_method==ResizeMethod.CROP))
                x.resize(target)
            elif resize_method==ResizeMethod.SQUISH: x.resize((x.shape[0],) + crop_target)
        else: size = x.size
        size_tfms = [o for o in tfms if isinstance(o.tfm,TfmCrop)]
        for tfm in tfms:
            if tfm.tfm in xtra: x = tfm(x, **xtra[tfm.tfm])
            elif tfm in size_tfms:
                if resize_method in (ResizeMethod.CROP,ResizeMethod.PAD):
                    x = tfm(x, size=_get_crop_target(size,mult=mult), padding_mode=padding_mode)
            else: x = tfm(x)
        return x.refresh()

    def refresh(self)->None:
        "Apply any logit, flow, or affine transfers that have been sent to the `Image`."
        if self._logit_px is not None:
            self._px = self._logit_px.sigmoid_()
            self._logit_px = None
        if self._affine_mat is not None or self._flow is not None:
            self._px = _grid_sample(self._px, self.flow, **self.sample_kwargs)
            self.sample_kwargs = {}
            self._flow = None
        return self

    def save(self, fn:PathOrStr):
        "Save the image to `fn`."
        x = image2np(self.data*255).astype(np.uint8)
        PIL.Image.fromarray(x).save(fn)

    @property
    def px(self)->TensorImage:
        "Get the tensor pixel buffer."
        self.refresh()
        return self._px
    @px.setter
    def px(self,v:TensorImage)->None:
        "Set the pixel buffer to `v`."
        self._px=v

    @property
    def flow(self)->FlowField:
        "Access the flow-field grid after applying queued affine transforms."
        if self._flow is None:
            self._flow = _affine_grid(self.shape)
        if self._affine_mat is not None:
            self._flow = _affine_mult(self._flow,self._affine_mat)
            self._affine_mat = None
        return self._flow
    @flow.setter
    def flow(self,v:FlowField): self._flow=v

    def lighting(self, func:LightingFunc, *args:Any, **kwargs:Any):
        "Equivalent to `image = sigmoid(func(logit(image)))`."
        self.logit_px = func(self.logit_px, *args, **kwargs)
        return self

    def pixel(self, func:PixelFunc, *args, **kwargs)->'Image':
        "Equivalent to `image.px = func(image.px)`."
        self.px = func(self.px, *args, **kwargs)
        return self

    def coord(self, func:CoordFunc, *args, **kwargs)->'Image':
        "Equivalent to `image.flow = func(image.flow, image.size)`."
        self.flow = func(self.flow, *args, **kwargs)
        return self

    def affine(self, func:AffineFunc, *args, **kwargs)->'Image':
        "Equivalent to `image.affine_mat = image.affine_mat @ func()`."
        m = tensor(func(*args, **kwargs)).to(self.device)
        self.affine_mat = self.affine_mat @ m
        return self

    def resize(self, size:Union[int,TensorImageSize])->'Image':
        "Resize the image to `size`, size can be a single int."
        assert self._flow is None
        if isinstance(size, int): size=(self.shape[0], size, size)
        if tuple(size)==tuple(self.shape): return self
        self.flow = _affine_grid(size)
        return self

    @property
    def affine_mat(self)->AffineMatrix:
        "Get the affine matrix that will be applied by `refresh`."
        if self._affine_mat is None:
            self._affine_mat = torch.eye(3).to(self.device)
        return self._affine_mat
    @affine_mat.setter
    def affine_mat(self,v)->None: self._affine_mat=v

    @property
    def logit_px(self)->LogitTensorImage:
        "Get logit(image.px)."
        if self._logit_px is None: self._logit_px = logit_(self.px)
        return self._logit_px
    @logit_px.setter
    def logit_px(self,v:LogitTensorImage)->None: self._logit_px=v

    @property
    def data(self)->TensorImage:
        "Return this images pixels as a tensor."
        return self.px

    def show(self, ax:plt.Axes=None, figsize:tuple=(3,3), title:Optional[str]=None, hide_axis:bool=True,
             cmap:str=None, y:Any=None, **kwargs):
        "Show image on `ax` with `title`, using `cmap` if single-channel, overlaid with optional `y`"
        cmap = ifnone(cmap, defaults.cmap)
        ax = show_image(self, ax=ax, hide_axis=hide_axis, cmap=cmap, figsize=figsize)
        if y is not None: y.show(ax=ax, **kwargs)
        if title is not None: ax.set_title(title)
The code can be found in fastai.vision.image |
st48995 | I am experiencing the same exact memory-growing problem; however, I am not using the fastai library. If I read the image as a class attribute and pass it to class methods, e.g. _pad_img and _extract_patches, the memory grows linearly with the iterations.
This is how I make my Dataset, which is next used by the Python DataLoader:
class MyDataset(Dataset):
    def __init__(self, patch_size, subdivisions, image_file):
        self.patch_size = patch_size
        self.subdivisions = subdivisions
        self.image = self._pad_img(cv2.cvtColor(cv2.imread(image_file), cv2.COLOR_BGR2RGB))
        self.coords = self._extract_patches(self.image)

    def _pad_img(self, img):
        aug = int(round(self.patch_size * (1 - 1.0 / self.subdivisions)))
        padding = ((aug, aug), (aug, aug), (0, 0))
        img_padded = np.pad(img, pad_width=padding, mode='reflect')
        return img_padded

    def _extract_patches(self, img):
        step = int(self.patch_size / self.subdivisions)
        row_range = range(0, img.shape[0] - self.patch_size + 1, step)
        col_range = range(0, img.shape[1] - self.patch_size + 1, step)
        coords = []
        for row in row_range:
            for col in col_range:
                left = col
                upper = row
                right = col + self.patch_size
                lower = row + self.patch_size
                coords += [(left, upper, right, lower)]
        return coords

    def __len__(self):
        return len(self.coords)

    def __getitem__(self, idx):
        box = self.coords[idx]
        image = self.image[box[1]:box[3], box[0]:box[2]]
        return image
I have tried reading the image in _extract_patches. There is no memory problem that way, but the code becomes extremely slow.
Any ideas on how this can be fixed is appreciated. |
st48996 | I am also experiencing this issue, whether or not I set my num_workers to 0.
I have a 1TB dataset and here is my code:
class DicomDatasetRetriever(torch.utils.data.Dataset):
    def __init__(self, df, transforms=[], mix_ratio=1, mode='val'):
        self.df_main = df.copy()
        self.mode = mode
        self.mix_ratio = mix_ratio
        if self.mode == 'val':
            self.df = self.df_main
        else:
            self.update_train_df()
        self.lut = df[['SOPInstanceUID', 'image_path']].set_index('SOPInstanceUID')
        if not(len(transforms)):
            self.transforms = None
        else:
            self.transforms = A.Compose(transforms)
        self.default_transforms = A.Compose([
            A.Normalize(0.449, 0.226),
            ToTensorV2(),
        ])

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        study = row.StudyInstanceUID
        img_id = row.SOPInstanceUID
        img = self.load_image(img_id)
        pe_ratio = row.r_pe_present_on_image
        target = row.pe_present_on_image
        # transforms
        if self.transforms is not None and self.mode != 'val' and (row[3] == 1 or random.random() < 0.1):
            img = self.transforms(image=img)['image']
        # default transformation
        img = self.default_transforms(image=img)['image']
        return {
            'img': img,
            'img_id': img_id,
            'study_id': study,
            'pe_ratio': torch.tensor([pe_ratio]).float(),
            'target': torch.tensor([target]).float()
        }

    def load_image(self, img_id):
        # img = cv2.imread(self.lut.loc[img_id, 'image_path'])
        with open(self.lut.loc[img_id, 'image_path'], 'rb') as f:
            img = JPEG.decode(f.read())
        if img is None:
            print(f"Warning while trying to load image. No file at {file_path}")
            img = np.zeros(shape=SHAPE)
        img = img.astype(np.float32)
        img /= 255
        return img

    def update_train_df(self):
        df0 = self.df_main[self.df_main.pe_present_on_image==0]
        df1 = self.df_main[self.df_main.pe_present_on_image==1]
        df0 = df0.sample(frac=1)
        upto = min(int(self.mix_ratio * len(df1)), len(df0))
        self.df = pd.concat([df0.iloc[:upto],df1], axis=0)
        self.df = self.df.sample(frac=1)


class DataLoaders:
    self.train_loader = torch.utils.data.DataLoader(
        self.train_dataset, batch_size=self.batch_size, shuffle=True,
        num_workers=self.num_workers, pin_memory=True) |
st48997 | Hi,
I am attempting to run my code on an HPC cluster. I want to limit the number of threads used to the number of CPUs I request. I find, however, that during the backward step thread usage shoots up.
As an example: I request 1 CPU, but during the backward step 4 threads are used (and they remain in use afterwards).
I have set the following environment variables at the start of the code:
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
and also set
import torch
torch.set_num_threads(1)
But thread usage during the backward step remains at 4.
Another thing is that the command in the linux terminal (with PID meaning process id)
ps -o nlwp {PID}
and the method
torch.get_num_threads()
return different results: the former command tells me my process is using 4 threads, the latter says it only sees 1 thread. I am inclined to believe the former.
Help would be greatly appreciated. |
st48998 | Hi,
I guess your cluster has 4 GPUs?
The autograd engine also uses threading to be able to send work to the GPUs fast enough during the backward.
Is that a problem in your setup? |
st48999 | You probably just want to limit concurrency of compute-heavy stuff, so set OMP_NUM_THREADS=1 and num_workers=1 . But thread switching in general is fast, you won’t get much benefit from getting rid of threads that sit idle most of the time. You can see what all the threads are doing “sudo gdb -p … thread apply all bt” |
st49000 | @albanD
My cluster is divided up into several nodes, the current node I am utilizing does not have any GPUs.
@Yaroslav_Bulatov
I am currently not using a dataloader so I don’t believe num_workers=1 is relevant to my case.
The main reason why I want to limit the number of threads is because in the past my jobs have overloaded the compute nodes due to using more threads than the number of CPUs. |
st49001 | If you don’t have GPUs, then the autograd won’t use extra threads.
Are you using the jit? |
st49002 | hmmm not sure where these threads could be coming from then…
@VitalyFedyunin might have an idea?
Another solution would be to start your job with gdb and when 4 threads are used, interrupt it and check the stack trace for each thread. That should tell us why it happens. |
st49003 | Here is the trace for the threads:
(gdb) thread apply all bt
Thread 4 (Thread 0x7facf308c700 (LWP 16191)):
#0 0x00007fad7454bd1f in accept4 () from /lib64/libc.so.6
#1 0x00007facf32b6a5a in ?? () from /lib64/libcuda.so.1
#2 0x00007facf32a85bd in ?? () from /lib64/libcuda.so.1
#3 0x00007facf32b8118 in ?? () from /lib64/libcuda.so.1
#4 0x00007fad74f2ae65 in start_thread () from /lib64/libpthread.so.0
#5 0x00007fad7454a88d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7facf288b700 (LWP 16192)):
#0 0x00007fad74f2e9f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00007fad644e582c in std::condition_variable::wait(std::unique_lock<std::mutex>&) ()
from /lib64/libstdc++.so.6
#2 0x00007fad1dc8ce23 in torch::autograd::ReadyQueue::pop() ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#3 0x00007fad1dc8ed7c in torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#4 0x00007fad1dc88979 in torch::autograd::Engine::thread_init(int) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#5 0x00007fad64d9408a in torch::autograd::python::PythonEngine::thread_init(int) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#6 0x00007fad659afdef in execute_native_thread_routine ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#7 0x00007fad74f2ae65 in start_thread () from /lib64/libpthread.so.0
#8 0x00007fad7454a88d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7facf208a700 (LWP 16193)):
#0 0x00007fad74f2e9f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00007fad644e582c in std::condition_variable::wait(std::unique_lock<std::mutex>&) ()
from /lib64/libstdc++.so.6
#2 0x00007fad1dc8ce23 in torch::autograd::ReadyQueue::pop() ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#3 0x00007fad1dc8ed7c in torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#4 0x00007fad1dc88979 in torch::autograd::Engine::thread_init(int) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#5 0x00007fad64d9408a in torch::autograd::python::PythonEngine::thread_init(int) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#6 0x00007fad659afdef in execute_native_thread_routine ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#7 0x00007fad74f2ae65 in start_thread () from /lib64/libpthread.so.0
#8 0x00007fad7454a88d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7fad75853740 (LWP 16174)):
#0 0x00007fad744d2424 in memalign () from /lib64/libc.so.6
#1 0x00007fad744d404c in posix_memalign () from /lib64/libc.so.6
#2 0x00007fad1960ab4a in c10::alloc_cpu(unsigned long) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libc10.so
#3 0x00007fad1960c5fa in c10::DefaultCPUAllocator::allocate(unsigned long) const ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libc10.so
#4 0x00007fad1b93c78a in at::native::empty_cpu(c10::ArrayRef, c10::TensorOptions cons t&, c10::optionalc10::MemoryFormat) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#5 0x00007fad1bb8530b in at::CPUType::(anonymous namespace)::empty(c10::ArrayRef, c10 ::TensorOptions const&, c10::optionalc10::MemoryFormat) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#6 0x00007fad1bbccb37 in c10::detail::wrap_kernel_functor_unboxed_<c10::detail::WrapRuntime KernelFunctor_<at::Tensor ()(c10::ArrayRef, c10::TensorOptions const&, c10::optional< c10::MemoryFormat>), at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef, c10::Ten sorOptions const&, c10::optionalc10::MemoryFormat > >, at::Tensor (c10::ArrayRef, c1 0::TensorOptions const&, c10::optionalc10::MemoryFormat)>::call(c10::OperatorKernel, c10: :ArrayRef, c10::TensorOptions const&, c10::optionalc10::MemoryFormat) ()
from /home/nfs/USER/.local/lib/python3.6/site-packages/torch/lib/libtorch.so
#7 0x00007fad1db8c795 in torch::autograd::VariableType::(anonymous namespace)::empty(c10::A
It does seem like it is autograd that is using extra threads, despite the fact that there is no GPU to make use of. |
st49004 | So thread 4 seems to be a cuda driver thread. Not sure we can do anything about this? (cc @ptrblck)
Thread 1 is your main worker thread.
Threads 2/3 are autograd worker threads (not sure why there are 2), but the CPU worker thread should run only while thread 1 is blocked waiting for it to finish. So that should not use more than one core at a time. |
st49005 | Just in case: OMP_NUM_THREADS controls ONLY the number of operator threads, and does nothing for autograd threads or data-loading threads. |
st49006 | Hello. This is my CustomDataSet class:
class CustomDataSet(Dataset):
    def __init__(self, main_dir, transform):
        self.main_dir = main_dir
        self.transform = transform
        all_imgs = os.listdir(main_dir)
        self.total_imgs = natsort.natsorted(all_imgs)
        for file_name in self.total_imgs:
            if '.txt' in file_name: self.total_imgs.remove(file_name)
            if file_name == 'semantic': self.total_imgs.remove('semantic')

    def __len__(self):
        return len(self.total_imgs)

    def __getitem__(self, idx):
        img_loc = os.path.join(self.main_dir, self.total_imgs[idx])
        image = Image.open(img_loc).convert("RGB")
        tensor_image = self.transform(image)
        return tensor_image
Here is how I create a list of datasets:
all_datasets = []
while folder_counter < num_train_folders:
    # some code to get path_to_imgs which is the location of the image folder
    train_dataset = CustomDataSet(path_to_imgs, transform)
    all_datasets.append(train_dataset)
    folder_counter += 1
Then I concatenate my datasets, create the dataloader, and do the training:
final_dataset = torch.utils.data.ConcatDataset(all_datasets)
train_loader = data.DataLoader(final_dataset,
                               batch_size=batch_size,
                               shuffle=False,
                               num_workers=0,
                               pin_memory=True,
                               drop_last=True)
So, is the order of my data preserved? During training, will I go to each folder in the exact order that the concatenation was done and then grab all the images sequentially? For example:
I grab 150 images from folder 1, 100 images from folder 2 and 70 images from folder 3. I concatenate the three datasets. During training I do:
for idx, input_seq in enumerate(data_loader):
    # code to train
So, will the dataloader go through folder 1 and grab all the images inside it sequentially, then go to folder 2 and do the same, and finally go to folder 3 and do the same as well? I tried reading the code for ConcatDataset but I can’t tell whether the order of my data will be preserved or not. |
st49007 | Solved by ptrblck in post #2
Yes, the order should be preserved as shown in this simple example using TensorDatasets:
datasets = []
for i in range(3):
datasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))
dataset = ConcatDataset(datasets)
loader = DataLoader(
dataset,
shuffle=False,
num_workers=0,
b… |
st49008 | Yes, the order should be preserved as shown in this simple example using TensorDatasets:
datasets = []
for i in range(3):
    datasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))

dataset = ConcatDataset(datasets)
loader = DataLoader(
    dataset,
    shuffle=False,
    num_workers=0,
    batch_size=2
)

for data in loader:
    print(data) |
st49009 | What if I want to grab data from different files, but afterwards I want it all concatenated and shuffled, not preserving the original order?
Because I have data in different folders, I grab each of them, but afterwards I want it all shuffled. |
st49010 | You could use shuffle=True when creating the DataLoader, which will shuffle the passed ConcatDataset. |
st49011 | list_1 = [1,2,3,4,5]
list_2 = [6,7,8,9,10]
list_3 = [22,23,24,25,26,27]
dataset_list = [list_1, list_2, list_3]
dataset_loader = DataLoader(dataset_list, shuffle=True, batch_size=3)
for i in range(30):
    for x in dataset_loader:
        print(x)
My question is: why does each batch (of size 3) contain the same data, just shuffled within itself?
My use case is: I need to shuffle the entire dataset after concatenation, such that in each epoch I get different batches drawn from all the datasets [list_1, list_2, list_3]. |
st49012 | Passing nested lists to the DataLoader might have these kind of side effects and thus I would recommend to create tensors, pass them to a TensorDataset, and this dataset to the DataLoader, which should then properly index and shuffle the data. |
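A hedged sketch of that recommendation, using the three lists from the question above:
import torch
from torch.utils.data import DataLoader, TensorDataset

list_1 = [1, 2, 3, 4, 5]
list_2 = [6, 7, 8, 9, 10]
list_3 = [22, 23, 24, 25, 26, 27]

# flatten the lists into one tensor so shuffling mixes samples across all of them
data = torch.tensor(list_1 + list_2 + list_3)
loader = DataLoader(TensorDataset(data), shuffle=True, batch_size=3)

for (batch,) in loader:
    print(batch)   # batches are drawn from a fresh permutation of all 16 values each epoch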
st49013 | Hi There,
How do I fix the below error:
NotImplementedError Traceback (most recent call last)
<ipython-input-2-db4fff233cd4> in <module>
6 input = torch.rand(5,3)
7 print(input)
----> 8 out = model(input)
9 for epoch in range(2):
10 running_loss = 0.0
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _forward_unimplemented(self, *input)
221 # https://github.com/python/mypy/issues/8795
222 def _forward_unimplemented(self, *input: Any) -> None:
--> 223 raise NotImplementedError
224
225 r"""Defines the computation performed at every call.
NotImplementedError:
My code is below:
# Setting up the environment
import torch
import torchvision
from torchvision import transforms, datasets
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import torch.optim as optim

# Global variables
batch_size = 10
num_images_display = 60

# Load data
train_data = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size, shuffle=True)

test_data = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('', train=False, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size, shuffle=True)

# Neural network class
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.L1 = nn.Linear(28 * 28, 512)
        self.L2 = nn.Linear(512, 10)

    def foward(self, x):
        x = self.L1(x)
        x = self.L2(x)
        x = nn.Softmax(x, dim = 1)
        return x

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = Net().to(device)
print(model)

# Train the model / network
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

model.train()
for epoch in range(2):
    running_loss = 0.0
    for images, labels in train_data:
        # Zero the parameter gradients and training pass
        optimizer.zero_grad()
        # Outputs
        outputs = model(images) |
st49014 | Solved by ptrblck in post #2
You have a typo in forward (you are using foward), so you would need to fix this.
Once this is done, remove the nn.Softmax, as nn.CrossEntropyLoss expects raw logits.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging these issues easier. I’ve formatte… |
st49015 | You have a typo in forward (you are using foward), so you would need to fix this.
Once this is done, remove the nn.Softmax, as nn.CrossEntropyLoss expects raw logits.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging these issues easier. I’ve formatted the code for you. |
st49016 | Thank you so much, fixing the typo and adding the code below worked:
images = images.view(images.shape[0], -1) |
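Putting the fixes above together, a minimal corrected version of the model could look like this (raw logits are returned because nn.CrossEntropyLoss applies log-softmax internally; the flattening is done inside forward here):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.L1 = nn.Linear(28 * 28, 512)
        self.L2 = nn.Linear(512, 10)

    def forward(self, x):                  # note the spelling: forward, not foward
        x = x.view(x.shape[0], -1)         # flatten [N, 1, 28, 28] -> [N, 784]
        x = self.L1(x)
        return self.L2(x)                  # raw logits, no softmax

model = Net()
out = model(torch.rand(5, 1, 28, 28))
print(out.shape)                           # torch.Size([5, 10])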
st49017 | Hello,
I’ve got problem to which I can’t find the proper answer.
I need to create very large tensors (filled with zeros), my_tensor.shape = [256, 3, 500000], and assign “1” at specific indices. I have a tensor with .shape = [256, 3] containing the indices at which I should assign “1” in every tensor inside my_tensor.
So I was doing it in a loop, one tensor at a time, and concatenating to create a 256-sized batch (which is time consuming), and now I’m wondering whether it is possible to make it faster along these lines:
my_tensor = torch.zeros([256, 3, 500000])
my_tensor[#here some magic indexes] = 1
Could you suggest me some efficient way to solve my problem?
Best regards |
st49018 | I think that I found a “bug”.
To reproduce, try first this:
import torch
t1 = torch.zeros([3, 3, 10])
t2 = torch.tensor([[0,1,2],[3,4,5],[6,7,8]])
t1[range(t2.shape[0]), torch.tensor([range(t2.shape[1])] * t2.shape[0]).T, t2.T] = 1
Output:
tensor([[[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]]])
This works, and now with one small modification (type of the tensor modified):
import torch
t1 = torch.zeros([3, 3, 10])
t2 = torch.tensor([[0,1,2],[3,4,5],[6,7,8]], dtype=torch.uint8)
t1[range(t2.shape[0]), torch.tensor([range(t2.shape[1])] * t2.shape[0]).T, t2.T] = 1
Output:
IndexError: too many indices for tensor of dimension 3
Do you have some explanation for this?
Best regards |
st49019 | torch.uint8 was used as a bool index in past PyTorch versions and you should also get a warning:
UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead.
To index a tensor use LongTensors or a mask via BoolTensors. |
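A minimal illustration of that fix, reusing the second snippet from above with the index tensor cast back to long:
import torch

t1 = torch.zeros([3, 3, 10])
t2 = torch.tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]], dtype=torch.uint8)

# uint8 index tensors are interpreted as (deprecated) boolean masks, not as indices;
# casting to long makes them ordinary index tensors again
t1[range(t2.shape[0]), torch.tensor([range(t2.shape[1])] * t2.shape[0]).T, t2.long().T] = 1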
st49020 | In my mind, eval mode normalizes feature maps with the recorded running mean and running var, but in eval mode they cannot be updated.
In train mode, the running mean and running var can be updated, but the BN layer doesn’t use them to normalize the features.
How can I update the running mean and var and then use them to normalize the input features? |
st49021 | Shanyan_Guan:
How to update running mean and var and then use them to normalize the input features?
Do you want to calculate the batch stats, update the running stats, and use the running stats to normalize the data?
If so, I think the cleanest way would be to implement a custom batchnorm layer by changing the logic in the current implementation. |
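One possible sketch of such a layer (an assumption about what "changing the logic" could look like, not an official implementation): it first updates the running stats from the current batch and then normalizes with the running stats, in both train and eval mode. It is simplified to 2D input and has no affine parameters.
import torch
import torch.nn as nn

class RunningStatsBN(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps, self.momentum = eps, momentum
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):                      # x: [N, num_features]
        if self.training:
            mean = x.mean(dim=0)
            var = x.var(dim=0, unbiased=False)
            with torch.no_grad():
                # update the running stats from the current batch
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        # normalize with the running stats instead of the batch stats
        return (x - self.running_mean) / torch.sqrt(self.running_var + self.eps)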
st49022 | Hi, I want to use PyTorch with a Quadro K3100M, which is of CUDA capability 3.0. But I get this error:
" Found GPU0 Quadro K3100M which is of Cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum Cuda capability that we support is 3.5. "
So, I did some research and found that I should have installed PyTorch from source, therefore I am trying to do it. But after I run "python setup.py install", I get this error:
[3894/5004] Building NVCC (Device) obj…cuda_generated_THCTensorTopKShort.cu.o
FAILED: caffe2/CMakeFiles/torch_cuda.dir//aten/src/THC/generated/torch_cuda_generated_THCTensorTopKShort.cu.o
cd /home/ionur1/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir//aten/src/THC/generated && /home/ionur1/anaconda3/envs/conda_env/bin/cmake -E make_directory /home/ionur1/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir//aten/src/THC/generated/. && /home/ionur1/anaconda3/envs/conda_env/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=/home/ionur1/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir//aten/src/THC/generated/./torch_cuda_generated_THCTensorTopKShort.cu.o -D generated_cubin_file:STRING=/home/ionur1/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir//aten/src/THC/generated/./torch_cuda_generated_THCTensorTopKShort.cu.o.cubin.txt -P /home/ionur1/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir//aten/src/THC/generated/torch_cuda_generated_THCTensorTopKShort.cu.o.Release.cmake
/home/ionur1/pytorch/aten/src/THC/THCDeviceUtils.cuh(38): error: identifier “__ldg” is undefined
detected during:
instantiation of “T doLdg(const T *) [with T=int16_t]”
/home/ionur1/pytorch/aten/src/ATen/native/cuda/SortingRadixSelect.cuh(201): here
instantiation of “void at::native::countRadixUsingMask<scalar_t,bitwise_t,index_t,CountType,RadixSize,RadixBits>(CountType *, CountType *, bitwise_t, bitwise_t, int, index_t, index_t, scalar_t *) [with scalar_t=int16_t, bitwise_t=uint32_t, index_t=uint32_t, CountType=int, RadixSize=4, RadixBits=2]”
/home/ionur1/pytorch/aten/src/ATen/native/cuda/SortingRadixSelect.cuh(337): here
instantiation of “void at::native::radixSelect<scalar_t,bitwise_t,index_t,Order>(scalar_t *, index_t, index_t, index_t, int *, scalar_t *) [with scalar_t=int16_t, bitwise_t=uint32_t, index_t=uint32_t, Order=true]”
/home/ionur1/pytorch/aten/src/THC/THCTensorTopK.cuh(69): here
instantiation of “void gatherTopK<T,IndexType,Dim,Order>(TensorInfo<T, IndexType>, IndexType, IndexType, IndexType, IndexType, TensorInfo<T, IndexType>, IndexType, IndexType, TensorInfo<int64_t, IndexType>, IndexType) [with T=int16_t, IndexType=uint32_t, Dim=1, Order=true]”
/home/ionur1/pytorch/aten/src/THC/generic/THCTensorTopK.cu(127): here
1 error detected in the compilation of “/tmp/tmpxft_0000297e_00000000-6_THCTensorTopKShort.cpp1.ii”.
CMake Error at torch_cuda_generated_THCTensorTopKShort.cu.o.Release.cmake:281 (message):
Error generating file
/home/ionur1/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/./torch_cuda_generated_THCTensorTopKShort.cu.o
[3901/5004] Building NVCC (Device) obj…rch_cuda_generated_THCTensorIndex.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File “setup.py”, line 724, in <module>
build_deps()
File “setup.py”, line 312, in build_deps
build_caffe2(version=version,
File “/home/ionur1/pytorch/tools/build_pytorch_libs.py”, line 62, in build_caffe2
cmake.build(my_env)
File “/home/ionur1/pytorch/tools/setup_helpers/cmake.py”, line 346, in build
self.run(build_args, my_env)
File “/home/ionur1/pytorch/tools/setup_helpers/cmake.py”, line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File “/home/ionur1/anaconda3/envs/conda_env/lib/python3.8/subprocess.py”, line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command ‘[‘cmake’, ‘–build’, ‘.’, ‘–target’, ‘install’, ‘–config’, ‘Release’, ‘–’, ‘-j’, ‘8’]’ returned non-zero exit status 1.
Note: I tried to run with python and python3, nothing has changed. I activated conda environment before running the commands.
OS: Ubuntu 18.04.1 LTS
CUDA Version 10.2.89
cudNN Version 7603
Python : 3.8.5
GPU : GPU0 Quadro K3100M
GCC version : 8.4.0
I would like to be very glad if you help me about this. |
st49023 | iremonur:
error: identifier “__ldg” is undefined
As stated in the error message, the minimal compute capability is 3.5, since e.g. __ldg is defined for >=3.5 as seen here 4. |
st49024 | I inherited someone’s code which has usages like the following all over the place:
a = colors[idxs.long()]
b = coords[idxs.long()]
c = foo1(offsets.cuda())
d = foo2(offsets.cuda())
In the first two lines, call to idxs.long() gets repeated. In the other two lines, call to “offsets.cuda()” gets repeated.
Is it more efficient to assign the results of those calls to some temporary variables and then use them instead of replicating the calls, or, does some kind of internal optimization take care of such cases anyway?
Thanks. |
st49025 | I would avoid repeating transformations and transforms the tensors once.
Often you also don’t need the same tensor in different dtypes or devices, so I’m wondering if e.g. offsets is used anywhere later on the CPU. |
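A hedged before/after sketch of that suggestion, with toy stand-ins for the tensors and functions from the question:
import torch

colors = torch.rand(100, 3)
coords = torch.rand(100, 2)
idxs = torch.randint(0, 100, (16,), dtype=torch.int32)

# convert once and reuse, instead of calling .long() in every indexing expression
idxs_long = idxs.long()
a = colors[idxs_long]
b = coords[idxs_long]

# likewise for device transfers: move the tensor once and reuse it
offsets = torch.rand(16, 2)
if torch.cuda.is_available():
    offsets = offsets.cuda()
out1 = offsets * 2    # stand-ins for foo1/foo2 from the question
out2 = offsets + 1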
st49026 | Hi Guys,
I’m currently running several kinds of models at the same time as multiple processes, and I’m using the same ImageNet-1k dataset, which is inside my data folder.
The problem is that when I validate the single model (MobileNetV2) versus when I validate multiple models (MobileNetV2 + other DNNs) at once, the accuracy comes out slightly different.
I think multiple dataloaders looking at the same data folder might be the problem, but I’m not sure.
Do you know what causes this problem?
I use PyTorch 1.4.0 on an NVIDIA Jetson Xavier.
I got the datasets and dataloader code from the page below; the one difference is that I set num_workers to 0.
pytorch.org
(beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials...
Thanks |
st49027 | How reproducible is this behavior, i.e. are you always seeing this issue using e.g. 10 different runs with different seeds? |
st49028 | I am trying to install PyTorch from source because my GPU’s compute capability is 3.0. I get the error below for different files each time. Also, there is a warning at the beginning.
These are installed:
nvidia driver: 418.113
cuda: 10.1
cudnn: 8.0.4.30
I followed the instructions on https://github.com/pytorch/pytorch#from-source 3
and ran the line below before starting the installation.
$ export TORCH_CUDA_ARCH_LIST=3.0
Error:
[38/1087] Building NVCC (Device) object caffe2/CMakeFiles/torch...n/src/THC/generated/torch_cuda_generated_THCTensorTopKChar.cu.o
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/torch_cuda_generated_THCTensorTopKChar.cu.o
cd /home/srv/anaconda3/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated && /home/srv/anaconda3/bin/cmake -E make_directory /home/srv/anaconda3/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/. && /home/srv/anaconda3/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=/home/srv/anaconda3/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/./torch_cuda_generated_THCTensorTopKChar.cu.o -D generated_cubin_file:STRING=/home/srv/anaconda3/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/./torch_cuda_generated_THCTensorTopKChar.cu.o.cubin.txt -P /home/srv/anaconda3/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/torch_cuda_generated_THCTensorTopKChar.cu.o.Release.cmake
/home/srv/anaconda3/pytorch/aten/src/THC/THCDeviceUtils.cuh(38): error: identifier "__ldg" is undefined
detected during:
instantiation of "T doLdg(const T *) [with T=int8_t]"
/home/srv/anaconda3/pytorch/aten/src/ATen/native/cuda/SortingRadixSelect.cuh(201): here
instantiation of "void at::native::countRadixUsingMask<scalar_t,bitwise_t,index_t,CountType,RadixSize,RadixBits>(CountType *, CountType *, bitwise_t, bitwise_t, int, index_t, index_t, scalar_t *) [with scalar_t=int8_t, bitwise_t=uint32_t, index_t=uint32_t, CountType=int, RadixSize=4, RadixBits=2]"
/home/srv/anaconda3/pytorch/aten/src/ATen/native/cuda/SortingRadixSelect.cuh(337): here
instantiation of "void at::native::radixSelect<scalar_t,bitwise_t,index_t,Order>(scalar_t *, index_t, index_t, index_t, int *, scalar_t *) [with scalar_t=int8_t, bitwise_t=uint32_t, index_t=uint32_t, Order=true]"
/home/srv/anaconda3/pytorch/aten/src/THC/THCTensorTopK.cuh(69): here
instantiation of "void gatherTopK<T,IndexType,Dim,Order>(TensorInfo<T, IndexType>, IndexType, IndexType, IndexType, IndexType, TensorInfo<T, IndexType>, IndexType, IndexType, TensorInfo<int64_t, IndexType>, IndexType) [with T=int8_t, IndexType=uint32_t, Dim=1, Order=true]"
/home/srv/anaconda3/pytorch/aten/src/THC/generic/THCTensorTopK.cu(127): here
1 error detected in the compilation of "/tmp/tmpxft_0000040e_00000000-6_THCTensorTopKChar.cpp1.ii".
CMake Error at torch_cuda_generated_THCTensorTopKChar.cu.o.Release.cmake:281 (message):
Error generating file
/home/srv/anaconda3/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/./torch_cuda_generated_THCTensorTopKChar.cu.o
[45/1087] Building NVCC (Device) object caffe2/CMakeFiles/torch...__/aten/src/THC/torch_cuda_generated_THCTensorMathPairwise.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 724, in <module>
build_deps()
File "setup.py", line 312, in build_deps
build_caffe2(version=version,
File "/home/srv/anaconda3/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2
cmake.build(my_env)
File "/home/srv/anaconda3/pytorch/tools/setup_helpers/cmake.py", line 346, in build
self.run(build_args, my_env)
File "/home/srv/anaconda3/pytorch/tools/setup_helpers/cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "/home/srv/anaconda3/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.
The warning at the beginning:
CMake Warning at caffe2/CMakeLists.txt:700 (add_library):
Cannot generate a safe runtime search path for target torch_cpu because
files in some directories may conflict with libraries in implicit
directories:
runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-linux-gnu/7 may be hidden by files in:
/home/srv/anaconda3/lib
Some of these libraries may not be found correctly.
CMake Warning at cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake:1854 (add_library):
Cannot generate a safe runtime search path for target
caffe2_detectron_ops_gpu because files in some directories may conflict
with libraries in implicit directories:
runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-linux-gnu/7 may be hidden by files in:
/home/srv/anaconda3/lib
Some of these libraries may not be found correctly.
Call Stack (most recent call first):
modules/detectron/CMakeLists.txt:13 (CUDA_ADD_LIBRARY) |
st49029 | Solved by ptrblck in post #2
I don’t think you can compile for compute capability 3.0, as __ldg() is defined for >=3.5 as described here. |
st49030 | I don’t think you can compile for compute capability 3.0, as __ldg() is defined for >=3.5 as described here 1. |
st49031 | Here is my code in which I am getting the error:
class Darknet(nn.Module):
    def __init__(self, cfgfile):
        super(Darknet, self).__init__()
        self.blocks = parse_cfg(cfgfile)
        self.net_info, self.module_list = create_modules(self.blocks)

    def forward(self, x, CUDA):
        modules = self.blocks[1:]
        outputs = {}   # We cache the outputs for the route layer
        write = 0
        for i, module in enumerate(modules):
            module_type = (module["type"])
            if module_type == "convolutional" or module_type == "upsample":
                x = self.module_list[i](x)
            elif module_type == "route":
                layers = module["layers"]
                layers = [int(a) for a in layers]
                if (layers[0]) > 0:
                    layers[0] = layers[0] - i
                if len(layers) == 1:
                    x = outputs[i + (layers[0])]
                else:
                    if (layers[1]) > 0:
                        layers[1] = layers[1] - i
                    map1 = outputs[i + layers[0]]
                    map2 = outputs[i + layers[1]]
                    x = torch.cat((map1, map2), 1)
            elif module_type == "shortcut":
                from_ = int(module["from"])
                x = outputs[i-1] + outputs[i+from_]
            elif module_type == 'yolo':
                anchors = self.module_list[i][0].anchors
                # Get the input dimensions
                inp_dim = int(self.net_info["height"])
                # Get the number of classes
                num_classes = int(module["classes"])
                # Transform
                x = x.data
                x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
                if not write:   # if no collector has been initialised
                    detections = x
                    write = 1
                else:
                    detections = torch.cat((detections, x), 1)
            outputs[i] = x
        return detections

model = Darknet("cfg/yolov3.cfg")
inp = get_test_input()
pred = model(inp, torch.cuda.is_available())
print(pred) |
My predict_transform function, where I am getting the error:
def predict_transform(prediction, inp_dim, anchors, num_classes, CUDA = True):
    batch_size = prediction.size(0)
    stride = inp_dim // prediction.size(2)
    grid_size = inp_dim // stride
    bbox_attrs = 5 + num_classes
    num_anchors = len(anchors)

    prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size)
    prediction = prediction.transpose(1,2).contiguous()
    prediction = prediction.view(batch_size, grid_size*grid_size*num_anchors, bbox_attrs)
    anchors = [(a[0]/stride, a[1]/stride) for a in anchors]

    # Sigmoid the centre_X, centre_Y and object confidence
    prediction[:,:,0] = torch.sigmoid(prediction[:,:,0])
    prediction[:,:,1] = torch.sigmoid(prediction[:,:,1])
    prediction[:,:,4] = torch.sigmoid(prediction[:,:,4])

    # Add the center offsets
    grid = np.arange(grid_size)
    a,b = np.meshgrid(grid, grid)
    x_offset = torch.FloatTensor(a).view(-1,1)
    y_offset = torch.FloatTensor(b).view(-1,1)
    if CUDA:
        x_offset = x_offset.cuda()
        y_offset = y_offset.cuda()
    x_y_offset = torch.cat((x_offset, y_offset), 1).repeat(1,num_anchors).view(-1,2).unsqueeze(0)
    prediction[:,:,:2] += x_y_offset

    # log space transform height and the width
    anchors = torch.FloatTensor(anchors)
    if CUDA:
        anchors = anchors.cuda()
    anchors = anchors.repeat(grid_size*grid_size, 1).unsqueeze(0)
    prediction[:,:,2:4] = torch.exp(prediction[:,:,2:4])*anchors
    prediction[:,:,5: 5 + num_classes] = torch.sigmoid((prediction[:,:, 5 : 5 + num_classes]))
    prediction[:,:,:4] *= stride
    return prediction |
error message
result = self.forward(*input, **kwargs)
File "F:/Detection/darknet.py", line 219, in forward
x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
File "F:\Detection\util.py", line 56, in predict_transform
prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size)
RuntimeError: shape '[1, 255, 3025]' is invalid for input of size 689520 |
st49032 | I’m not familiar with the Darkflow model, but your reshape does not work for the passed shapes.
Based on the error message it looks like prediction contains 689520 values, which could be reshaped to e.g. [1, 255, 2704]. Could you check if your input size, the grid_size etc. are set to their expected values? |
st49033 | I am trying to check the values of the input parameters; from the data I have been able to find out the following values for predict_transform:
inp_dim=608
anchors=9
num_classes=80
But here lies the problem: I am not able to determine prediction.size(0).
x = x.data
So I need the size(0) of this x… I have tried printing it in the function, and for batch_size = prediction.size(0) I get 1.
stride is 11
grid size is around 55
bbox_attrs is 85
num_anchors=9
Now what might be the error??
edit: I got the value of batch_size by adding print(batch_size) in predict_transform before the line which produces the error; same for stride… |
st49034 | Just a guess, but if your grid_size would be 52, prediction would have the shape [1, 255, 2704], which would perfectly match the input size of 689520.
Could this be a problem? |
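A quick arithmetic check of that guess (255 = 3 anchors x 85 bbox attributes, i.e. 5 + 80 classes):
print(689520 // 255)       # 2704
print(int(2704 ** 0.5))    # 52, so a grid_size of 52 would match the 689520 elements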
st49035 | I am sorry I did not get back to you; my code started working when I made some changes in the cfg file after converting it to text format…
Now it is working. The final shape of my prediction is 1x10647x85 |
st49036 | I work on a similar project. The problem is, PyTorch keeps giving me an error in
prediction[:,:,:2] += x_y_offset
RuntimeError: expected type torch.FloatTensor but got torch.cuda.FloatTensor
edit: I already solved it by adding this line at the top of predict_transform:
prediction = prediction.to(torch.device("cuda"))
but right now, I am hitting the same exact problem as you… |
st49037 | I am stuck with the same problem. Can you please mention the changes that you’ve made in cfg file? |
st49038 | You can see the call
x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
The second parameter represents the input image dimension. This value is 608 in yolov3.cfg,
but in the get_test_input() function the image is resized to 416, so they conflict:
img = cv2.resize(img, (416,416))
So you should make them the same; you have two choices. |
st49039 | def get_test_input():
img = cv2.imread("dog-cycle-car.png")
img = cv2.resize(img, (416,416)) #Resize to the input dimension
img_ = img[:,:,::-1].transpose((2,0,1)) # BGR -> RGB | H X W C -> C X H X W
img_ = img_[np.newaxis,:,:,:]/255.0 #Add a channel at 0 (for batch) | Normalise
img_ = torch.from_numpy(img_).float() #Convert to float
img_ = Variable(img_) # Convert to Variable
return img_
The size of the resized input image in get_test_input() must be the same as the width and height in the cfg file. In this case, it has to be resized to (608, 608) instead of (416, 416). Otherwise, self.net_info["height"] in the forward function has to be 416. |
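To avoid the mismatch in the first place, one option is to read the input size from the parsed cfg instead of hard-coding it. A minimal sketch (assuming, as in this tutorial, that the Darknet module exposes the parsed cfg as net_info, and that cv2, numpy as np and torch are already imported):
def get_test_input(model):
    inp_dim = int(model.net_info["height"])       # 608 for the stock yolov3.cfg
    img = cv2.imread("dog-cycle-car.png")
    img = cv2.resize(img, (inp_dim, inp_dim))     # resize to the network's input size
    img_ = img[:, :, ::-1].transpose((2, 0, 1))   # BGR -> RGB | H x W x C -> C x H x W
    img_ = img_[np.newaxis, :, :, :] / 255.0      # add batch dimension and normalise
    return torch.from_numpy(img_).float()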
st49040 | Thanks, I solved it the same way.
I changed both values 'width' and 'height' to 416. |
st49041 | Hello community,
When I get a model on CPU then do model.load_state_dict(torch.load(PATH, map_location=device)) as explained here 15, model.device doesn’t return the device specified in model.load_state_dict(torch.load(PATH, map_location=device)) but “cpu” instead. I then have to perform model.to(device) for it to be on the desired device.
Yet when I load a random Tensor stored on an HDD with my_tensor = torch.load(PATH, map_location=device), my_tensor.device does return the device specified in torch.load(PATH, map_location=device).
Why is that? Is it load_state_dict that behaves in a special way? Or do I also need to do my_tensor = my_tensor.to(device) after my_tensor = torch.load(PATH, map_location=device)?
And can I do my_tensor = torch.load(PATH, map_location="cpu") then my_tensor = my_tensor.to("cuda:0")? I don’t quite get if both are related or not, if they should be consistent and performed successively or not.
Thanks for the help. |
st49042 | Solved by albanD in post #4
Yes the end result is the same. So no need to add a .to() if you already used map_location properly. |
st49043 | Hi,
The map_location changes the device of the Tensors in the state dict that is returned.
But when you load_state_dict(), then these values are loaded (and only values) into the model. But that does not change the model’s device! you will need to move the model itself with .to() if you want to have it on a different device. |
st49044 | Ok, thanks for the answer. So the model is kind of separate from its weights.
Now talking about Tensors, do I need to do my_tensor = my_tensor.to(device) after my_tensor = torch.load(PATH, map_location=device)?
In other words, are my_tensor = torch.load(PATH, map_location="cpu") then my_tensor = my_tensor.to("cuda:0") VERSUS my_tensor = torch.load(PATH, map_location="cuda:0") exactly the same? |
st49045 | FlorentMeyer:
In other words, are my_tensor = torch.load(PATH, map_location="cpu") then my_tensor = my_tensor.to("cuda:0") VERSUS my_tensor = torch.load(PATH, map_location="cuda:0") exactly the same?
Yes the end result is the same. So no need to add a .to() if you already used map_location properly. |
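To make that concrete, here is a small sketch (editor's addition; PATH and model are placeholders) of the two equivalent ways to bring a checkpoint onto the GPU:
import torch

device = torch.device("cuda:0")

# Option A: remap the tensors while loading
state_dict = torch.load(PATH, map_location=device)

# Option B: load on the CPU first, then move every tensor
state_dict = torch.load(PATH, map_location="cpu")
state_dict = {k: v.to(device) for k, v in state_dict.items()}

# Either way, load_state_dict only copies values; the model itself still has to be moved:
model.load_state_dict(state_dict)
model.to(device)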
st49046 | I get
RuntimeError: arguments are located on different GPUs at /pytorch/torch/lib/THC/generic/THCTensorIndex.cu:390
when I use the index_select function with torch.nn.DataParallel(net).
During the forward pass of my model, I use tensor.index_select() to flatten the output feature map of a Conv layer so it can be fed into a linear layer. Basically, I am implementing the next CONV layer with a linear layer, which I am going to use to increase the flexibility of my model. It works fine on one GPU, and I am able to reproduce the results of the next CONV layer using index_select and the linear layer on top. Here is the part of the code where I get this error.
import torch.nn as nn
import torch, math
from torch.autograd import Variable
class SpatialPool(nn.Module):
def __init__(self, amd0=225, kd=3):
super(SpatialPool, self).__init__()
print('*** spatial_pooling.py : __init__() ***', amd0, kd)
self.use_gpu = True
self.amd0 = amd0 #225
self.kd = kd
self.padding = nn.ReplicationPad2d(1).cuda()
ww = hh = int(math.sqrt(amd0)) ## 15
counts = torch.LongTensor(amd0,kd*kd) ## size [225,9]
v = [[(hh+2)*i + j for j in range(ww+2)] for i in range(hh+2)]
count = 0
for h in range(1,hh+1):
for w in range(1,ww+1):
counts[count,:] = torch.LongTensor([v[h - 1][w - 1], v[h - 1][w], v[h - 1][w + 1],
v[h][w - 1], v[h][w], v[h][w + 1],
v[h + 1][w - 1], v[h + 1][w], v[h + 1][w + 1]])
count += 1
self.counts = counts.cuda()
def forward(self, fm):
fm = self.padding(fm) ## FM is Variable of size[batch_size,512,15,15]
fm = fm.permute(0, 2, 3, 1).contiguous()
fm = fm.view(fm.size(0), -1, fm.size(3))
print('fm size and max ', fm.size(), torch.max(self.counts))
pfm = fm.index_select(1,Variable(self.counts[:,0]))
for h in range(1,self.kd*self.kd):
pfm = torch.cat((pfm,fm.index_select(1, Variable(self.counts[:, h]))),2)
# print('pfm size:::::::: ', pfm.size()) #[batch_size,225,512*9]
return pfm
Here fm is a Variable of size [batch_size, 512, 15, 15].
Full error message looks like below
File "/home/gurkirt/Dropbox/sandbox/ssd-pytorch-linear/layers/modules/feat_pooling.py", line 48, in forward
pfm = self.spatial_pool1(fm) # pooled feature map
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/home/gurkirt/Dropbox/sandbox/ssd-pytorch-linear/layers/modules/spatial_pooling.py", line 36, in forward
pfm = fm.index_select(1,Variable(self.counts[:,0]))
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 681, in index_select
return IndexSelect.apply(self, dim, index)
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 297, in forward
return tensor.index_select(dim, index)
RuntimeError: arguments are located on different GPUs at /pytorch/torch/lib/THC/generic/THCTensorIndex.cu:390 |
st49047 | Solved by richard in post #7
Okay, I repro-ed the problem. You should use register_buffer to add the counts tensor to the Module so that DataParallel will pick it up. The following will fix it:
import torch.nn as nn
import torch
from torch.autograd import Variable
import math
def main():
torch.set_default_tensor_type('tor… |
st49048 | I don’t know what version of pytorch you’re using, but I tested this on a build from source and it doesn’t error out. |
st49049 | richard:
I don’t know what version of pytorch you’re using, but I tested this on a build from source and it doesn’t error out
Thanks for the reply. Did you test by applying DataParallel on multiple GPUs? |
st49050 | Yes, I tested with dataparallel on multiple GPUs. You could try building from source to see if the problem goes away, or wait for the next release (should be out soon). |
st49051 | Hi Richard, thanks for the help. I seriously need help; I have been stuck at this point for a few days and I still get the same error. I installed the latest pytorch source from GitHub by following the instructions at https://github.com/pytorch/pytorch#from-source 13.
Here is the complete script to reproduce the error. Please check the part in the main function where I create mynet and apply DataParallel. I run the snippet below on two GPUs by putting the whole thing in a python script named test.py and running it with CUDA_VISIBLE_DEVICES=0,1 python test.py
import torch.nn as nn
import torch
from torch.autograd import Variable
import math
def main():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
net = mynet()
net = net.cuda()
net = torch.nn.DataParallel(net)
x = torch.autograd.Variable(torch.FloatTensor(64, 512, 15, 15).cuda()) # batch_sizexCxWxH
out = net(x)
print(out.size())
class SpatialPool(nn.Module):
def __init__(self, amd0=225, kd=3):
super(SpatialPool, self).__init__()
print('*** spatial_pooling.py : __init__() ***', amd0,kd)
self.use_gpu = True
self.amd0 = amd0 #225
self.kd = kd
self.padding = nn.ReplicationPad2d(1).cuda()
ww = hh = int(math.sqrt(amd0)) ## 15
counts = torch.LongTensor(amd0,kd*kd) ## size [225,9]
v = [[(hh+2)*i + j for j in range(ww+2)] for i in range(hh+2)]
count = 0
for h in range(1,hh+1):
for w in range(1,ww+1):
counts[count,:] = torch.LongTensor([v[h - 1][w - 1], v[h - 1][w], v[h - 1][w + 1],
v[h][w - 1], v[h][w], v[h][w + 1],
v[h + 1][w - 1], v[h + 1][w], v[h + 1][w + 1]])
count += 1
self.counts = counts.cuda()
def forward(self, fm):
fm = self.padding(fm) ## FM is Variable of size[batch_size,512,15,15]
fm = fm.permute(0, 2, 3, 1).contiguous()
fm = fm.view(fm.size(0), -1, fm.size(3))
print('fm size and max ', fm.size(), torch.max(self.counts))
pfm = fm.index_select(1,Variable(self.counts[:,0]))
for h in range(1,self.kd*self.kd):
pfm = torch.cat((pfm,fm.index_select(1, Variable(self.counts[:, h]))),2)
# print('pfm size:::::::: ', pfm.size()) #[batch_size,225,512*9]
return pfm
class mynet(nn.Module):
def __init__(self):
super(mynet, self).__init__()
self.cl = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
self.featPool = SpatialPool(amd0=225)
def forward(self, x):
x = self.cl(x)
x = self.featPool(x)
return x
if __name__ == '__main__':
main()
When I put the whole code above in a python script named test.py and run it using CUDA_VISIBLE_DEVICES=0,1 python test.py, I get this:
File "dummy_test.py", line 61, in <module>
main()
File "dummy_test.py", line 12, in main
out = net(x)
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 68, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 78, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
raise output
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 42, in _worker
output = module(*input, **kwargs)
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "dummy_test.py", line 57, in forward
x = self.featPool(x)
File "/home/gurkirt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "dummy_test.py", line 42, in forward
pfm = fm.index_select(1,Variable(self.counts[:,0]))
RuntimeError: arguments are located on different GPUs at /home/gurkirt/pytorch/aten/src/THC/generic/THCTensorIndex.cu:452 |
st49052 | Okay, I repro-ed the problem. You should use register_buffer to add the counts tensor to the Module so that DataParallel will pick it up. The following will fix it:
import torch.nn as nn
import torch
from torch.autograd import Variable
import math
def main():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
net = mynet()
net = net.cuda()
net = torch.nn.DataParallel(net)
x = torch.autograd.Variable(torch.FloatTensor(64, 512, 15, 15).cuda()) # batch_sizexCxWxH
out = net(x)
print(out.size())
class SpatialPool(nn.Module):
def __init__(self, amd0=225, kd=3):
super(SpatialPool, self).__init__()
print('*** spatial_pooling.py : __init__() ***', amd0,kd)
self.use_gpu = True
self.amd0 = amd0 #225
self.kd = kd
self.padding = nn.ReplicationPad2d(1).cuda()
ww = hh = int(math.sqrt(amd0)) ## 15
counts = torch.LongTensor(amd0,kd*kd) ## size [225,9]
v = [[(hh+2)*i + j for j in range(ww+2)] for i in range(hh+2)]
count = 0
for h in range(1,hh+1):
for w in range(1,ww+1):
counts[count,:] = torch.LongTensor([v[h - 1][w - 1], v[h - 1][w], v[h - 1][w + 1],
v[h][w - 1], v[h][w], v[h][w + 1],
v[h + 1][w - 1], v[h + 1][w], v[h + 1][w + 1]])
count += 1
# self.counts = counts.cuda()
self.register_buffer("counts", counts)
def forward(self, fm):
fm = self.padding(fm) ## FM is Variable of size[batch_size,512,15,15]
fm = fm.permute(0, 2, 3, 1).contiguous()
fm = fm.view(fm.size(0), -1, fm.size(3))
print('fm size and max ', fm.size(), torch.max(self.counts))
pfm = fm.index_select(1,Variable(self.counts[:,0]))
for h in range(1,self.kd*self.kd):
pfm = torch.cat((pfm,fm.index_select(1, Variable(self.counts[:, h]))),2)
# print('pfm size:::::::: ', pfm.size()) #[batch_size,225,512*9]
return pfm
class mynet(nn.Module):
def __init__(self):
super(mynet, self).__init__()
self.cl = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
self.featPool = SpatialPool(amd0=225)
def forward(self, x):
x = self.cl(x)
x = self.featPool(x)
return x
if __name__ == '__main__':
main() |
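A short note on why this works (added for context, not from the original reply): a tensor added with register_buffer becomes part of the module's state, so .cuda()/.to() moves it together with the parameters and DataParallel copies it onto each replica's device, which the plain self.counts = counts.cuda() assignment did not guarantee. A quick check with the fixed module, assuming a CUDA machine:
pool = SpatialPool(amd0=225)          # counts starts out on the CPU
print(pool.counts.is_cuda)            # False
pool = pool.cuda()
print(pool.counts.is_cuda)            # True -- the buffer moved with the module
print("counts" in pool.state_dict())  # True -- buffers are saved alongside parameters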
st49053 | Hi all,
I am new to pytorch and also new to autograd. My question is: do I need to compute the partial derivatives for my function's parameters?
For example: my new layer wants to compute a 1-d Gaussian probability density value; the function is
f(x) = a * exp((x - b)^2 / c),
where a, b, c are the parameters that need to be updated. I think all of these operations are basic operations and the output is a scalar. Do I still need to write backward code like we do in Caffe? Or may I just define a new module with only a forward function, so that pytorch will compute the parameters' derivatives automatically for me? |
st49054 | Sure, this will be handled for you. For example:
import torch
import torch.nn as nn
from torch.autograd import Variable
class Gaussian(nn.Module):
def __init__(self):
self.a = nn.Parameter(torch.zeros(1))
self.b = nn.Parameter(torch.zeros(1))
self.c = nn.Parameter(torch.zeros(1))
def forward(self, x):
# unfortunately we don't have automatic broadcasting yet
a = self.a.expand_as(x)
b = self.b.expand_as(x)
c = self.c.expand_as(x)
return a * torch.exp((x - b) ** 2 / c)
module = Gaussian()
x = Variable(torch.randn(20))
out = module(x)
loss = loss_fn(out)
loss.backward()
# Now module.a.grad should be non-zero. |
st49055 | Thank you so much for your kind help! I still would like to ask a few questions:
(1) should we add super(Gaussian, self).__init__() after def __init__(self):?
(2) After we add this new module to our network and start to train it, do I only need to use
optimizer = optim.SGD(net.parameters(), lr = 0.01)
optimizer.zero_grad()
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
to update all the parameters? |
st49056 | My bad, I typed the example quickly just to give you a sense of what the code should look like, so I forgot about the super call. Nice catch!
Yes, that should do it. Since a, b and c are nn.Parameter objects that were assigned directly to a module, they should appear in .parameters() of the main container. |
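Putting the thread's corrections together, a minimal runnable sketch might look like this (editor's addition; initialising c to one is an assumption made only so the division is well defined):
import torch
import torch.nn as nn
from torch.autograd import Variable

class Gaussian(nn.Module):
    def __init__(self):
        super(Gaussian, self).__init__()       # the super call discussed above
        self.a = nn.Parameter(torch.zeros(1))
        self.b = nn.Parameter(torch.zeros(1))
        self.c = nn.Parameter(torch.ones(1))   # start at 1 to avoid dividing by zero

    def forward(self, x):
        a = self.a.expand_as(x)
        b = self.b.expand_as(x)
        c = self.c.expand_as(x)
        return a * torch.exp((x - b) ** 2 / c)

module = Gaussian()
x = Variable(torch.randn(20))
out = module(x)
out.sum().backward()
print(module.a.grad)   # gradients are filled in by autograd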
st49057 | Now that we have defined the Gaussian layer, I want to know how to use it in a network of several layers. For example,
class MyNet(nn.Module):
def __init__(self):
super(MyNet, self).__init__()
self.fc1 = nn.Linear(100, 20)
self.gauss = Gaussian()
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.gauss(x)
net = MyNet()
Is the above code correct? Or do we have to set up other things in order to use the newly-defined layer in another network? |
st49058 | Can I define my own layer with a forward/backward function, or will I have to define the layer as a Function so that I can define the backward function too? |
st49059 | You have no need to define the backward function yourself as long as the functions you use to calculate the loss are within PyTorch's scope. You should definitely define a forward function, just as shown by @jdhao. |
st49060 | By the way, if we implement a layer that has scipy or numpy operations inside of it, like in here 34, can it run and be accelerated on the GPU? Or does the layer just run on the CPU because numpy and scipy cannot run on the GPU? |
st49061 | you can add .cuda() to both the module and the input to see if the given example can run without any error. |
st49062 | @jdhao Yes, it works; I have tried it before. However, I want all operations to run on the GPU. In that example, I think (please correct me if I am wrong) it can still run, but the operations that run on the GPU are just the tensor operations, like in here 3, or some basic pytorch functions. So when we use .cuda() there will be a big communication cost between GPU and CPU. For example, when I try a manual convolution using a python loop over a pytorch tensor, it takes far more time compared with the ready-made pytorch convolution. I just want to know what I am missing in extending pytorch: whether all operations, even the scipy ones, can really run on the GPU, or whether the scipy part still runs on the CPU while the tensor is on the GPU, which gives a big communication cost between CPU and GPU. |
st49063 | Most of the time we do not need to extend PyTorch using numpy. As long as you use the builtin methods of Variable, you only need to write the forward method, and the backward gradient computation is handled by autograd. So using a composition of builtin Variable methods to achieve what you want saves time. |
st49064 | @herleeyandi, the portions of your code which use scipy will not be GPU-accelerated. Only operations on CUDA Torch tensors and Variables will be GPU-accelerated. |
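To illustrate the communication cost being discussed (a sketch only; some_scipy_op stands in for whatever scipy/numpy routine the layer calls), any numpy/scipy step inside forward forces a round trip through host memory:
x = torch.randn(4, 4).cuda()              # data living on the GPU
y_np = some_scipy_op(x.cpu().numpy())     # copied to the CPU, computed there
y = torch.from_numpy(y_np).cuda()         # copied back to the GPU afterwards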
st49065 | @colesbury I see, so what if I want to create some functions that are GPU accelerated? Should I use CFFI with kernels coded in CUDA C++? Can you give me more hints? |
st49066 | Yes, you can do that. Writing new CUDA kernels usually requires a lot of effort. If you can express your layer in terms of existing Tensor operations, then that’s usually a better way to get started. If you can’t do that, then you might have to write new kernels. |
st49067 | What if the forward function is not differentiable? How should I update the parameters and return the gradient w.r.t inputs? |