| id | text |
|---|---|
st100100 | I encountered a strange issue; a simplified version of my code is below:
import torch
import torch.nn as nn
import math
class MyLinear(nn.Module):
    def __init__(self, nin, nout):
        super(MyLinear, self).__init__()
        self.nout = nout
        self.nin = nin
        self.weight = nn.Parameter(torch.randn(self.nout... |
st100101 | I have the same error. Did you ever resolve it? To me, it seems like each GPU produces its own scalar output. |
st100102 | Make sure the model outputs are tensors, NOT scalars.
If you do need to output a scalar, reshape it with output.reshape([1]). |
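A minimal sketch of the suggested fix; the scalar value here is a placeholder standing in for a real model output:

```python
import torch

# nn.DataParallel gathers per-GPU outputs along dim 0, which fails for
# 0-dim (scalar) tensors. Reshaping the scalar to shape [1] makes it
# concatenatable across replicas.
loss = torch.tensor(0.5)        # a 0-dim scalar output
print(loss.dim())               # 0
loss = loss.reshape([1])        # now a 1-element, 1-dim tensor
print(loss.shape)               # torch.Size([1])
```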
st100103 | Hi,
I’m using pytorch installed from source, and I got the error RuntimeError: cuda runtime error (9) : invalid configuration argument at /data/users/mabing/pytorch/aten/src/ATen/native/cuda/EmbeddingBag.cu:257 when running loss.backward().
When I replace all cuda() with cpu(), it works perfectly.
Here is the test ... |
st100104 | I found that this error is caused by an empty-size input tensor to the EmbeddingBag layers.
When I change component_in = [[3,5],[2,4,5],[2,3,4],[]] to component_in = [[3,5],[2,4,5],[2,3,4],[2]], it works.
But why doesn’t PyTorch with cuda() support empty-size input tensors for EmbeddingBag layers, when cpu() runs correctly? |
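For reference, recent PyTorch versions do support empty bags when EmbeddingBag is called with explicit offsets (empty bags produce zero vectors). The sketch below flattens the same bags as the example above; the layer sizes are made up:

```python
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(num_embeddings=6, embedding_dim=4, mode='sum')
# Flattened indices for the bags [[3, 5], [2, 4, 5], [2, 3, 4], []]:
indices = torch.tensor([3, 5, 2, 4, 5, 2, 3, 4])
# The repeated final offset (equal to len(indices)) marks the fourth bag as empty.
offsets = torch.tensor([0, 2, 5, 8])
out = emb(indices, offsets)
print(out.shape)   # torch.Size([4, 4])
print(out[3])      # empty bag -> all zeros in recent versions
```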
st100105 | I was trying to access the docs just now but the webpage either appears blank or looks like plain HTML? Does anyone else have problems opening the docs? https://pytorch.org/docs/stable/index.html |
st100106 | Hi, I wonder if there’s been a PyTorch implementation of,
Tunable Efficient Unitary Neural Networks (EUNN)
It’s something that definitely seems to be a solid piece of work !
@smth this seems like something FAIR must have in house already? You, Yann LeCun and Martin Arjovsky have been working on this for quite a wh... |
st100107 | There isn’t a PyTorch implementation of this publicly available as far as I know. |
st100108 | Thanks!
It’s in Tensorflow by one of the authors, Li Jing,
https://github.com/jingli9111/EUNN-tensorflow
Compared to LSTM, at least this is mathematically interpretable !
Multi-layer bi-directional LSTM works great, but you can’t do any theory on it? |
st100109 | Late to the party, but I will leave this here for anyone who bumps into this conversation.
The last few days I have been working on a pytorch implementation which can be found here:
GitHub
flaport/torch_eunn
A Pytorch implementation of an efficient unitary neural network (https://arxiv.... |
st100110 | The PyTorch version 0.3.0 I am using gives me an error with the following line.
loss = torch.zeros(1).to(pred_traj_gt)
AttributeError: 'torch.FloatTensor' object has no attribute 'to'
What should I replace the code with? This is someone else’s code I am trying to run to understand. My GPU only allows me to use 0.3.0 so... |
st100111 | I cannot test it because I don’t have a 0.3 env atm, but you should have a .type() method, which you should be able to call like .type(pred_traj_gt.dtype) or something similar. |
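A hedged sketch of both styles, with pred_traj_gt replaced by a placeholder tensor; `.type_as()` should exist in 0.3-era releases, though I cannot verify on 0.3 either:

```python
import torch

pred_traj_gt = torch.randn(3, 2)  # placeholder for the real tensor

# Modern PyTorch (>= 0.4): .to(tensor) matches dtype and device.
loss = torch.zeros(1).to(pred_traj_gt)

# Old-release equivalent, where .to() does not exist:
loss_old_style = torch.zeros(1).type_as(pred_traj_gt)

print(loss.dtype, loss_old_style.dtype)
```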
st100112 | I am adding a native function and having trouble using other utility methods within ATen. For example, see this simple native function
This is the quick test I’m using:
import torch
s = torch.sparse.DoubleTensor(
    torch.LongTensor([[0,2,0], [0,2,1]]),
    torch.FloatTensor([[1,2],[3,4],[5,6... |
st100113 | For reference, this is the native function I added:
aten/src/ATen/native/TensorShape.cpp
Tensor my_op(const Tensor& self) {
  if (self.is_sparse()) {
    printf("%ld\n", self._sparseDims()); // prints "2"
    printf("%lld\n", _get_sparse_impl(self)->sparseDims()); // prints "140495002593472" on CPU; segfault on CUDA
  }
... |
st100114 | Hi there, I’m probably missing something simple, but can’t figure it out.
The goal is to test the MNIST example with a custom image dataset:
from torchvision.datasets import ImageFolder
from torchvision.transforms import ToTensor
data = ImageFolder(root='PytorchTestImgDir', transform=ToTensor())
print(data.classes)
f... |
st100115 | Looks like the MNIST example’s model expects one color channel (MNIST is black and white) but the images you’re providing are being loaded with three (RGB) channels. |
st100116 | Hi,
I would like to get some general ideas on exporting ONNX model when model accepts both input sequence and output labels as arguments. How would you set up Variables to traverse through the model to export ONNX?
Thanks. |
st100117 | I ran the following code, got an error.
Ryohei_Waitforit_Iwa:
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
torch.where(a[:, 0] == 1)
TypeError: where() missing 2 required positional argument: “input”, “other”
Numpy allows us to use the where function without 2 arguments, but PyTorch does not.
What I want to do is to select t... |
st100118 | From looking at numpy’s doc, when only a single argument is given, it is equivalent to condition.nonzero(). So just do (a[:, 0] == 1).nonzero()? |
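A small self-contained example of the suggestion above, including direct boolean-mask row selection:

```python
import torch

a = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
rows = (a[:, 0] == 1).nonzero()   # indices where the condition holds
print(rows)                        # tensor([[0]])
selected = a[a[:, 0] == 1]         # or select the matching rows directly
print(selected)                    # tensor([[1., 2., 3.]])
```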
st100119 | Hello.
I think there is a bug in in-place bernoulli sampling. I put here the code that check for that. The code samples using in-place and non in-place mode.
import torch
import numpy
print("----BERNOULLI----")
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a = torch.zeros((10,))
print(a.bernoulli_().numpy())
a... |
st100120 | I think there is insufficient documentation available for these APIs. According to the code here, the probability is taken as 0.5 if no value is provided for p.
If you change the code as below, it seems to be giving same functionality as non in-place operator.
import torch
import numpy
torch.manual... |
st100121 | Yes, I agree with you. I think the documentation should be clearer: if we follow the torch.bernoulli() docs, it seems we fill vector “a” with probabilities taken from that vector; at least that is what I understood, and that is how torch.normal() works |
st100122 | Agreed, both the in-place and non in-place versions need arguments for the probabilities, which is not clear in the docs |
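A sketch of passing the probability explicitly in both forms; the value 0.3 and the tensor sizes are arbitrary:

```python
import torch

torch.manual_seed(1)
a = torch.zeros(10)
# In-place form: p defaults to 0.5 but can be passed explicitly.
a.bernoulli_(0.3)

torch.manual_seed(1)
# Non in-place form: the argument is a tensor of per-element probabilities.
b = torch.bernoulli(torch.full((10,), 0.3))
print(a)
print(b)
```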
st100123 | import torch
import numpy as np
a = np.zeros((3,3))
b = torch.from_numpy(a).type(torch.float)
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      3
      4 a = np.zeros((3,3))
----> 5 b = torch.from_numpy(a).type(torch.float)
AttributeError: module 'torch' has no attribute 'float' |
st100124 | I used conda install pytorch=0.1.12 cuda75 -c pytorch to install, following https://pytorch.org/previous-versions/. Do you mean that torch.float and torch.FloatTensor are the same? |
st100125 | For most purposes yes they are the same. torch.float did not exist in 0.1 though so you will need to upgrade pytorch to be able to use it. |
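A quick check of the equivalence on a recent PyTorch version:

```python
import torch

a = torch.zeros(3, 3, dtype=torch.float64)
b = a.type(torch.float)        # dtype form, available since 0.4
c = a.type(torch.FloatTensor)  # legacy tensor-type form
print(b.dtype, c.dtype)        # torch.float32 torch.float32
```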
st100126 | I’m training a Transformer language model with 5000 vocabularies using a single M60 GPU (w/ actually usable memory about 7.5G).
The number of tokens per batch is about 8000, and the hidden dimension to the softmax layer is 512. In other words, the input to nn.Linear(256, 5000) is of size [256, 32, 256]. So, if I unders... |
st100127 | Solved by samarth-robo in post #2 |
st100128 | Memory for network parameters:
(256*5000 + 5000) * 4 * 2 = 10 Mbytes, where the factor of 2 is because the network has 1 tensor for weights and 1 tensor for gradients, and the additional 5000 is for biases.
Memory for data:
8192 * 512 * 4 * 2 = 32 Mbytes
So by those rough calculations, the memory consumption for the so... |
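The arithmetic above can be reproduced directly (pure Python, assuming 4-byte float32 values):

```python
# Rough memory estimate for the softmax layer, reproducing the numbers
# above: 4 bytes per float32, x2 for parameters + gradients and for
# activations + their gradients.
bytes_per_float = 4

params = (256 * 5000 + 5000) * bytes_per_float * 2   # weights + biases, + grads
data = 8192 * 512 * bytes_per_float * 2              # activations, + grads

print(f"parameters: {params / 2**20:.1f} MiB")       # ~9.8 MiB (the "10 Mbytes" above)
print(f"data:       {data / 2**20:.1f} MiB")         # 32.0 MiB
```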
st100129 | Thank you very much. Apparently, from your calculation I was calculating an excessive factor (5000) for “Memory for data” part. |
st100130 | Hi, what I understood from your answer is that the number of parameters (weights and biases) are stored twice in pytorch as per your Memory for network parameters. However, I didn’t quite get the Memory for data part. Shouldn’t the total calculation for a generic network in Pytorch be something like this so that it tak... |
st100131 | What is the difference between
a) torch.from_numpy(a).type(torch.FloatTensor)
b) torch.from_numpy(a).type(torch.float) ?
When I installed PyTorch via the command conda install pytorch torchvision -c pytorch, (b) works. When I installed PyTorch via the command conda install pytorch=0.1.12 cuda75 -c pytorch on another PC... |
st100132 | Solved by albanD in post #2 |
st100133 | Hi,
torch.float has been added recently, it was not in old releases like 0.1.xx, this is why it does not work. |
st100134 | The documentation for nn.CrossEntropyLoss states
The input is expected to contain scores for each class.
input has to be a 2D Tensor of size (minibatch, C).
This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch
However the following code appears to work:
loss = n... |
st100135 | It seems you are right.
I tested it with this small example:
loss = nn.CrossEntropyLoss(reduce=False)
input = torch.randn(2, 3, 4)
input = Variable(input, requires_grad=True)
target = torch.LongTensor(2,4).random_(3)
target = Variable(target)
output = loss(input, target)
loss1 = F.cross_entropy(input[0:1, :, 0], targe... |
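On recent PyTorch versions the extra-dimension behavior is supported officially (input of shape (N, C, d1), target of shape (N, d1)); a self-contained version of a check along these lines might look like this:

```python
import torch
import torch.nn.functional as F

# Scores with an extra dimension and class-index targets for it.
scores = torch.randn(2, 3, 4, requires_grad=True)
target = torch.empty(2, 4, dtype=torch.long).random_(3)

output = F.cross_entropy(scores, target, reduction='none')
print(output.shape)   # torch.Size([2, 4]): one loss per extra-dim element

# Consistency check against a single "column" of the extra dimension:
col0 = F.cross_entropy(scores[:, :, 0], target[:, 0], reduction='none')
print(torch.allclose(output[:, 0], col0))   # True
```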
st100136 | Hi Vellamike,
I still don’t get why you would want to do that.
CrossEntropy simply compares the scores that your model outputs against a one-hot encoded vector where the 1 is in the index corresponding to the true label.
If your input are sequences of length 10, then you need to build a model that accepts 10 inputs and... |
st100137 | @PabloRR100 Sorry for not answering earlier - I just saw your reply.
The reason I want to do that is that is I am doing a sequence-to-sequence network. My labels are sequences themselves - I have one label per sample in my sequence. So each sequence does not fall into one of 3 classes, each element of the sequence fall... |
st100138 | When the mini-batch size is 1, it’s often the case that building the model, calling outputs.backward() and optimizer.step() themselves are more time consuming than the actual gradient computation. Do you have any suggestions? I know the coming JIT support can potentially resolve the model building issue, but the other ... |
st100139 | Hi,
The jit will help both for model building and backward pass.
Unfortunately I don’t know of any way to speed up the optimizer.step() further. |
st100140 | Thanks! If backward() is also supported, then I think the doc has this wrong: it does not say it supports the torch Variable type. |
st100141 | Hi,
Tensors and Variables have been merged a while ago now. So it supports Tensors, both the ones that requires_grad and the ones that don’t. |
st100142 | I originally used Caffe, and I have now converted a model trained with Caffe to PyTorch. In Caffe I used LMDB to package the training images, and I also read these LMDBs in PyTorch. But I do not know how to read the images from LMDB in batches; right now I only read them one by one.
Could anybody give me some suggestions? |
st100143 | Hi,
I have some questions about to() methods of PyTorch when using device as the arguments
I have a class abc:
class abc(nn.Module):
    def __init__(self):
        super(abc, self).__init__()
        self.linear2 = nn.Linear(10,20)
        self.linear1 = nn.Linear(20,30)
        self.a_parmeters = [self.a]
    def forw... |
st100144 | Q1 + Q2: if you don’t call .clone() or manually make a deepcopy, pytorch tries to use the same storage if possible. So the answer to both questions: usually the same storage will be used.
Q3: for instances of torch.nn.Module the changes are made in the self variable (python uses something similar to call by reference) ... |
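A small check of the shared-storage behavior described above, using data_ptr() to compare storage locations:

```python
import torch

a = torch.randn(4)
b = a.to(torch.float32)              # same dtype and device: no copy is made
print(b.data_ptr() == a.data_ptr())  # True: same underlying storage

c = a.clone()                        # explicit copy
print(c.data_ptr() == a.data_ptr())  # False: new storage
```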
st100145 | Pretty much the question in the title.
Someone asked this a year ago, but I think they did not receive a satisfying answer. None of the official pytorch examples use clean code.
For example, the pytorch DataLoader uses a batchsize parameter, but if someone writes their transformations in their dataset class, then that ... |
st100146 | Could you point to the examples where you have the feeling the code is not clean?
Usually random rotations, distortions etc. are used per-sample, so that the batch size stays the same.
There are a few exceptions, e.g. FiveCrop, which returns five crops for a single image.
You can just use the torchvision.transforms ... |
st100147 | emcenrue:
in their dataset class, then that batchsize parameter is no longer adhered to, because the dataloader would then be generating a batch of size batchsize*however_many_transformations_are_applied_to_a_single_sample
I don’t think this is true. |
st100148 | Are you proposing that the dataset should always produce a single sample x, y for each call to __getitem__?
If so, how does one augment the dataset so that it incorporates random rotations/crops/shifts like here:
https://keras.io/preprocessing/image/
?
The only solution I can see is that it will randomly select a sample... |
st100149 | I mentioned that I don’t think any of the examples are clean.
Also, if the random rotations/distortions/etc. are used per-sample, does that mean that the original sample could potentially never be used for training? In keras, the augmentation produces additional samples. Is this not the case for pytorch? In other words... |
st100150 | The usual approach is to just implement the code to load and process one single sample, yes.
That makes it quite easy to write your own code as you don’t have to take care of the batching.
The DataLoader will take care of it even using multiprocessing.
If you want to apply multiple transformations on your data, you cou... |
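A minimal sketch of that pattern; the dataset class, transform, and sizes here are invented for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RandomNoiseDataset(Dataset):
    """Illustrative dataset: __getitem__ returns ONE transformed sample."""
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        x = self.data[idx]
        if self.transform is not None:
            x = self.transform(x)   # applied per sample; batch size is unchanged
        return x

data = torch.randn(10, 3)
dataset = RandomNoiseDataset(data, transform=lambda x: x + 0.01 * torch.randn_like(x))
loader = DataLoader(dataset, batch_size=4)
batch = next(iter(loader))          # the DataLoader does the batching
print(batch.shape)                  # torch.Size([4, 3])
```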
st100151 | I want to create a mask tensor given a list of lengths. This would mean that there should be k ones and all other zeros for each row in the tensor.
eg:
input :
[2, 3, 5, 1]
output
[1 1 0 0 0
1 1 1 0 0
1 1 1 1 1
1 0 0 0 0]
Is the approach below (or something similar) the most efficient way?
seq_lens = torch.tensor([2,3,5,1])
max_le... |
st100152 | Solved by justusschock in post #2 |
st100153 | You could use binary masking to achieve this:
seq_lens = torch.tensor([2,3,5,1]).unsqueeze(-1)
max_len = torch.max(seq_lens)
# create tensor of suitable shape and same number of dimensions
range_tensor = torch.arange(max_len).unsqueeze(0)
range_tensor = range_tensor.expand(seq_lens.size(0), range_tensor.size(1))
# un... |
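For completeness, here is a full version of this approach; the final comparison step is an assumption based on the truncated snippet above:

```python
import torch

seq_lens = torch.tensor([2, 3, 5, 1]).unsqueeze(-1)   # shape (4, 1)
max_len = torch.max(seq_lens)

# Create a tensor of suitable shape and the same number of dimensions.
range_tensor = torch.arange(max_len).unsqueeze(0)      # shape (1, max_len)
range_tensor = range_tensor.expand(seq_lens.size(0), range_tensor.size(1))

# Broadcasted comparison yields the binary mask.
mask = (range_tensor < seq_lens).long()
print(mask)
```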
st100154 | Hi,
TLDR: Is there a flag or some configuration option to make FULL_CAFFE2=1 python3 setup.py install link to a custom BLAS library for optimizations, instead of using Eigen?
For background, we have built a novel data-parallel accelerator, and have compiled an optimized BLAS library targeting this architecture. We woul... |
st100155 | Is there a good reason for this? I would like to use general hashable objects as keys in my ParameterDict, just as I would in a normal Python dict.
Currently, the Pytorch documentation lies by claiming “ParameterDict can be indexed like a regular Python dictionary”, since the indexing must be done only with strings. |
st100156 | There is an issue 11 about a similar topic on github (ModuleDict instead of ParameterDict).
If you like, you could describe your use case there. |
st100157 | Thanks, I’m also just going with the str(int) solution for now, it’s probably not that much of a performance-loss. Still a bit ugly is all. |
st100158 | Sure, I get it, and I think it’s a valid discussion, since there seem to be a few users noting this behavior. |
st100159 | The main reason is that we need to get a string key for the registered submodule, for purposes like saving the state dict. Many things are hashable, but not all of them hash to the same value when you run the same code in a new process. |
st100160 | I have been making some checks on the softmax log softmax and negative log likelihood with pytorch and I have seen there are some inconsistencies. As example suppose a logit output for cifar100 database in which one of the classes has a very high logit in comparison with the rest. For this, the softmax function outputs... |
st100161 | I think that’s the whole point of why we use log softmax instead of softmax, i.e., numerical stability.
If we recall the softmax formula, It involves exponential powers. When we have large numbers (as in the array you mentioned), due to limited numerical precision of our machine, the softmax just kills the precision of num... |
st100162 | Hello, thanks for your reply.
That is not the point of my question. For example, for computing the softmax people use a trick for numerical stability, and you can get accurate softmax post-activations without using the log. For example, a CUDA kernel that implements this trick is:
//Softmax->implemented for not saturating
_... |
st100163 | The max trick that you have mentioned (in the C code) helps when the logit values are moderately high (refer to the max trick here). But the example numbers that you have provided in your question are quite large, so even the ‘max trick’ will fail in this case (due to the exponential of large -ve numbers, for example e^(-15188... |
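A small demonstration of the difference; the logit values are illustrative:

```python
import torch
import torch.nn.functional as F

# Logits with one very large entry, as in the case discussed above.
logits = torch.tensor([1000.0, 0.0, -10.0])

naive = torch.log(torch.softmax(logits, dim=0))   # underflows to -inf
stable = F.log_softmax(logits, dim=0)             # stays finite

print(naive)    # tensor([0., -inf, -inf])
print(stable)   # tensor([0., -1000., -1010.])
```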
st100164 | I get the runtime error “DynamicCUDAInterface::get_device called before CUDA library was loaded” upon trying to call
torch.nn.LSTM(…)
This seems to only happen when I try to do something on a machine without CUDA. However I have the cpu-version of Pytorch installed so I’m not sure why I’m getting this error. |
st100165 | For some reason this error happens when you try to pass an array instead of a scalar for hidden_size. I changed that and it was fixed.
I think the error message is odd. |
st100166 | I am trying to check the count of element-wise equality between two tensors. I have narrowed my issue down to the following short example. The last line results in an “Illegal instruction” message and crashing out of Python.
import torch
torch.manual_seed(1)
x = torch.randint(0, 5, (1000, ))
x.eq(x).sum()
I am using... |
st100167 | I can’t repro this on Linux but I will open an issue on GitHub for you: https://github.com/pytorch/pytorch/issues/10483 |
st100168 | This sounds like the CPU capability dispatch code might not be working properly on Windows. Do you know what model CPU you have? |
st100169 | Thanks all.
I am using a VMware virtual machine - according to the system information within the virtual machine, I have an Intel Xeon CPU E5-2680. |
st100170 | I’m curious if the following works (in a new iPython process):
import os
os.environ['ATEN_DISABLE_AVX2'] = '1'
import torch
torch.manual_seed(1)
x = torch.randint(0, 5, (1000, ))
x.eq(x).sum() |
st100171 | Still got the same error.
By the way, it doesn’t seem to matter what x is. I used a random number generator, but you could replace x with x = torch.ones(5) or x = torch.ones(5, 5) and still get the same error. |
st100172 | I think the issue is that the sum() call is running a kernel that uses AVX2 instructions, but the CPU doesn’t support AVX2 instructions (only AVX).
There are two likely causes:
The CPU capability detection code isn’t working on Windows (or maybe the VM?) and incorrectly thinks the CPU supports AVX2 instructions
The li... |
st100173 | I have the same crash problem in caffe2.dll while calling tensor.sum().
My environment is win7 + python 3.6 + pytorch 0.4.1.
CPU is Intel Pentium which does not support AVX or AVX2 instruction set. |
st100174 | We are compiling caffe2.dll with AVX and AVX2 instruction set. So if your CPU doesn’t support it, you may have to build it yourself. |
st100175 | I have to give multiple image inputs to the following code. I have my input images in a folder. How can I give it as an input one by one and save the output in every single iteration?
This is my code :
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
import argparse
import os
os... |
st100176 | It is really difficult to explain my situation, but I will try my best. :)
I have a (128, 1) tensor with 128 rows, where each row holds a 0 or 1 value.
And I have another (128, 2) tensor. Using the first tensor as indices, I want to choose one value from each row of the second tensor to form a new (128, 1) tensor.
how can I achiev... |
st100177 | I think gather would work for you:
x = torch.randn(128, 2)
index = torch.empty(128, 1, dtype=torch.long).random_(2)
x.gather(1, index) |
st100178 | Hello,
I witnessed a strange behavior recently using F.mse_loss.
Here’s the test I ran:
import torch
import torch.nn as nn
import torch.nn.functional as F
layer = nn.Linear(1,3)
x = torch.rand(1,1)
label = torch.rand(1,3)
out = layer(x)
print('Input: {}\nLabel: {}\nResult: {}'.format(x, label, out))
loss_1 = F... |
st100179 | Hi,
I can’t reproduce that, I get exact same values for both on my machine.
Which version of pytorch are you using?
How did you install pytorch?
I think I remember issues where mm operations were not behaving properly for some wrongly installed/incompatible blas libraries. |
st100180 | How can MFCC features extracted from a speech signal be used to perform word/sentence boundary detection with PyTorch?
Also, can the Connectionist Temporal Classification (CTC) loss be used to achieve the same? |
st100181 | Every time I try to look up an API or its usage in the PyTorch documentation on the web, it is really slow, which demotivates me. Is there any way to get a PDF so that I can find what I want easily on my local computer? |
st100182 | Easiest way:
open the PyTorch documentation in Chrome, hit Ctrl+P to open the print dialog, and save the page as a PDF. |
st100183 | I read this in the tensor.view() documentation.
Could you explain it with an example?
I tried, but I got an error:
z
Out[12]:
tensor([[ 0.9739, 0.6249],
[ 1.6599, -1.1855],
[ 1.4894, -1.7739],
[-0.8980, 1.5969],
[-0.4555, 0.7884],
[-0.3798, -0.3718]])
z = x.view(1, 2)
Traceback (most recent call last):
File “D:\temp\Python3... |
st100184 | Hi,
What is x in that example?
Anyway view is like reshape in numpy (with additional considerations I am not familiar with) but if you call:
x.view(shape)
then there should be as many elements in x as in a tensor with size()==shape. (to go from a 3D tensor of size C,H,W to a 2D tensor of size()==shape then shape[0] x ... |
st100185 | x is just a tensor. I understood it the same way you explained torch.Tensor.view().
my question is from here:
“The returned tensor shares the same data and must have the same number of elements, but may have a different size” |
st100186 | As @el_samou_samou explained, the number of elements stays the same, while the size may differ.
Here is a small example:
x = torch.randn(2, 2, 2, 2)
print(x.size())
x = x.view(-1, 2, 1)
print(x.size())
x = x.view(-1)
print(x.size())
While the underlying data of x stays the same, its size changes after each view call.
... |
st100187 | Now I understand! I had understood it as meaning the total size could change. Thanks for the example |
st100188 | Hello everyone,
I’m running into a problem: the memory consumption shown on my dashboard looks weird…
At runtime, memory consumption per process is about 8G when training on a single GPU on a single machine; but it increases from 8G to 32G when I use multiple machines.
But each consumption has decreased when I... |
st100189 | I found something. I rewrote line 41 of the DistributedSampler class
indices = list(torch.randperm(len(self.dataset), generator=g))
as follow:
indices = torch.randperm(len(self.dataset), generator=g).numpy().tolist()
It works for me and the memory consumption is maintained at a certain level. |
st100190 | stack trace
self.lstm = nn.LSTM(input_size = n_features, hidden_size=hidden_size, batch_first=True)
File "/pytorch4/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 409, in __init__
super(LSTM, self).__init__('LSTM', *args, **kwargs)
File “/pytorch4/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 52, in... |
st100191 | Hi all,
I am trying to train vgg13 (a pretrained model) to classify images. I replaced its classifier:
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 5000)),
    ('relu', nn.ReLU()),
    ('dropout', nn.Dropout(0.2)),
    ('fc2', nn.Linear(5000, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))
when I start training this mo... |
st100192 | Could you explain a bit more, what you are experiencing?
Is your training slow using a CNN?
How did you time the training? |
st100193 | It seems like the time for training a CNN model on multiple GPUs is roughly the same as on a single GPU, but for LSTMs there is a difference. Is that normal? Thanks! |
st100194 | The time for each iteration or epoch?
In the former case this would be a perfectly linear speedup.
In the latter case, the bottleneck might be your data loading, e.g. loading from a HDD instead of a SSD.
Are you using the same DataLoader for the CNN and LSTM run? |
st100195 | I checked the time when I called something like for i in data_loader and that is pretty fast. The majority of time was spent at the step result = model(data) and optimizer.step() so I am not sure what happened. It does not seem to be a data loader issue.
I track time for 50 steps so I think it is close to your latter ca... |
st100196 | So the 50 steps using multiple GPUs take the same time as 50 steps using a single GPU, e.g. 1 minute?
Assuming you’ve scaled up your batch size for DataParallel this would be perfectly fine, as your wall time for an epoch will be divided by your number of GPUs now. |
st100197 | yes, roughly.
Should I scale the batch size up? I am wondering whether a too-large batch size leads to bad performance (say 128 -> 1024 with 8 GPUs). |
st100198 | Your data will be split across the devices by chunking in the batch dimension.
If your single-GPU model worked well with a batch size of e.g. 128, you could use a batch size of 128*4 for 4 GPUs.
Each model will get a batch of 128 samples, so that the performance should not change that much. |
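A hedged sketch of this setup; the layer sizes and GPU count are invented. On a CPU-only machine nn.DataParallel simply falls back to running the wrapped module, and during backward the gradients from all replicas are accumulated into the original parameters:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)
parallel_model = nn.DataParallel(model)

# With N GPUs, a batch of size 128 * N is chunked along dim 0 so that
# each replica sees 128 samples.
batch = torch.randn(128 * 4, 10, device=device)
out = parallel_model(batch)
print(out.shape)   # torch.Size([512, 2])
```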
st100199 | Okay. Sorry I am still a little bit confused here. You said that each model will get 128 samples, however, at the backward step, how will the training work? Will that be something like taking the sum from each GPU? |