| id | text |
|---|---|
st100000 | I have an integer to be returned, along with some other stuff. Should I just return the integer, or return something like torch.LongTensor(num)? |
st100001 | If you are returning something with grad, you shouldn’t convert it, as it will be used for backprop. But if it’s just for visualizing that value, you could do it after the backprop step. |
st100002 | This is the data so it is neither grad nor visualization. The data will be used in forward step for the model. |
st100003 | Also, if it’s data for the forward step, you should keep it as a Tensor, since backprop needs grad and PyTorch backprop operations are defined only for the Tensor type. |
st100004 | Have you ever tried to return an integer itself? That will also be converted to a tensor…
The question then is whether we want to explicitly convert that. |
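A minimal sketch of the pattern under discussion (class and variable names are my own, illustrative choices): a `Dataset` can return the plain Python int alongside the padded tensor, and the default `collate_fn` batches the ints into a `LongTensor` automatically.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class PaddedSeqDataset(Dataset):
    """Returns a padded sequence plus its original length (as a plain int)."""
    def __init__(self, sequences, pad_to=10):
        self.sequences = sequences
        self.pad_to = pad_to

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        seq = self.sequences[idx]
        length = len(seq)
        padded = torch.zeros(self.pad_to, dtype=torch.long)
        padded[:length] = torch.tensor(seq)
        # Returning the plain int is fine: the default collate_fn turns a
        # batch of Python ints into a LongTensor for you.
        return padded, length

ds = PaddedSeqDataset([[1, 2, 3], [4, 5]])
batch, lengths = next(iter(DataLoader(ds, batch_size=2)))
```

So whether you return `length` or `torch.tensor(length)` mostly affects readability; after collation you get a tensor either way.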
st100005 | I have an integer to be returned, along with some other stuff.
Why do you want to return this integer? What does this integer contain? Is it the model output? Is the model input type Tensor?
Have you ever tried to return an integer itself?
Yes. If you try to backprop with this returned int it will throw an error as... |
st100006 | Let us say you want to build an RNN model; you will need a padded sequence and its original length (that is where the integer comes in). I got no issue when just returning an integer, so I am not sure why you are getting the error. |
st100007 | I got no issue when just returning an integer so I am not sure why you are getting the error.
Maybe this post will make things clear about the backprop problem I mentioned. |
st100008 | I did not get this issue. I think we were probably making the problem complex so I’d try to rephrase it. The fundamental question here is, when making a customized dataset, what is the best way to return the sequence (with variable lengths) and its length?
For example, it would be something like
input: ["some string", ... |
st100009 | According to this comment, ScatterAssign only works on CPU, but I see that it also exists in the corresponding .cu file here.
Is the first comment accurate? |
st100010 | Hi, I have a question regarding using register_forward_hook for recurrent networks. Suppose I have a module that contains a LSTMCell as a submodule which runs for N time steps in one forward() call. I want to examine the hidden states of this LSTMCell at every time step. But, if I do register_forward_hook for that LSTM... |
st100011 | After I saw the method, I still don’t know how to extract features of an image from a pretrained model. Can you give a complete example? I need help. |
st100012 | How to extract features of an image from a trained model
To complement @apaszke reply, once you have a trained model, if you want to extract the result of an intermediate layer (say fc7 after the relu), you have a couple of possibilities.
You can either reconstruct the classifier once the model was instanti... |
st100013 | [screenshot: error traceback, 1366×1014]
I’m getting this error when trying to train my network. I’ve followed this tutorial https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel to use a custom dataloader and whenever I reference the dataloader object I get this same error.
Any a... |
st100014 | I am trying to convert torchvision vgg layers into a block of layers, which can be seen in the following code. I am trying to put them in a defaultdict and use it as a block. When I try to print the model, the defaultdict does not appear. How do I make the defaultdict appear in my model?
# here feature head is vgg model take... |
st100015 | I finally solved it by using ModuleDict instead of defaultdict.
Here is the implementation of ModuleDict
https://pytorch.org/docs/stable/_modules/torch/nn/modules/container.html#ModuleDict |
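A short sketch of the fix described above (block names and layer sizes are illustrative): unlike a plain `defaultdict`, `nn.ModuleDict` registers each sub-module, so the blocks show up in `print(model)`, `model.parameters()`, and the `state_dict`.

```python
import torch
import torch.nn as nn

class Blocks(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleDict registers each value as a sub-module; a plain
        # dict/defaultdict attribute would be invisible to PyTorch.
        self.blocks = nn.ModuleDict({
            'block1': nn.Sequential(nn.Linear(8, 16), nn.ReLU()),
            'block2': nn.Sequential(nn.Linear(16, 4), nn.ReLU()),
        })

    def forward(self, x):
        for block in self.blocks.values():
            x = block(x)
        return x

model = Blocks()
out = model(torch.randn(2, 8))
```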
st100016 | I have 1 encoder and 2 decoders. I want to give a weighted loss to the encoder, meaning the decoders will be trained from their respective losses, but going into the encoder there will be some weighting of both decoder losses. How can I implement weighted backpropagation for this setup? |
st100017 | Hi,
I’m trying to train a network using cudnn but at every execution I’m getting different results. I have no idea why, as I’m trying to ensure determinism in every way I know.
Here is what I’m currently using to do so:
torch.manual_seed(0)
numpy.random.seed(0)
random.seed(0)
torch.backends.cudnn.deterministic = True
a... |
st100018 | I’m also struggling with reproducibility, and I’m interested to see what the solution(s) discovered by this thread are. By the way, did you try checking with cpu, and seeing if the cpu version is more reproducible? |
st100019 | If you are sampling random numbers on the GPU, you might have to set the torch.cuda.manual_seed.
Have a look at this example code. |
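A hedged sketch collecting the seeding calls mentioned in this thread into one helper (the function name is my own; `torch.cuda.manual_seed_all` is a no-op on CPU-only machines, so it is safe to call unconditionally):

```python
import random

import numpy as np
import torch

def seed_everything(seed=0):
    # Seed every RNG commonly touched by PyTorch training code.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN: prefer deterministic algorithms, disable auto-tuning.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(0)
a = torch.randn(3)
seed_everything(0)
b = torch.randn(3)
```

Note that even with all of this, some cuDNN kernels remain non-deterministic, as discussed later in the thread.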
st100020 | I found that manual_seed sets both for me (at least, on 0.4.0). https://pytorch.org/docs/stable/_modules/torch/random.html#manual_seed |
st100021 | hughperkins:
…manual seed set both for me. (At least…
Yes, using the CPU I always get the same results. So it must be something to do with cudnn. |
st100022 | Idea: provide a short piece of code that is sufficient to reproduce the issue reliably. |
st100023 | It’s a very large network, so it is going to be very difficult for me to reproduce the issue with a short piece of code.
But I’m getting closer to the issue, as changing the tensor type to double instead of float using:
torch.set_default_tensor_type('torch.DoubleTensor')
solves the issue and allows me to get determinis... |
st100024 | Know this convo is a little old but I’m under the impression there’s some non-determinism in a few cuDNN operations, like atomic adds on floating points? Might be the issue here
https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#reproducibility |
st100025 | Does no_grad still allow to update the batch normalization statistics?
I have the following code:
def train_add_stat(model, trn_loader, optimizer, ib_max):
model.train()
with torch.no_grad():
for _, (inputs, targets) in enumerate(trn_loader):
inputs = Variable(inputs.cuda(), requires_grad=Fa... |
st100026 | After upgrading from 0.4 to 0.4.1, I found that a C++ API I used to create variables is deprecated.
For example, in 0.4:
auto max_val = at::zeros(torch::CUDA(at::kFloat), {batch, channel, height});
How can I achieve the same thing in 0.4.1 with TensorOptions? |
st100027 | The equality 1/4 sum_i(z_i) = 3(x_i + 2)^2 is incorrect; it should be 1/4 sum_i(z_i) = 1/4 * 3(x_i + 2)^2 |
st100028 | It’s this one.
I think @chrobles misunderstood.
There is no equality here. i.e., these are two assignment statements separated by a comma.
o = 1/4 sum(z_i)
z_i = 3(x_i + 2)^2 |
st100029 | I am trying to deep copy the LSTM model in my code. But it raised this error: Only Variables created explicitly by the user (graph leaves) support the deepcopy protocol at the moment
How can I solve this?
Thanks! |
st100030 | When I convert a variable to numpy first, then transform it with numpy functions, and finally
convert it back to a PyTorch variable to use as input to the neural network, autograd cannot compute a gradient back to the original variable. If I want a derivative with respect to my original variable, what should I do? |
st100031 | Hi,
The autograd engine only supports PyTorch’s operations. You cannot use numpy operations if you want gradients to be backpropagated. |
st100032 | import pyforms
from pyforms import basewidget
from pyforms.controls import ControlButton
from pyforms.controls import ControlText
class MainPage(basewidget):
…
when I run the program it raises a TypeError:
class Main_Page(basewidget):
TypeError: module() takes at most 2 arguments (3 given)
i ... |
st100033 | I’m afraid you might ask in the wrong place. This is the PyTorch Forum, not the PyForms Forum.
Best of luck resolving this issue |
st100034 | I have a dataset whose labels are from 0 to 39. And I wrap it using torch.utils.data.DataLoader. If I set num_workers to 0, everything works fine. However, if it is set to 2, then the labels batch (a 1-D byte tensor) it loads at some epoch always seems to be bigger than 39, which seems to be 255. What causes this p... |
st100035 | hi, what do you mean by one worker? I used to run on the same machine using 2 workers with another project, and it was fine. By the way, the code works fine using 2 workers until some random epoch of training, when it outputs labels with value 255 and stops my training. I guess the code here may cause the problem. a... |
st100036 | below is my code.
from __future__ import print_function
import torch.utils.data as data
import os
import os.path
import errno
import torch
import json
import h5py
from IPython.core.debugger import Tracer
debug_here = Tracer()
import numpy as np
import sys
import json
class Modelnet40_V12_Dataset(data.Dataset):
... |
st100037 | if your data is numpy.array, you can try like this
self.train_data = torch.from_numpy(self.modelnet40_data['train']['data'].value) |
st100038 | hi, this could partly solve my problem, because this method loads all the data into memory. However, when the dataset is big (in an .h5 file), it is impractical to load all the data into memory, isn’t it? And the problem still exists. |
st100039 | yes, this method loads all the data into memory. If the data is large, I guess you can do it this way. (I haven’t tried this)
def __getitem__(self, index):
if self.train:
shape_12v, label = self.modelnet40_data['train']['data'][index], self.modelnet40_data['train']['label'][index]
I don’t know if it works, you can te... |
st100040 | hi, I tried this one, but it still does not work. And I suspect this issue is related to multi-thread synchronization issues in the DataLoader class. |
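One pattern often suggested for this situation (a sketch, not a verified fix for this thread's specific bug — all dataset/key names are illustrative): open the HDF5 file lazily inside `__getitem__`, so that each DataLoader worker process gets its own file handle instead of sharing one opened before the fork.

```python
import os
import tempfile

import h5py
import numpy as np
import torch
from torch.utils.data import Dataset

class H5LazyDataset(Dataset):
    """Per-item HDF5 loading; nothing is held in memory up front."""
    def __init__(self, h5_path, data_key='data', label_key='label'):
        self.h5_path = h5_path
        self.data_key = data_key
        self.label_key = label_key
        self.h5 = None
        with h5py.File(h5_path, 'r') as f:
            self.length = len(f[data_key])

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        # Opened lazily so each worker process creates its own handle;
        # an h5py handle shared across forked workers can yield exactly
        # the kind of corrupted labels described in this thread.
        if self.h5 is None:
            self.h5 = h5py.File(self.h5_path, 'r')
        x = torch.from_numpy(self.h5[self.data_key][index])
        y = int(self.h5[self.label_key][index])
        return x, y

# Tiny demo file to show the access pattern.
path = os.path.join(tempfile.mkdtemp(), 'demo.h5')
with h5py.File(path, 'w') as f:
    f['data'] = np.arange(12, dtype=np.float32).reshape(4, 3)
    f['label'] = np.array([0, 1, 0, 1])

ds = H5LazyDataset(path)
x0, y0 = ds[0]
```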
st100041 | I have been seeing similar problems with DataLoader when num_workers is greater than 1. My per sample label is [1, 0 …, 0] array. When loading a batch of samples, most of the labels are OK, but I could get something like [70, 250, …, 90] in one row. This problem does not exist when num_workers=1.
Any solution or sugges... |
st100042 | I have also met similar problems. Can anyone figure out how to solve it? Thanks a lot! |
st100043 | This is always the case if you are using Windows (in my computer).
Try it from the command line, not from Jupyter. |
st100044 | Thanks. But I use pytorch in Linux(Archlinux), and the version of pytorch is 0.2-post2. |
st100045 | Can you share your full source code so that I can try it and see that it works on my system? |
st100046 | I have the same problem! How did you solve it?
Besides, there seem to be few answers about this. |
st100047 | This might be related to these two issues. What version of PyTorch are you using? Perhaps updating to 0.4.1 might help.
github.com/pytorch/pytorch
Issue: Always get error "ConnectionResetError: [Errno 104] Connection reset by peer"
opened by Mabinogiysk
on 2018-07-03
closed by SsnL... |
st100048 | Could pytorch print out a list of parameters in a computational graph if the parameters are not in a module? For example, print the list of parameters until d in the following computational graph:
import torch
from torch.autograd import Variable
a = Variable(torch.rand(1, 4), requires_grad=True)
b = a**2
c = b*2
d = c.... |
st100049 | Hi,
No such function exists at the moment.
I guess you could traverse the graph using d.grad_fn and .next_functions, finding all the AccumulateGrad functions and getting their .variable attribute. This would give you all the tensors in which gradients will be accumulated (possibly 0-valued) if you call backward on d.
Why... |
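The traversal described above can be sketched as follows (the helper's name is my own):

```python
import torch

def leaf_tensors(output):
    # Walk the autograd graph from `output` and collect the .variable
    # attribute of every AccumulateGrad node, i.e. the leaf tensors whose
    # .grad would be populated by output.backward().
    leaves, seen, stack = [], set(), [output.grad_fn]
    while stack:
        fn = stack.pop()
        if fn is None or fn in seen:
            continue
        seen.add(fn)
        if hasattr(fn, 'variable'):  # only AccumulateGrad nodes carry this
            leaves.append(fn.variable)
        stack.extend(next_fn for next_fn, _ in fn.next_functions)
    return leaves

a = torch.rand(1, 4, requires_grad=True)
d = (a ** 2 * 2).sum()
params = leaf_tensors(d)
```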
st100050 | class DeephomographyDataset(Dataset):
'''
DeepHomography Dataset
'''
def __init__(self,hdf5file,imgs_key='images',labels_key='labels',
transform=None):
'''
:argument
:param hdf5file: the hdf5 file including the images and the label.
:param transform (call... |
st100051 | Take a look at this example
Model state saving code:
github.com
pytorch/examples/blob/master/imagenet/main.py#L165-L171
save_checkpoint({
'epoch': epoch + 1,
'arch': args.arch,
'state_dict': model.state_dict(),
'best_prec1': best_prec1,
'optimizer' : optimizer.state_dict(),
}... |
st100052 | If you run your experiments inside a Docker container you may find this link interesting:
github.com
docker/cli/blob/master/experimental/checkpoint-restore.md
# Docker Checkpoint & Restore
Checkpoint & Restore is a new feature that allows you to freeze a running
container by checkpointing it, wh... |
st100053 | I am trying to search “what has to be done to add a Operation …”
Where is this file?
https://github.com/pytorch/pytorch/blob/v0.4.1/aten/src/ATen/function_wrapper.py |
st100054 | It might be an easy question, but I am not familiar with the MaxPool layer.
When I use an Embedding layer, it increases the dimension of the tensor.
embedding = nn.Embedding(10, 5)
input = torch.LongTensor([[[1,2,4,5],[4,3,2,9]],[[1,2,4,5],[4,3,2,9]]])
output = embedding(input)
input.size()
torch.Size([2, 2, 4])
output.size()
torc... |
st100055 | Solved by o2h4n in post #2
I have flagged the topic for removal, but I found the answer:
m = nn.MaxPool2d((4,1))
output = m(input) |
st100056 | I have flagged the topic for removal, but I found the answer:
m = nn.MaxPool2d((4,1))
output = m(input) |
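Put together, the accepted answer looks like this (shapes shown in the comments): `MaxPool2d` treats the last two dimensions as (H, W), so a `(4, 1)` kernel pools away the length-4 axis while leaving the 5 embedding channels intact.

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(10, 5)
inp = torch.LongTensor([[[1, 2, 4, 5], [4, 3, 2, 9]],
                        [[1, 2, 4, 5], [4, 3, 2, 9]]])
emb = embedding(inp)            # shape [2, 2, 4, 5]
# The (4, 1) kernel collapses the sequence axis (H=4) to 1,
# keeping the embedding axis (W=5) untouched.
pool = nn.MaxPool2d((4, 1))
out = pool(emb)                 # shape [2, 2, 1, 5]
```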
st100057 | It seems your GPU supports compute capability 3.0 based on this source, which isn’t shipped in the pre-built binaries anymore.
You could compile from source using these instructions to use your GPU. |
st100058 | Hello,
Pytorch doesn’t accept string classes. So, I converted my labels to ints and fed them to my network as follows:
However, I need to keep track of my real labels (string names). I need, at the end of my training, to get the performance on each example and map the int label back to the string label.
How can I do that?
with open('/home/... |
st100059 | Would a simple dict work?
idx_to_label = {
0: 'class0',
1: 'class1',
2: 'class2'
}
preds = torch.argmax(torch.randn(10, 2), 1)
for pred in preds:
print(idx_to_label[pred.item()])
or what do you want to do with these labels? |
st100060 | Hey !
I was trying to get a Variational Autoencoder to work recently, but to no avail. Classic pattern was that the loss would quickly decrease to a small value at the beginning, and just stay there.
Far from optimal, the network would not generate anything useful, only grey images with a slightly stronger intensity i... |
st100061 | i think the reason is the nature of the task (mind you, this is going to be non-math on my part). Basically, in a VAE you are asking the question ‘how closely did each reconstruction match?’, not ‘on average, how well did all my reconstructions match?’. So you want the ‘amount’ of error from a batch, not the average ... |
st100062 | Well, I’m not completely sure of that because a simple auto-encoder works just fine without this flag. However, when you take into account the variational inference, it just doesn’t work anymore without this specific instruction. That’s what puzzled me. |
st100063 | I’ve bumped into the same thing. It really bothered me. I should be able to counteract this mean over the batch using a larger learning rate, but it didn’t seem to work. Setting too high an LR for Adam just blows up the training. Setting it to roughly the highest value before blowing up the training gives just a blob of white pixels in the c... |
st100064 | Consider the following extension of torch.autograd.function:
class MorphingLayer(Function):
@staticmethod
def forward(ctx, input, idx):
ctx.save_for_backward(input, idx)
#implementation of forward pass
return output1, output2
Assume that the gradient w.r.t. input can be obtained ... |
st100065 | This should give you a start for the automatically calculated gradient bit:
class MyFn(torch.autograd.Function):
@staticmethod
def forward(ctx, a_):
with torch.enable_grad():
a = a_.detach().requires_grad_()
res = a**2
ctx.save_for_backward(a, res)
return res.deta... |
st100066 | That’s extremely helpful, thanks! Can someone explain what’s going on under the hood (why detach? why retain_graph?) and if it’s safe to combine autograd and a custom backward pass in this way. Does it break other functionality? |
st100067 | @tom, I think that is really a nice approach. I’d like to explain it a bit:
In PyTorch autograd usually automatically computes the gradients of all operations, as long as requires_grad is set to True. If you however need operations that are not natively supported by PyTorch’s autograd, you can manually define the funct... |
st100068 | Nice explanation! @Florian_1990
Florian_1990:
One thing you could do, is to check if the input ( a_ ) requires gradients and use torch.no_grad() instead of torch.enable_grad() or skip the requires_grad_() part to prevent the calculation of some unnecessary gradients.
It might be easiest to have a wrapper (you wo... |
st100069 | Hi, I really look forward to 1.0 with the tracing and jit compiling capabilities.
To check it out I am using pytorch-nightly from anaconda right now, and in python torch.jit.trace works, and I can save and load the saved Script/Traced Modules.
In the docs (https://pytorch.org/docs/master/jit.html#torch.jit.ScriptModule... |
st100070 | Hi Johannes,
I am waiting for the same API and documentation to become available. I found a recent tutorial: https://pytorch.org/tutorials/advanced/cpp_export.html, it might help until 1.0 is released. |
st100071 | Hi,
I am a newbie in PyTorch.
I am trying to implement a multi-label classifier using MultiLabelMarginLoss() as the loss function.
INPUT_DIM = 74255
NUM_OF_CATEGORIES = 20
NUM_OF_HIDDEN_NODE = 64
class HDNet(nn.Module):
def __init__(self):
super(HDNet, self).__init__()
self.hidden_1 = nn.Linear(INPUT_DIM, NUM_OF_HIDDE... |
st100072 | Hi,
I’m trying to define the Dataset class for our EHR data to be able to utilize the DataLoader, but it comes in the format of a list of list of list for a single subject, see below.
Basically the entire thing is a medical history for a single patient.
For each second-level list of list, e.g.[ [0], [ 7, 364, 8, 30, ... |
st100073 | How would you like to get or process the data further?
Using your own collate_fn for your DataLoader you can just return the medical history as you’ve saved it:
# Create Data
data = [[[random.randint(0, 100)], torch.randint(0, 500, (random.randint(3, 10),)).tolist()]
for _ in range(random.randint(20, 30))]
clas... |
st100074 | Thanks so much! It helps a lot, as currently our model just takes the nested list and goes from there.
But just curious if I want to take a look at the NLP approach and do the padding (2D) for both code length of a single visit and number of visits for each patient, would you mind pointing me to relevant materials (links... |
st100075 | I think pad_sequence could be a good starter, but I’m not that familiar with NLP. |
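A small sketch of `pad_sequence` on nested-list-style data (the values are illustrative stand-ins for the per-visit code lists discussed above): it right-pads every sequence to the longest one, and `batch_first=True` yields a `(num_sequences, max_len)` tensor.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Variable-length "visits" for one patient (values illustrative).
visits = [torch.tensor([7, 364, 8]),
          torch.tensor([30, 2]),
          torch.tensor([5])]
lengths = torch.tensor([len(v) for v in visits])
# Pad every visit to the longest one; 0 is the padding value.
padded = pad_sequence(visits, batch_first=True, padding_value=0)
```

Keeping `lengths` alongside `padded` lets you later pack the sequences or mask out the padding.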
st100076 | I have a data set of ~1k images, each about 60MB on disk. I want to train a UNet-like model with patches of the images, but I am unsure about the best way to construct a training Dataset to feed the model. The images are too large to fit all of them in RAM at the same time, and are too slow to load to have each trainin... |
st100077 | I have spent the last 2 days trying to get a piece of code running and am at my wits end. Someone please help me. I am using nvidia gtx 780m, nvcc -V 9.0 and nvidia smi version 396.44.
Running my code in an environment with PyTorch installed via the command conda install pytorch torchvision -c pytorch returns me the me... |
st100078 | I tried running this snippet
import torch
print(torch.rand(3,3).cuda())
which gave me this error
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-23-78a66e8a8408> in <module>()
1 import torch
---... |
st100079 | Hi,
Is there a particular reason why you want to use such an old version? I am not sure at that time that the default package was compiled with cuda support. |
st100080 | Hello, thanks for the help. My GPU only supports CUDA 7.5. I am okay with using the latest version but I am not sure what it is. I went to https://pytorch.org/previous-versions/ and thought that 0.1.12 is the latest one. What is the latest version that can be used with CUDA 7.5? |
st100081 | I am currently looking for the optimal learning rate when training a GAN. Therefore I generated a generator and a discriminator model and copied both models three times to evaluate four different learning rates.
For copying I tried both
copy.deepcopy(module)
and
module_copy = copy.deepcopy(module)
module_copy.load_stat... |
st100082 | How are the results indicating that the training is being continued?
If you checked, that the models have different parameters, could you just pass a tensor with all ones through the trained model and the randomly initialized model and compare the outputs?
I assume both the generator and discriminator are copied?
Do yo... |
st100083 | ptrblck:
How are the results indicating that the training is being continued?
I am looking at estimates of Wasserstein distances between different classes of training data and generated data. For copies of the same generator, those estimates should be almost identical before training. However, the distances at the b... |
st100084 | ptrblck:
If you checked, that the models have different parameters, could you just pass a tensor with all ones through the trained model and the randomly initialized model and compare the outputs?
I ran the experiment again and checked the outputs of each generator before, between and after training. These are the r... |
st100085 | Hello, Pytorch users!
I am implementing multiagent reinforcement learning and have finished testing it,
and I am trying to convert a CPU tensor to a CUDA tensor by myself.
github.com/verystrongjoe/multiagent_rl
make tensor run in gpu
by verystrongjoe
on 08:56AM - 19 Sep 18
chan... |
st100086 | Hello, I have a tensor with size of BxCxHxW. I want to obtain the new tensor with size of Bx3xCxHxW, How can I do it? I have used unsqueeze(1) function but it only provides Bx1xCxHxW. Thanks
Edit: I also think another solution may work
B= torch.cat([torch.zeros(A.size()).unsqueeze(1),torch.zeros(A.size()).unsqueeze(1)... |
st100087 | Solved by albanD in post #4
This should do it then to create a tensor of same type and on the same device as A.
You can change the dtype and device arguments if you need a different type and/or device.
size = list(A.size())
size.insert(1, 3)
B = torch.zeros(size, dtype=A... |
st100088 | This should do it then to create a tensor of same type and on the same device as A.
You can change the dtype and device arguments if you need a different type and/or device.
size = list(A.size())
size.insert(1, 3)
B = torch.zeros(size, dtype=A.dtype, device=A.device) |
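For completeness, a runnable version of that snippet, plus a view-based alternative (my own addition) for the case where the new axis should hold three copies of A rather than zeros:

```python
import torch

A = torch.randn(2, 4, 8, 8)   # B x C x H x W
size = list(A.size())
size.insert(1, 3)
# Zeros of shape B x 3 x C x H x W, matching A's dtype and device.
B = torch.zeros(size, dtype=A.dtype, device=A.device)
# Alternative: unsqueeze + expand returns a view holding three "copies"
# of A along the new axis without allocating extra memory.
C = A.unsqueeze(1).expand(-1, 3, -1, -1, -1)
```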
st100089 | As in TensorFlow we can specify GPU memory fraction(as given below) , how can we do the same in Pytorch?
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) |
st100090 | Hi,
There is no such option in pytorch. It will allocate the memory as it needs it. |
st100091 | Is there a way to efficiently calculate top-k accuracy in Pytorch when using batches for CNNs?
Currently I use a scikitlearn method called accuracy_score which takes an array of argmax values (so one value in the array is the argmax of an image prediction by the CNN) and compares it to an array of target values (where ... |
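One way to sketch batched top-k accuracy in pure PyTorch, avoiding the round-trip through scikit-learn (the function name and sample values are my own): take the k highest-scoring classes per row with `topk` and check whether the target appears among them via a broadcast comparison.

```python
import torch

def topk_accuracy(output, target, k=5):
    # output: (batch, num_classes) logits; target: (batch,) class indices.
    # A sample counts as correct if its target is among the k
    # highest-scoring classes.
    _, pred = output.topk(k, dim=1)            # (batch, k) predicted classes
    correct = pred.eq(target.unsqueeze(1))     # broadcast compare, (batch, k)
    return correct.any(dim=1).float().mean().item()

logits = torch.tensor([[0.10, 0.90, 0.00],
                       [0.80, 0.15, 0.05],
                       [0.35, 0.25, 0.40]])
targets = torch.tensor([1, 2, 0])
top1 = topk_accuracy(logits, targets, k=1)
top2 = topk_accuracy(logits, targets, k=2)
```

This stays on the GPU if the inputs do, so there is no need to move argmax results to numpy per batch.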
st100092 | I’m a PyTorch beginner and I’m trying to implement a recommender system based on the paper “Scalable Recommender Systems through Recursive Evidence Chains” (https://arxiv.org/abs/1807.02150v1).
In this paper the latent factors Ui for users are defined in (3.1)
and the latent factors Vj for items are are defined in (... |
st100093 | I’m facing a memory issue. I was trying to use 500,000 images to train my model, but it cannot load the images before training.
At the beginning, I used ImageFolder to load the dataset. I searched the forums; someone said I should use my own dataset class.
image_datasets = datasets.ImageFolder(dataset_dir, data_transforms... |
st100094 | batch_size is 4. I think it’s not a batch_size problem; I cannot finish the dataset code. |
st100095 | It may be a simple question. I downloaded the tarball from the github releases page, but the tarball doesn’t work. How can I make it work?
(eee) [ pytorch-0.4.1]$ python setup.py install
fatal: not a git repository (or any of the parent directories): .git
running install
running build_deps
Could not find /home/liangste... |
st100096 | I don’t think it would work. It probably is best to check out the git branch v0.4.1 and use that.
If you clone and then do
git checkout -b v0.4.1 origin/v0.4.1
should do the trick.
Then you can run the git submodule update --init as you suggested.
Best regards
Thomas |
st100097 | After I do
git checkout -b v0.4.1 origin/v0.4.1
and run
python setup.py install
However “pip list” shows the wrong torch version:
torch 0.5.0a0+a24163a
How does this happen? |
st100098 | That just means it’s legit:
github.com/pytorch/pytorch
Issue: Wrong version number for the 0.4.1 release
opened by EKami
on 2018-07-27
closed by soumith
on 2018-07-27
Issue description
The __version__ provided by Pytorch is wrong
See this line
Best ... |
st100099 | environment: Windows 10 + VS 2015 + CUDA 9.2 + Python 2.7.14 + CMake 3.12
How could I install caffe2 successfully?
At the command “H:\pytorch\scripts>build_windows.bat”, the result has several mistakes. I only downloaded the cmake and do not use it. The following is the code:
H:\pytorch\scripts>build_windows.bat
Requirement al... |
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: hello
size_categories:
- 100K<n<1M