title: string (length 15-126)
category: string (3 classes)
posts: list
answered: bool (2 classes)
PyTorch converges much slower than Theano
null
[ { "contents": "I’m trying out Pytorch by comparing model with its Theano version equivalence, and I noticed: The Pytorch model is defined by: <SCODE>class model_5_1(nn.Module):\n def __init__(self, batchsize=None, channel=1, imsize=(256, 256), Nclass=16, kernel_size=3, border_mode='same'):\n super(model_5_1, self).__init__()\n self.batchsize = batchsize\n self.channel = channel\n self.imsize = imsize\n self.Nclass = Nclass\n self.kernel_size = kernel_size\n self.border_mode = border_mode\n if border_mode == 'same':\n pad = kernel_size // 2\n else:\n pad = 0\n self.conv0 = nn.Conv2d(channel, 32, kernel_size, padding=pad)\n self.conv1 = nn.Conv2d(32, 64, kernel_size, padding=pad)\n self.conv2 = nn.Conv2d(96, 128, kernel_size, padding=pad)\n self.conv3 = nn.Conv2d(128, 128, kernel_size, padding=pad)\n self.conv4 = nn.Conv2d(128, 128, kernel_size, padding=pad)\n self.conv5 = nn.Conv2d(256, 512, kernel_size, padding=pad)\n self.bn0 = nn.BatchNorm2d(128)\n self.bn1 = nn.BatchNorm2d(256)\n self.bn2 = nn.BatchNorm2d(512)\n self.rnn0 = nn.LSTM(input_size=512, hidden_size=100, batch_first=True, bidirectional=True)\n self.rnn1 = nn.LSTM(input_size=200, hidden_size=100, batch_first=True, bidirectional=True)\n self.fc0 = nn.Linear(200, Nclass)\n\n def forward(self, x):\n x = F.max_pool2d(F.relu(self.conv0(x)), (2, 2))\n x1 = F.relu(self.conv1(x))\n x = tr.cat((x, x1), 1)\n x = F.max_pool2d(x, (2,2))\n x = F.relu(self.bn0(self.conv2(x)))\n\n x = F.max_pool2d(F.relu(self.conv3(x)), (2,2))\n x1 = F.relu(self.conv4(x))\n x = tr.cat((x, x1), 1)\n x = self.bn1(x)\n\n x = F.max_pool2d(F.relu(self.conv5(x)), (4,4))\n x = self.bn2(x)\n x = x.view(x.size(0), x.size(1), x.size(2) * x.size(3))\n x = tr.transpose(x, 1, 2)\n x, _ = self.rnn0(x)\n x = F.tanh(x)\n x, _ = self.rnn1(x)\n x = x[:,-1,:]\n x = F.tanh(x)\n x = F.softmax(self.fc0(x))\n return x\n<ECODE> And I use cross_entropy for loss, Adadelta for optimizer. For Theano version I use categorical_crossentropy for loss, Adadelta with same parameters for optimizer. Anyone has any thought on this problem?", "isAccepted": false, "likes": 1, "poster": "david-leon" }, { "contents": "You don’t need to use softmax in the forward operation. Maybe it is the cause of the problem.", "isAccepted": false, "likes": null, "poster": "kim.seonghyeon" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "david-leon" } ]
false
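A minimal sketch of the fix suggested above — return raw logits and let cross_entropy apply log_softmax itself — assuming a recent PyTorch release; the shapes and names below are illustrative, not taken from the thread:
<SCODE>
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(8, 16, requires_grad=True)   # hypothetical batch of 8 samples, 16 classes
target = torch.randint(0, 16, (8,))

# Correct: cross_entropy applies log_softmax internally, so forward() should return raw logits.
loss_ok = F.cross_entropy(logits, target)

# The slow-convergence pattern: softmax in forward() plus cross_entropy means the
# normalization is effectively applied twice, which flattens the gradients.
loss_double = F.cross_entropy(F.softmax(logits, dim=1), target)
print(loss_ok.item(), loss_double.item())
<ECODE>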
How to specify GPU usage?
null
[ { "contents": "I am training different models on different GPUs. I have 4 GPUs indexed as 0,1,2,3 I try this way: <SCODE>model = torch.nn.DataParallel(model, device_ids=[0,1]).cuda()\n<ECODE> But actual process use GPU index 2,3 instead. and if I use: <SCODE>model = torch.nn.DataParallel(model, device_ids=[1]).cuda()\n<ECODE> I will get the error: How to specify the GPU usage with index?", "isAccepted": false, "likes": 4, "poster": "zhoubinxyz" }, { "contents": "I am using Ubuntu 16.04. The GPU indexing are the same as you have. <SCODE>CUDA_VISIBLE_DEVICES=2,3 python xxx.py\n<ECODE> In addition, I don’t think that dataparallel accepts only one gpu.", "isAccepted": false, "likes": 6, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "zhoubinxyz" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "apaszke" }, { "contents": "I am using pytorch 0.1.9 and Ubuntu 16.04. When I use CUDA_VISIBLE_DEVICES=2,3 (0,1), ‘nvidia-smi’ tells me that gpus 0,1 (2,3) are used. I do not know the reason, but the gpu id used in nvidia-smi and the gpu id used in pytorch are reversed. You can check it if you use Ubuntu 16.04.", "isAccepted": false, "likes": 2, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "zhoubinxyz" }, { "contents": "I’m curious about this as well. Can you currently use fractional GPU usage as in tensorflow? The tf equivalent is something like this: <SCODE> with tf.device(FLAGS.device):\n gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=FLAGS.device_percentage)\n sess_cfg = tf.ConfigProto(allow_soft_placement=FLAGS.allow_soft_placement,\n gpu_options=gpu_options)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "jramapuram" }, { "contents": "<SCODE>dtype = torch.FloatTensor\n# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU\n<ECODE> Also, do I really have to track how GPUs are assigned, I am fine with torch just doing its stuff automagically. 
In particular I would love to see how: is made into a GPU version of it.", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "Dear All, I am getting the below error: RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCStorage.cu:58 I think pytorch is not communicating with the Nvidia GPU, please advise.", "isAccepted": false, "likes": null, "poster": "saurabhjha21" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "Yes you are correct, it was fine after i restart the machine", "isAccepted": false, "likes": null, "poster": "saurabhjha21" }, { "contents": "for a Unix command soln you can also do: <SCODE>export CUDA_VISIBLE_DEVICES=$i\n<ECODE> though of course that only works if the scripts are independent and stuff like that…otherwise the other solutions here are probably better…", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "CUDA_VISIBLE_DEVICES=$i python main.py", "isAccepted": false, "likes": null, "poster": "JannaShen" }, { "contents": "hi, do you have the answer?", "isAccepted": false, "likes": null, "poster": "labixiaoK" }, { "contents": "Is there anyone who knows that… When I attach below code in python file(in main.py), <SCODE>import os\nos.environment[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\nos.environment[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n<ECODE> The former one doesn’t specify(divide) GPU but, the latter one works well. It seems strange to me. Thanks ahead.", "isAccepted": false, "likes": null, "poster": "DonghunP" }, { "contents": "I wouldn’t recommend the first approach, since you would have to make sure these lines of code are imported before any other library, which might take the GPU. If some script imports PyTorch and these lines are executed afterwards, they won’t have any effect anymore. The second approach makes sure to mask the devices before running the Python script.", "isAccepted": false, "likes": 2, "poster": "ptrblck" }, { "contents": "Totally understand thanks!!", "isAccepted": false, "likes": 1, "poster": "DonghunP" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "LucasX" } ]
false
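A short sketch of the two selection routes discussed in this thread (shell-level masking versus explicit devices), assuming a recent PyTorch release; the `nn.Linear` stand-in model and script name are hypothetical:
<SCODE>
import torch
import torch.nn as nn

# From the shell, masking is the most robust option:
#   CUDA_VISIBLE_DEVICES=2,3 python train.py
# Inside the script, devices can also be picked explicitly:
model = nn.Linear(10, 10)                      # hypothetical stand-in for the real model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
x = torch.randn(4, 10, device=device)
y = model(x)

# For multi-GPU, device_ids index the devices *visible to the process*,
# i.e. after CUDA_VISIBLE_DEVICES masking has been applied.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1])
<ECODE>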
Initialize the weights of nn.ConvTranspose2d
null
[ { "contents": "how shoud I initalize the weights of nn.ConvTranspose2d ? like nn.Conv2d? is this any special for Pytorch <SCODE>for m in self.modules():\n if isinstance(m, nn.Conv2d):\n n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n m.weight.data.normal_(0, math.sqrt(2. / n))\n elif isinstance(m, nn.BatchNorm2d):\n m.weight.data.fill_(1)\n m.bias.data.zero_()\n<ECODE> infer : the bias will automatically initialize with random value . is that right? <SCODE>def weights_initG(m):\n for p in m.modules():\n if isinstance(p,nn.Conv2d):\n n = p.kernel_size[0] * p.kernel_size[1] * p.out_channels\n p.weight.data.normal_(0, math.sqrt(2. / n))\n elif isinstance(p,nn.BatchNorm2d):\n p.weight.data.normal_(1.0, 0.02)\n p.bias.data.fill_(0)\n elif isinstance(p,nn.ConvTranspose2d):\n n=p.kernel_size[1]\n factor = (n+1)//2\n if n%2 ==1:\n center = factor - 1\n else :\n center = factor -0.5\n og = np.ogrid[:n,:n]\n weights_np=(1-abs(og[0]-center)/factor)* (1-abs(og[1]-center)/ factor)\n p.weight.data.copy_(torch.from_numpy(weights_np))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "huxc_ustc" }, { "contents": "<SCODE>def weights_init(m):\n classname = m.__class__.__name__\n if classname.find('Conv') != -1:\n m.weight.data.normal_(0.0, 0.02)\n elif classname.find('BatchNorm') != -1:\n m.weight.data.normal_(1.0, 0.02)\n m.bias.data.fill_(0)\n\nnetG.apply(weights_init)\n<ECODE> it should work.", "isAccepted": false, "likes": 1, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "acgtyrant" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bille_du" }, { "contents": "<SCODE>m.weight.data.normal_(1.0, 0.02).clamp_(min=0,max=2)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "ok,thank you!by the way,if I want to clamp [-1, -0.1] and [0.1,1] 。How to operate?", "isAccepted": false, "likes": null, "poster": "bille_du" }, { "contents": "<SCODE>a.clamp_(min=-1,max=1)\na[a.abs()<0.1]=t.sign(a[a.abs()<0.1])*0.1\n<ECODE>", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bolt25" }, { "contents": "<SCODE>lin = nn.Linear(10, 10, bias=False)\n\nwith torch.no_grad():\n lin.weight.normal_(0.0, 1.0)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bolt25" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ptrblck" } ]
false
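A sketch of the same `apply`-based initialization using the current `torch.nn.init` helpers; the DCGAN-style std of 0.02 and the toy `nn.Sequential` network are assumptions, not taken from the thread:
<SCODE>
import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.ones_(m.weight)
        nn.init.zeros_(m.bias)
    elif isinstance(m, nn.ConvTranspose2d):
        nn.init.normal_(m.weight, 0.0, 0.02)   # DCGAN-style init; swap in a bilinear kernel if preferred
        if m.bias is not None:
            nn.init.zeros_(m.bias)

net = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ConvTranspose2d(16, 3, 4, stride=2)
)
net.apply(init_weights)
<ECODE>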
How to delete PyTorch objects correctly from memory
null
[ { "contents": "I’m having an issue with properly deleting PyTorch objects from memory. With this Tensor: Then delete the object: CUDA memory is not freed up. Is there a clean way to delete a PyTorch object from CUDA memory?", "isAccepted": false, "likes": 3, "poster": "111" }, { "contents": "Hi, It is because the cuda backend uses a caching allocator. This means that the memory is freed but not returned to the device.", "isAccepted": false, "likes": 18, "poster": "albanD" }, { "contents": "Ah Thanks a lot! It’s really helpful!", "isAccepted": false, "likes": null, "poster": "111" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "nionjo" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "SANTOSH_S" }, { "contents": "Hi,", "isAccepted": false, "likes": 1, "poster": "albanD" } ]
false
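A sketch of the caching-allocator behaviour described above, assuming a recent PyTorch release that provides `torch.cuda.empty_cache()` and the memory statistics API (neither existed when the thread was written):
<SCODE>
import torch

x = torch.randn(1024, 1024, device="cuda")
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())

del x                      # memory is released back to PyTorch's caching allocator
torch.cuda.empty_cache()   # hand the cached blocks back to the driver so nvidia-smi drops
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())
<ECODE>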
What’s the difference between nn.ReLU() and nn.ReLU(inplace=True)?
null
[ { "contents": "", "isAccepted": true, "likes": 32, "poster": "yunjey" }, { "contents": "", "isAccepted": true, "likes": 81, "poster": "apaszke" }, { "contents": "", "isAccepted": true, "likes": 6, "poster": "laoreja" }, { "contents": "is not an in place operation, because you use the same variable name, but it’s not the same variable underneath. You just point your x name to a new variable, the old one is still in memory (because it’s referenced by the pytorch graph)", "isAccepted": true, "likes": 14, "poster": "cdancette" }, { "contents": "", "isAccepted": true, "likes": 7, "poster": "laoreja" }, { "contents": "I’m a newbie of pytorch. So I wonder whether nn.ReLI(inplace=True) would do harm to backprop? And what about F.ReLU(inplace)?", "isAccepted": true, "likes": 6, "poster": "harryhan618" }, { "contents": "", "isAccepted": true, "likes": 5, "poster": "rasbt" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "yxchng" }, { "contents": "For relu, when input is negative, both the grad and output should be zero, grads should stop propagating from there, so inplace doesn’t hurt anything while saves memory.", "isAccepted": true, "likes": 11, "poster": "streaver91" }, { "contents": "Is this an in-place operation? Thanks!", "isAccepted": true, "likes": null, "poster": "Jay_Timbadia" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "cdancette" }, { "contents": "You can check on pytorch: <SCODE>>>> a = torch.tensor(1.0)\n>>> torch.sigmoid(a)\ntensor(0.7311)\n>>> print(a)\ntensor(1.0)\n<ECODE> <SCODE>>>> a = torch.tensor(1.0)\n>>> torch.sigmoid_(a)\ntensor(0.7311)\n>>> print(a)\ntensor(0.7311)\n<ECODE> In the first case, a still has its original value, while in the second case, a is different.", "isAccepted": true, "likes": 6, "poster": "cdancette" }, { "contents": "In case y = F.relu(x, inplace=True), it won’t hurt anything if value of x should always be positive in your computational graph. However, some other node that shares x as input while it requires x has both positive and negative value, then your network may malfunction. For example, in the following situation, If (1) is declared first and execuated first, then value of x is changed, then z may have incorrect expected value.", "isAccepted": true, "likes": 1, "poster": "helson73" }, { "contents": "Hi all, Even though there are multiple answers, I will explain my problem here. I am facing the following error message: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [4, 64, 3, 3]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! 
<SCODE> def forward(self, z, xlr=None, logdet=0, logpz=0, eps=None, reverse=False,\n use_stored=False):\n\n self.h_new, self.c_next = self.conv_lstm(z, (self.h, self.c))\n\n # Encode\n if not reverse:\n for i in range(self.L):\n print(\"Level\", i)\n for layer in self.level_modules[i]:\n\n if isinstance(layer, modules.Squeeze):\n z = layer(z, reverse=False)\n self.h_new = layer(self.h_new, reverse=False)\n\n elif isinstance(layer, FlowStep):\n z, logdet = layer(z, lr_feat_map=self.h_new, # lr_downsampled_feats[i + 1], # TODO: change this part\n x_lr=xlr, logdet=logdet, reverse=False)\n\n elif isinstance(layer, modules.GaussianPrior):\n z, logdet, logpz = layer(z, logdet=logdet, logpz=logpz,\n lr_feat_map=self.h_new, #lr_downsampled_feats[i + 1],\n eps=eps, reverse=False)\n self.h = self.last_squeezer(self.h_new, reverse=True)\n self.c = self.c_next\n<ECODE> Do you think it would be the best to just pass on the hidden and context states through the function outputs? I have already set loss.mean().backward(retain_graph=True) and skimmed the code for other inplace operations. Any help would be much appreciated !! Please let me know if further code snippets are required.", "isAccepted": true, "likes": null, "poster": "cwinkler" } ]
true
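A small sketch contrasting the two modes, assuming a recent PyTorch release; the tensor here is illustrative:
<SCODE>
import torch
import torch.nn as nn

x = torch.randn(4)
out = nn.ReLU()(x)                  # allocates a new tensor; x is left untouched
_ = nn.ReLU(inplace=True)(x)        # overwrites x's storage, saving one activation-sized buffer
print(x)                            # x now holds the clamped values

# In-place is only safe when nothing else in the graph still needs the
# pre-activation values; autograd raises a RuntimeError if that assumption is violated.
<ECODE>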
Freeze the learnable parameters of resnet and attach it to a new network
vision
[ { "contents": "I would like to know is this approach correct or not? Actually, in my point of view, only the 6 or 8 last layers have learnable parameters. Am I right?", "isAccepted": false, "likes": 2, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "<SCODE>for param in net.features.parameters():\n param.requires_grad = False\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "and here is its error:", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "My code: Error:", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "<SCODE>parameters = ifilter(lambda p: p.requires_grad, net.parameters())\n<ECODE>", "isAccepted": false, "likes": 10, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "apaszke" }, { "contents": "I am wondering whether to set .eval() for those frozen layers since they may still update their running mean and running var during training while not learning their parameters.", "isAccepted": false, "likes": 1, "poster": "James_Chen" } ]
false
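A sketch of the freezing recipe from this thread in current PyTorch, including the optimizer filtering and the BatchNorm `eval()` caveat raised in the last post; the ResNet-18 backbone and 10-class head are illustrative assumptions:
<SCODE>
import torch
import torchvision

model = torchvision.models.resnet18()          # load pretrained weights here in real use

for param in model.parameters():               # freeze the backbone
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 10)   # fresh head, trainable by default

# Only hand trainable parameters to the optimizer (the Python-3 equivalent of
# the ifilter(...) trick in the thread).
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)

# Optionally stop frozen BatchNorm layers from updating their running statistics.
for m in model.modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        m.eval()
<ECODE>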
Writing custom cuda kernels for pytorch
null
[ { "contents": "I am trying to write custom cuda kernel for pytorch for a specific computation. Is there any available documentation for writing custom cuda kernels for pytorch?", "isAccepted": false, "likes": null, "poster": "Gaurav_Pandey" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Matrix Vector Basic operations
null
[ { "contents": "Thanks a lot !", "isAccepted": false, "likes": null, "poster": "D_Kay" }, { "contents": "Hi,", "isAccepted": false, "likes": 2, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "D_Kay" } ]
false
Problem on Variable.grad.data?
null
[ { "contents": "<SCODE>print(x.grad)\nNone\n\ndir(x.grad)\n['__class__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']\n<ECODE> Whereas is it working fine on my personal computer , on PyTorch ‘0.1.9+aaf41c6’. Is this an issue or there is something that i Don’t know about? Thanks a lot !", "isAccepted": false, "likes": 4, "poster": "D_Kay" }, { "contents": "Hi, <SCODE>x=torch.Tensor(5,5).normal_()\nx=Variable(x,requires_grad=True)\nprint(hasattr(x.grad, \"data\")) # prints False\nx.add(1).sum().backward()\nprint(hasattr(x.grad, \"data\")) # prints True\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Is there a way to easily init the gradient buffer ? I also wondered where I can get the latest patch notes? The last time I checked on github, there were only the notes for the 0.1.9 version Thanks again !", "isAccepted": false, "likes": null, "poster": "D_Kay" }, { "contents": "Hi, For the 0.1.10 release note, I think Soumith is currently working on them and they should be on github soon.", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "I managed to get access to it by doing a “dummy” backprop in the _ _init _ _. That’s a quick and dirty solution, and I’m open to better ones. Thanks again !", "isAccepted": false, "likes": null, "poster": "D_Kay" }, { "contents": "No, you can’t initialize it manually, but I don’t really see why would you need to do that (it’d be a tensor of zeros anyway). You can still access and modify the gradient, but only once the backward has been computed. Doesn’t this work for you: <SCODE>output = model(input)\nloss(output).backward()\nmodel.weight.grad.data # not None anymore. Can be modified\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ypxie" }, { "contents": "I see, that’s actually a good point, we haven’t thought about that. We’ll have to solve it somehow", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "How did you perform the “dummy” backprop?", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "I used the following workaround. <SCODE> for shared_param in shared_model.parameters():\n if not hasattr(shared_param.grad, 'data'):\n dummy_loss = 0\n for this_para in shared_model.parameters():\n dummy_loss += torch.mean(this_para)\n dummy_loss.backward()\n break\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "So when exactly is it a problem? How are you implementing A3C? Are you sharing the main model parameters or does your training loop body look like this: <SCODE>loss = fn(input)\nloss.backward()\ncopy_grads_to_shared_model(model, shared_model)\nshared_model_optimizer.step()\ncopy_params_to_local_model(model, shared_model)\n\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "D_Kay" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thank you~ For the safe issue, i noticed that DM’s paper explicitly said they don’t put a lock on the shared weights. For your solution to the shared grad to pytorch-a3c. 
<SCODE>def ensure_shared_grads(model, shared_model):\n for param, shared_param in zip(model.parameters(), shared_model.parameters()):\n if shared_param.grad is not None:\n return\n shared_param._grad = param.grad\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jingweiz" }, { "contents": "So what’s the verdict here? Should we just remove the if condition? Because I can see no reason for the check. From what I’ve seen _grad is always accessible, and the global parameter grad is always updated to match the local, after I remove the if condition.", "isAccepted": false, "likes": null, "poster": "longhuei" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jhliew" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "xuehy" } ]
false
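A short sketch of the lazy-gradient behaviour discussed above, in the current tensor API (no `Variable` wrapper needed):
<SCODE>
import torch

x = torch.randn(5, 5, requires_grad=True)
print(x.grad)              # None: gradient buffers are allocated lazily

(x * 2).sum().backward()
print(x.grad.sum())        # now a 5x5 tensor filled with 2s

x.grad.zero_()             # reuse (and reset) the buffer between iterations
<ECODE>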
Weird difference between function.forward(input) and function(input)
null
[ { "contents": "If I create a function and I apply it by calling forward method, the gradient computed seems independent of my backward() method and seems correct, even if the backward() was incorrect. For example, with this code: <SCODE>class Cube(Function):\n def forward(self,input):\n self.save_for_backward(input)\n return input*input*input\n\n def backward(self, grad_output):\n input, = self.saved_tensors\n # wrong backward function:\n return grad_output\n\ncube = Cube()\ninput = Variable(torch.ones(2,2).double(), requires_grad=True)\noutput = cube(input).sum()\noutput.backward()\nprint(input.grad) # gives [[1,1],[1,1]] what does my backward do\n\ninput.grad.data.zero_()\noutput = cube.forward(input).sum()\noutput.backward()\nprint(input.grad) # gives [[3,3],[3,3]] the good gradient ?!\n<ECODE>", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" } ]
false
Error using torch.load on update
null
[ { "contents": "I recently updated from pytorch 0.1.8 to 0.1.10 and I now get the following somewhat confusing error when I try to call torch.load on a file serialized with 0.1.8: <SCODE>----> 1 a = torch.load(\"file.pth7\")\n\n/usr/local/lib/python2.7/dist-packages/torch/serialization.pyc in load(f, map_location, pickle_module)\n 220 f = open(f, 'rb')\n 221 try:\n--> 222 return _load(f, map_location, pickle_module)\n 223 finally:\n 224 if new_fd:\n\n/usr/local/lib/python2.7/dist-packages/torch/serialization.pyc in _load(f, map_location, pickle_module)\n 353 # try the legacy loader first, which only works if f is a tarfile\n 354 try:\n--> 355 return legacy_load(f)\n 356 except tarfile.TarError:\n 357 pass\n\n/usr/local/lib/python2.7/dist-packages/torch/serialization.pyc in legacy_load(f)\n 297 args = pickle_module.load(f)\n 298 key, location, storage_type = args\n--> 299 obj = storage_type._new_with_file(f)\n 300 obj = restore_location(obj, location)\n 301 deserialized_objects[key] = obj\n\nRuntimeError: Success\n<ECODE>", "isAccepted": false, "likes": null, "poster": "temerick" }, { "contents": "Huh, that’s surprising. No idea what could have caused that at the moment.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>save_checkpoint({\n 'epoch': epoch + 1,\n 'arch': options['model']['arch'],\n 'state_dict': model.state_dict(),\n 'best_prec1': best_prec1,\n }, is_best)\n<ECODE> But when I try to load it with: <SCODE>torch.load(filename)\n<ECODE> I get this error: <SCODE> Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/home/cadene/anaconda3/envs/vqa/lib/python3.6/site-packages/torch/serialization.py\", line 222, in load\n return _load(f, map_location, pickle_module)\n File \"/home/cadene/anaconda3/envs/vqa/lib/python3.6/site-packages/torch/serialization.py\", line 377, in _load\n deserialized_objects[key]._set_from_file(f, offset)\nRuntimeError: Success\n<ECODE> It could be due to the fact that the data in my state_dict are of type torch.cuda.FloatTensor.", "isAccepted": false, "likes": null, "poster": "Cadene" } ]
false
Adding Gaussian Noise in CIFAR10 dataset
null
[ { "contents": "If I want to add some Gaussion noise in the CIFAR10 dataset which is loaded by torchvision, how should I do it? Or, if I have defined a dataset by torch.utils.data.TensorDataset, how can I add more data samples there? Is any function like append( )? Thanks.", "isAccepted": false, "likes": null, "poster": "Shiyu" }, { "contents": "There’s a few ways you can do this. <SCODE>import random\n\nclass RandomNoise(object):\n def __init__(self, probability):\n self.probabilit = probability\n def __call__(self, img):\n if random.random() <= self.probability:\n return img.clone().normal_(0,1)\n return img\n \ncifar10_dataset = torchvision.datasets.CIFAR10(root=\"...\", download=True, transform=input_transform)\n<ECODE> <SCODE>def __getitem__(self, index):\n if index < 50000:\n return super(MySubClass, self).__getitem__(index)\n else:\n return torch.Tensor(size).normal_(0,1), torch.Tensor(1).fill_(class_label)\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "alykhantejani" }, { "contents": "Hi, I use torchvision.transform to do it, it has a lambda function which you can customized a funciton to add noise to the data. But the CIFAR10 image is small just 32 * 32 * 10, after add sp or gaussion noise on them, the final result seems like not well .", "isAccepted": false, "likes": null, "poster": "maxiaoyuzdz" } ]
false
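A sketch of a custom noise transform in the style suggested above, assuming current torchvision transforms; the noise std of 0.1 is an arbitrary choice:
<SCODE>
import torch
import torchvision.transforms as transforms

class AddGaussianNoise:
    """Adds N(mean, std^2) noise to an already tensorized image."""
    def __init__(self, mean=0.0, std=0.1):
        self.mean, self.std = mean, std

    def __call__(self, img):
        return img + torch.randn_like(img) * self.std + self.mean

transform = transforms.Compose([
    transforms.ToTensor(),
    AddGaussianNoise(0.0, 0.1),
])
# dataset = torchvision.datasets.CIFAR10(root="./data", download=True, transform=transform)
<ECODE>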
Implementation of a function like `numpy.roll`
null
[ { "contents": "Ultimately, I want to implement image gradient with forward difference and Neumann boundary condition. For example, the numpy version of it is as follows: <SCODE>def grad(u):\n # u: 2-d images\n ux = np.roll(u, -1, axis=1) - u\n uy = np.roll(u, -1, axis=0) - u\n ux[:,-1] = 0\n uy[-1,:] = 0\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "Ja-Keoung_Koo" }, { "contents": "Thanks for your reply, although I checked before. As you said, it seems hard to implement.", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "If should be quite simple to implement yourself. Just slice the tensor into two pieces, swap them, and cat along the same dimension that you used to split.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Ryan_Burgert" }, { "contents": "haven’t tested this extensively, but this seems to cover if you just want a single split. logic if shift is negative could probably be cleaned up a little <SCODE>def roll(tensor, shift, axis):\n if shift == 0:\n return tensor\n\n if axis < 0:\n axis += tensor.dim()\n\n dim_size = tensor.size(axis)\n after_start = dim_size - shift\n if shift < 0:\n after_start = -shift\n shift = dim_size - abs(shift)\n\n before = tensor.narrow(axis, 0, dim_size - shift)\n after = tensor.narrow(axis, after_start, shift)\n return torch.cat([after, before], axis)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "azban" }, { "contents": "Simple solution to roll around first axis: <SCODE>def roll(x, n): \n return torch.cat((x[-n:], x[:-n]))\n<ECODE> Test like: <SCODE>x = torch.arange(5)\nprint(\"Orig:\", x)\nprint(\"Roll 2:\", roll(x, 2))\nprint(\"Roll -2:\", roll(x, -2))\n<ECODE> Outputs: <SCODE>Orig: tensor([0, 1, 2, 3, 4])\nRoll 2: tensor([3, 4, 0, 1, 2])\nRoll -2: tensor([2, 3, 4, 0, 1])\n<ECODE> To roll around second axis, use: <SCODE>def roll_1(x, n): \n return torch.cat((x[:, -n:], x[:, :-n]), dim=1)\n<ECODE> It probably can be generalised, but I didn’t need it.", "isAccepted": false, "likes": 7, "poster": "jaromiru" }, { "contents": "<SCODE>def roll(x: torch.Tensor, shift: int, dim: int = -1, fill_pad: Optional[int] = None):\n\n if 0 == shift:\n return x\n\n elif shift < 0:\n shift = -shift\n gap = x.index_select(dim, torch.arange(shift))\n if fill_pad is not None:\n gap = fill_pad * torch.ones_like(gap, device=x.device)\n return torch.cat([x.index_select(dim, torch.arange(shift, x.size(dim))), gap], dim=dim)\n\n else:\n shift = x.size(dim) - shift\n gap = x.index_select(dim, torch.arange(shift, x.size(dim)))\n if fill_pad is not None:\n gap = fill_pad * torch.ones_like(gap, device=x.device)\n return torch.cat([gap, x.index_select(dim, torch.arange(shift))], dim=dim)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "Alexander_Reshytko" }, { "contents": "I tried to use yours, but I get a compilation error saying that Optional is not defined.", "isAccepted": false, "likes": null, "poster": "Zuanazzi" }, { "contents": "<SCODE>from typing import Optional\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Alexander_Reshytko" }, { "contents": "<SCODE>from typing import Optional\n\ndef roll(x: torch.Tensor, shift: int, dim: int = -1, fill_pad: Optional[int] = None):\n\n device = x.device\n \n if 0 == shift:\n return x\n\n elif shift < 0:\n shift = -shift\n gap = x.index_select(dim, torch.arange(shift, device=device))\n if fill_pad is not None:\n gap = fill_pad * torch.ones_like(gap, device=device)\n return torch.cat([x.index_select(dim, 
torch.arange(shift, x.size(dim), device=device)), gap], dim=dim)\n\n else:\n shift = x.size(dim) - shift\n gap = x.index_select(dim, torch.arange(shift, x.size(dim), device=device))\n if fill_pad is not None:\n gap = fill_pad * torch.ones_like(gap, device=device)\n return torch.cat([gap, x.index_select(dim, torch.arange(shift, device=device))], dim=dim) \n<ECODE>", "isAccepted": false, "likes": 1, "poster": "tsnowak" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "lucasb-eyer" } ]
false
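Since this thread was written, `torch.roll` has been added, so the manual `narrow`/`cat` helpers above are no longer necessary. A sketch of the original forward-difference example with it:
<SCODE>
import torch

x = torch.arange(5)
print(torch.roll(x, 2))               # tensor([3, 4, 0, 1, 2])

u = torch.arange(16.).reshape(4, 4)   # toy "image"
ux = torch.roll(u, -1, dims=1) - u    # forward difference along x
uy = torch.roll(u, -1, dims=0) - u    # forward difference along y
ux[:, -1] = 0                         # Neumann boundary condition
uy[-1, :] = 0
<ECODE>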
Embeddings with half precision
null
[ { "contents": "<SCODE># Long \nembedding = nn.Embedding(embedding_dim=5, num_embeddings=10)\nembedding.cuda()\n\nx_device = torch.LongTensor([1,2,0,1]).cuda()\nxv = Variable(x_device)\no = embedding(xv)\nt = torch.zeros(o.size()).cuda()\no.backward(t)\n\n# Half Crahes\nembedding = nn.Embedding(embedding_dim=5, num_embeddings=10)\nembedding.cuda().half()\n\nx_device = torch.LongTensor([1,2,0,1]).cuda()\nxv = Variable(x_device)\no = embedding(xv)\nprint(o)\nt = torch.zeros(o.size()).cuda().half()\no.backward(t)<ECODE>", "isAccepted": false, "likes": null, "poster": "skaae" }, { "contents": "<SCODE>if grad_output.is_cuda:\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "Thanks. I’ll try to understand the Embedding code better and create a PR.", "isAccepted": false, "likes": null, "poster": "skaae" } ]
false
Swap axes in pytorch?
null
[ { "contents": "Is there equivalent operation in pytorch?", "isAccepted": true, "likes": 6, "poster": "Response777" }, { "contents": "", "isAccepted": true, "likes": 22, "poster": "apaszke" }, { "contents": "Thanks, I have just checked the Docs, but it seems that I just miss it…", "isAccepted": true, "likes": null, "poster": "Response777" }, { "contents": "<SCODE>a = torch.rand(1,2,3,4)\nprint(a.transpose(0,3).transpose(1,2).size())\nprint(a.permute(3,2,1,0).size())\n<ECODE>", "isAccepted": true, "likes": 24, "poster": "fmassa" }, { "contents": "Indeed, it can be a shortcut to use <SCODE>tensor.transpose_(0, 1)\n<ECODE> instead of <SCODE>tensor = tensor.transpose(0, 1)\n<ECODE>", "isAccepted": true, "likes": 4, "poster": "fmassa" }, { "contents": "", "isAccepted": true, "likes": 1, "poster": "rahulbhalley" } ]
true
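A compact sketch of the `transpose`/`permute` distinction discussed above:
<SCODE>
import torch

a = torch.rand(1, 2, 3, 4)
print(a.transpose(1, 3).size())                 # torch.Size([1, 4, 3, 2]): swap exactly two axes
print(a.permute(3, 2, 1, 0).size())             # torch.Size([4, 3, 2, 1]): arbitrary reordering
print(a.permute(3, 2, 1, 0).contiguous().view(4, -1).size())  # copy into contiguous memory before view
<ECODE>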
CUBLAS error if I replace torch.svd() with torch.symeig()
null
[ { "contents": "I am trying to get eigenvalues and eigenvectors of a real symmetric matrix A. Any idea about this? Thanks in advance", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "What’s your PyTorch version? Did you do a binary or source install?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "But still have the same error message.", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "If possible, can you give me a 10-line program that produces this error? I will look at it and try to resolve it next week.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Because I changed a lot, I cannot reproduce that error, unfortunately. If I have the same problem, I will continue this post with reproduced code. Thanks!", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "Hey, I ran into this problem on a current build of pytorch (October 2017). Here’s a minimal repro - it’s triggered for me when the symmetric matrix is 129x129 or larger.", "isAccepted": false, "likes": null, "poster": "David_Bau" } ]
false
CUDA error when calling backward() on Tesla M2070
null
[ { "contents": "I have two systems, where the first has GeForce GTX 780 Ti with CUDA 8.0 (driver version: 375.26) and the other has Tesla M2070 with CUDA 7.5.18 (driver version: 352.99). I tested the following simple code: <SCODE>import torch\nfrom torch.autograd import Variable\na = Variable(torch.randn(3,4,5), requires_grad=True).cuda()\nb = torch.randn(3,4,5).cuda()\na.backward(b)\n<ECODE> The code works on the first machine but failed on the other machine as follows: <SCODE>THCudaCheck FAIL file=/users/PAS0396/osu7806/pytorch/torch/lib/THC/generic/THCTensorCopy.c line=65 error=46 : all CUDA-capable devices are busy or unavailable\nTraceback (most recent call last):\n File \"test.py\", line 5, in <module>\n a.backward(b)\n File \"/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py\", line 163, in backward\n return grad_output.cpu()\n File \"/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/tensor.py\", line 31, in cpu\n return self.type(getattr(torch, self.__class__.__name__))\n File \"/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 276, in type\n return super(_CudaBase, self).type(*args, **kwargs)\n File \"/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/_utils.py\", line 33, in _type\n return new_type(self.size()).copy_(self, async)\nRuntimeError: cuda runtime error (46) : all CUDA-capable devices are busy or unavailable at /users/PAS0396/osu7806/pytorch/torch/lib/THC/generic/THCTensorCopy.c:65\n<ECODE>", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "You might try building form source but it would require some additional patches (look for closed issues in the main repo). But we don’t support them officially.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>export CMAKE_PREFIX_PATH=/home/kimjook/anaconda3\nconda install numpy mkl setuptools cmake gcc cffi\nconda install -c soumith magma-cuda75\ngit clone https://github.com/pytorch/pytorch\ncd pytorch\npip install -r requirements.txt\npython setup.py install\n<ECODE> Thanks! (I understand that supporting old devices is annoying, but I am somewhat frustrated since my almost the same model worked well on Torch7 doesn’t work on PyTorch.)", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "That’s the issue I was thinking about, but maybe you don’t need it for some reason.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fazlerabbitanjil" }, { "contents": "We should update that part. I’m doing it now. We started with the commitment of cc >= 2.0, but it has been infeasible, as 2.0 is simply too old and several newer APIs dont work on it.", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
REINFORCE in batch mode
null
[ { "contents": "I have a feeling that this code need to be modified: <SCODE>for action, r in zip(model_Net.saved_actions, rewards):\n action.reinforce(r)\noptimizer.zero_grad()\nautograd.backward(model_Net.saved_actions, [None for _ in model_Net.saved_actions])\n<ECODE> My question is how to pass the ‘r’ and ‘action’ in a batch mode in back-propagation ? It might related to reshape the ‘action’ values in a way to allow back propagation. Right now I came up with an idea that is to compute ‘r’ and ‘action’ in batch mode in forward passes, but update the gradients sequentially (1 sample at a time) in back-propagations (e.g., run ‘finish_episode’ several times). But it’s obviously not optimal.", "isAccepted": false, "likes": 1, "poster": "sscommanderh" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "Nothing, complicated, (or long to train like gym environments or Atari), just say a synthetic linear regression dataset with some noise added? This would be really helpful to newbs !", "isAccepted": false, "likes": null, "poster": "AjayTalati" } ]
false
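A sketch of a batched policy-gradient loss using `torch.distributions`, which replaced the old `.reinforce()` API; the tiny policy, states and returns below are hypothetical placeholders:
<SCODE>
import torch
import torch.nn as nn
from torch.distributions import Categorical

policy = nn.Linear(4, 2)          # hypothetical tiny policy network
states = torch.randn(32, 4)       # a batch of observed states
returns = torch.randn(32)         # hypothetical (already baselined) returns

dist = Categorical(logits=policy(states))
actions = dist.sample()
loss = -(dist.log_prob(actions) * returns).mean()   # batched REINFORCE objective
loss.backward()
<ECODE>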
How does the backward pass get plumbed in and other rudimentary loss function questions
null
[ { "contents": "Since I’m just imitating what I see elsewhere there’s a few things I don’t understand about loss functions. My second question is what the last two lines are doing. What does the view(*repeat(… do? And what does mul_(tensor.expand_as(…) do? Last question is, does the DiceLoss class make sense? When I first started looking at adding a new loss function a few days ago it seemed overwhelming, but this seems like a rather trivial piece of code. Thanks in advance.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
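A sketch of a self-contained soft Dice loss in the spirit of the class being discussed, assuming binary segmentation with probabilities and targets of matching shape; the epsilon and tensor shapes are assumptions:
<SCODE>
import torch
import torch.nn as nn

class DiceLoss(nn.Module):
    """Soft Dice loss for binary segmentation; `probs` and `targets` share a shape."""
    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, probs, targets):
        probs = probs.reshape(probs.size(0), -1)
        targets = targets.reshape(targets.size(0), -1).float()
        inter = (probs * targets).sum(dim=1)
        union = probs.sum(dim=1) + targets.sum(dim=1)
        dice = (2 * inter + self.eps) / (union + self.eps)
        return 1 - dice.mean()

logits = torch.randn(2, 1, 8, 8, requires_grad=True)
targets = (torch.rand(2, 1, 8, 8) > 0.5)
loss = DiceLoss()(torch.sigmoid(logits), targets)
loss.backward()
<ECODE>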
Bug in int8 ndarray to tensor conversion or in my understanding?
null
[ { "contents": "I can obviously preallocate an appropriately sized CharTensor and then copy myself, but it seems like that shouldn’t be necessary.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "Trying to work around it with a ByteTensor as dst and ndarray of uint8 as follows: <SCODE>def copy_tensor(dst, src):\n for z in range(Z_MAX):\n for y in range(Y_MAX):\n for x in range(X_MAX):\n val = src[z][y][x]\n dst[z][y][x] = val\n<ECODE> The labels masks are 192x160x192, so it really would be nicer to keep them as single byte tensors.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "this does work - albeit slowly. And it’s actually so slow as to be useless. <SCODE>def copy_tensor(dst, src):\n for z in range(Z_MAX):\n for y in range(Y_MAX):\n for x in range(X_MAX):\n val = src[z][y][x]\n if val != 0:\n dst[z][y][x] = 1\n<ECODE> It appears that the only ndarray to tensor conversion that actually works is floating point.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "Hi,", "isAccepted": false, "likes": 2, "poster": "albanD" }, { "contents": "Converting to np.uint8 and then calling from_numpy works. Thanks.", "isAccepted": false, "likes": 1, "poster": "mattmacy" }, { "contents": "Thank you very much", "isAccepted": false, "likes": 1, "poster": "xiao1228" } ]
false
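A sketch of the workaround that resolved the thread — cast to `np.uint8` on the NumPy side and then share memory with `from_numpy` — with the slow triple loop replaced by vectorized ops; the random mask stands in for the real label volume:
<SCODE>
import numpy as np
import torch

mask = (np.random.rand(192, 160, 192) > 0.5).astype(np.uint8)
t = torch.from_numpy(mask)          # zero-copy, dtype torch.uint8, one byte per voxel
print(t.dtype, t.shape)
print(t.long().sum().item())        # vectorized count of non-zero voxels
<ECODE>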
Anaconda Stopping kernel When I use my Customized Cost Function
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "Try running the code outside of an iPython notebook. It should print the full error then", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "Try running this: <SCODE>gdb python\n<some output printed here>\n> r your_script.py\n<some more output. it will tell you that you got SIGSEV and drop into shell again>\n> where\n<a few lines looking like `#0 THPFloatTensor... `. Paste it in a GitHub gist\nand put a link in this thread>\n\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "mderakhshani" } ]
false
How to reduce memory usage when one large network is called many times
null
[ { "contents": "Hi, there, I am a newbie to PyTorch and I am really impressed by its speed and flexibility. Amazing work! But I have one question about the memory usage. Any suggestions would be welcome! Thank you! <SCODE>model = AntoEncoderDecoder()\noutput = Variable(torch.zeros(256,256))\nfor i in range(iterations):\n output = model( output )<ECODE>", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Yes, it is 0.1.10. I guess theano has optimized every computation graph a lot during the function compilation.", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Maybe. It’s hard to say what’s happening without having the model details.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Fast data loader for Imagenet
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "longcw" }, { "contents": "How many workers are you using? What kind of disk are you using to store the dataset?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "Yes, the data transforms can be quite expensive.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Unfortunately all those mass consumption dataset loaders and feeders are far from optimal, everyone serious about this ends up writing their own performant code (or compensate by getting expensive SSDs and CPUs) but don’t publish it because it’s embarrassingly written. People often say you need to use SSDs and whatnot, but I use a couple of ancient HDDs and it’s enough to feed a duo 1080 setup. It’s just troublesome to write a properly multi threaded application for this, you don’t even need to do it in C if you have a good CPU.", "isAccepted": false, "likes": 2, "poster": "Veril" }, { "contents": "More workers seems slower. The data loading is extremely slow with a low cpu load, which means the dataloader can’t make full use of the cpu.", "isAccepted": false, "likes": null, "poster": "Chong_Lv" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "gngdb" }, { "contents": "did you store the images file after preprocessing?", "isAccepted": false, "likes": null, "poster": "lkywk" }, { "contents": "Yeah, but it’s a one-off cost. I just copied it to the other servers after.", "isAccepted": false, "likes": null, "poster": "gngdb" }, { "contents": "I run the script, but I meet the problem.", "isAccepted": false, "likes": null, "poster": "lkywk" }, { "contents": "Could be that you’re missing protobuf. You could install it with conda. If that works, I’ll add it to the README, I don’t have time to test it myself right now.", "isAccepted": false, "likes": null, "poster": "gngdb" }, { "contents": "I fixed it days ago. I think it is caused by protobuf.", "isAccepted": false, "likes": null, "poster": "lkywk" }, { "contents": "Hi, I’ve used your data but the loading speed is still 4s for batch=256, num_worker=1, which I think is not fast enough, is it normal?", "isAccepted": false, "likes": null, "poster": "charlesjiangxm" }, { "contents": "I used 4 workers, and each minibatch took 0.59s to process, including the time for forward and backward propagation. Maybe you should try with more workers?", "isAccepted": false, "likes": null, "poster": "gngdb" }, { "contents": "Hi, I use your method. But when I train the imagenet, I meet the problem", "isAccepted": false, "likes": null, "poster": "Chenzhi_Jiang" }, { "contents": "I fixed it. Just need to downgrade msgpack to 0.5.6", "isAccepted": false, "likes": null, "poster": "Chenzhi_Jiang" } ]
false
Expanding Variable with zero-padding (for DepthConcat)
null
[ { "contents": "For example, <SCODE># a: 1x2x2, b: 1x4x4\na = Variable(torch.FloatTensor([[[1,2],[3,4]]]))\nb = Variable(torch.FloatTensor([[[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4]]]))\n<ECODE> What I want to get from cat([a, b], 0) is <SCODE>(0 ,.,.) =\n0 0 0 0\n0 1 2 0\n0 3 4 0\n0 0 0 0\n\n(1 ,.,.) =\n1 1 1 1\n2 2 2 2\n3 3 3 3\n4 4 4 4\n<ECODE> I may try to use narrow() and copy_(), but I’m not sure copying to Variable in forward() is ok.", "isAccepted": false, "likes": null, "poster": "thnkim" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "thnkim" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "thnkim" } ]
false
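A sketch of the padded concatenation using `F.pad`, which reproduces the desired output in the first post without manual `narrow`/`copy_`:
<SCODE>
import torch
import torch.nn.functional as F

a = torch.tensor([[[1., 2.], [3., 4.]]])            # 1x2x2
b = torch.tensor([[[1., 1., 1., 1.],
                   [2., 2., 2., 2.],
                   [3., 3., 3., 3.],
                   [4., 4., 4., 4.]]])              # 1x4x4

pad_h = b.size(1) - a.size(1)
pad_w = b.size(2) - a.size(2)
# F.pad takes (left, right, top, bottom) for the last two dimensions.
a_padded = F.pad(a, (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2))
out = torch.cat([a_padded, b], dim=0)               # 2x4x4, DepthConcat-style
print(out)
<ECODE>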
How can i change nn.LSTM to eval mode?
null
[ { "contents": "<SCODE>rnn = nn.LSTM(10, 20, 2)\ninput = Variable(torch.randn(5, 3, 10))\noutput, hn = rnn(input)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I solved this problem as you said. I should have thought a bit more. Thanks.", "isAccepted": false, "likes": null, "poster": "yunjey" } ]
false
Is there any difference between torch.sigmoid and torch.nn.functional.sigmoid?
null
[ { "contents": "", "isAccepted": false, "likes": 4, "poster": "Teaonly" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 20, "poster": "fmassa" }, { "contents": "Thanks, i see. More different is help message, there are help text for python object, no text for registered functions in C.", "isAccepted": false, "likes": 1, "poster": "Teaonly" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "cakeeatingpolarbear" }, { "contents": "<SCODE>UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "Haydnspass" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "drevicko" } ]
false
Siamese network not converging
null
[ { "contents": "I have implemented a Siamese network for text similarity. Here is something I observed. When I feed the two sequence batches (one batch of left sequences and another batch of right sequences in separate autograd vars) separately to the LSTM and then compute similarity on the last hidden state of the output, the model works just fine. But If I feed in both the text sequences as a single input batch and then use alternative indexing to separate out the first and second set of sequences (even indices are the left sequence and the odd indices are the right sequence) and then compute similarity, the model doesn’t converge and the outputs are random. Am I somehow losing the gradients when I am slicing the hidden state variable?", "isAccepted": false, "likes": null, "poster": "Nilabhra" }, { "contents": "What version are you running on?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "n0obcoder" } ]
false
CUDA out of memory when a Function is used multiple times
null
[ { "contents": "Hi, there Sample code as below: <SCODE>\nimport torch\nfrom torch.autograd import Function\nfrom torch.autograd import Variable\n\n\nclass Identity(Function):\n\n def forward(self, input):\n return input\n\n def backward(self, grad_output):\n return grad_output\n\n\nclass Linear(Function):\n\n def forward(self, input, weight):\n self.save_for_backward(input, weight)\n return input.mm(weight.t())\n\n def backward(self, grad_output):\n\n input, weight = self.saved_tensors\n grad_input = grad_weight = None\n\n if self.needs_input_grad[0]:\n grad_input = grad_output.mm(weight)\n if self.needs_input_grad[1]:\n grad_weight = grad_output.t().mm(input)\n\n return grad_input, grad_weight\n\n\nx = Variable(torch.rand(4000, 3000).cuda(), requires_grad=True)\nw = Variable(torch.rand(3000, 3000).cuda(), requires_grad=True)\n\ngrad_output = torch.rand(4000, 3000).cuda()\n\nlr = 0.01\nfor i in range(10000):\n\n # (1) cuda memory stays the same\n # identity = Identity()\n # loss1 = identity(x)\n # loss2 = identity(x)\n\n # (2) cuda memory continuously increase\n linear = Linear()\n loss1 = linear(x, w)\n loss2 = linear(x, w)\n\n loss = loss1 + loss2\n loss.backward(grad_output)\n\n x.data = x.data - lr * x.grad.data\n x.grad.data.zero_()\n\n if i % 100 == 0:\n print(i)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "xhwang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Module can be reused. However, when the same Module calls forward() multiple times, it actually creates a new Function object each time, only the Parameters in it (e.g. weight in nn.Linear) are shared. Am I right? Thanks.", "isAccepted": false, "likes": null, "poster": "xhwang" }, { "contents": "Yes. Modules are safe to be reused, Functions aren’t.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
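In current PyTorch, autograd Functions use static `forward`/`backward` methods with a `ctx` object, which removes the instance-reuse pitfall discussed above. A sketch of the same `Linear` example in that style:
<SCODE>
import torch
from torch.autograd import Function

class Linear(Function):
    # Static forward/backward keep all per-call state in `ctx`, so there is no
    # Function instance to accidentally reuse across calls.
    @staticmethod
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        return input.mm(weight.t())

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        grad_input = grad_output.mm(weight) if ctx.needs_input_grad[0] else None
        grad_weight = grad_output.t().mm(input) if ctx.needs_input_grad[1] else None
        return grad_input, grad_weight

x = torch.rand(8, 3, requires_grad=True)
w = torch.rand(4, 3, requires_grad=True)
loss = Linear.apply(x, w).sum() + Linear.apply(x, w).sum()   # calling twice is safe
loss.backward()
<ECODE>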
PyTorch Tutorial for Deep Learning Researchers
null
[ { "contents": "Hi,", "isAccepted": false, "likes": 16, "poster": "yunjey" }, { "contents": "Yes whoever came up with pytorch’s high level design was a genius. I think its design is objectively superior to any other python framework. In TF or Theano you invariably end up ditching the object oriented style (if you had one to begin at all), in pytorch it makes too much sense to ditch.", "isAccepted": false, "likes": 3, "poster": "Veril" }, { "contents": "We’re glad that you like it.", "isAccepted": false, "likes": 10, "poster": "smth" }, { "contents": "What am I doing wrong?", "isAccepted": false, "likes": null, "poster": "ninzine" }, { "contents": "Great tutorial!", "isAccepted": false, "likes": null, "poster": "Mike" }, { "contents": "Adding my own recommended list:", "isAccepted": false, "likes": 1, "poster": "QuantScientist" } ]
false
Torch dependencies when installing from source
null
[ { "contents": "I have installed PyTorch from source using an Anaconda environment. <SCODE>$ ldd torch/lib/libTHC.so.1 \n\tlinux-vdso.so.1 => (0x00007fffeebe7000)\n\tlibcudart.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudart.so.8.0 (0x00007fb1aac3a000)\n\tlibcublas.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcublas.so.8.0 (0x00007fb1a8289000)\n\tlibTH.so.1 => /home/tudor/pytorch/torch/lib/libTH.so.1 (0x00007fb1a7c92000)\n\tlibcurand.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcurand.so.8.0 (0x00007fb1a3d29000)\n\tlibcusparse.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcusparse.so.8.0 (0x00007fb1a121a000)\n\tlibstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fb1a0e98000)\n\tlibm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fb1a0b8f000)\n\tlibgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fb1a0978000)\n\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb1a05af000)\n\t/lib64/ld-linux-x86-64.so.2 (0x0000562627347000)\n\tlibdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb1a03ab000)\n\tlibpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb1a018d000)\n\tlibrt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fb19ff85000)\n\tlibmkl_intel_lp64.so => not found\n\tlibmkl_intel_thread.so => not found\n\tlibmkl_core.so => not found\n\tlibiomp5.so => not found\n\tlibgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007fb19fd61000)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Tudor_Berariu" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
How to do repeat operation like numpy.repeat in pytorch on Variable like tensor?
null
[ { "contents": "Assume that I have got a tensor like below: Are there any good approaches for doing above operation?", "isAccepted": false, "likes": 1, "poster": "mderakhshani" }, { "contents": "Is this what you are looking for? You may want to copy() it after that if you actually need to change 7x7 panes independently.", "isAccepted": false, "likes": 1, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "mderakhshani" }, { "contents": "You should also look into torch.repeat. That’s also similar and useful in some cases!", "isAccepted": false, "likes": null, "poster": "SpandanMadan" }, { "contents": "When should you use repeat vs expand?", "isAccepted": false, "likes": null, "poster": "himat" }, { "contents": "‘repeat’ copies the original data and allocates new memory and ‘expand’ creates a new view on the existing tensor.", "isAccepted": false, "likes": null, "poster": "Qing_En" } ]
false
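A short sketch of the `expand` (view, no copy) versus `repeat` (real copy) distinction summarized in the last posts:
<SCODE>
import torch

x = torch.randn(1, 7, 7)

y_view = x.expand(512, 7, 7)      # no new memory: every pane is a view of x
y_copy = x.repeat(512, 1, 1)      # allocates a real 512x7x7 tensor

print(y_view.size(), y_copy.size())
y_copy[0].zero_()                 # panes of the copy can be changed independently
print(torch.equal(y_view[0], y_view[511]))   # True: expanded panes always stay identical
<ECODE>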
torch.cuda.FloatTensor type and torch.FloatTensor
null
[ { "contents": "i have encountered the following error: (float value)\ndidn’t match because some of the arguments have invalid types: (torch.cuda.FloatTensor) (torch.FloatTensor other)\ndidn’t match because some of the arguments have invalid types: (torch.cuda.FloatTensor) (float value, torch.FloatTensor other)", "isAccepted": false, "likes": null, "poster": "Shiyu" }, { "contents": "Could you try: <SCODE>gradient = torch.ge(inputs.grad.data, 0.0)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Problem solved, thanks. Here is another problem, how do convert a torch.cuda.floatTensor type tensor to a torch.floatTensor. Thanks.", "isAccepted": false, "likes": null, "poster": "Shiyu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Thanks a lot!!!", "isAccepted": false, "likes": null, "poster": "Shiyu" } ]
false
Why GPU memory usage keeps ceaselessly growing when training the model?
null
[ { "contents": "Hello everyone. Recently, I implemented a simple recursive neural network. When training this model on sample/small data set, everything works fine. However, when training it on large data and on GPUs, “out of memory” is raised. Along with the training goes on, usage of GPU memory keeps growing up. So, I want to know, why does this happen? I would be grateful if you could help. The model and training procedure are defined as follow: <SCODE> def train_step(self, data):\n train_loss = 0\n for _data in data:\n p_tree = _data['p_tree']\n h_tree = _data['h_tree']\n if args.cuda:\n target = Variable(torch.LongTensor([_data['label']]).cuda())\n else:\n target = Variable(torch.LongTensor([_data['label']]))\n self.optimizer.zero_grad()\n # self.model is an instance of class RootAlign\n output = self.model(p_tree, h_tree)\n loss = F.nll_loss(output, target)\n loss.backward()\n self.optimizer.step()\n train_loss += loss.data[0]\n return train_loss\n\nclass RootAlign(nn.Module):\n def __init__(self, word_embedding, config):\n super(RootAlign, self).__init__()\n self.rnn = VanillaRecursiveNN(word_embedding, config['hidden_dim'], config['cuda_flag'])\n self.linear = nn.Linear(config['hidden_dim'] * 2, config['relation_num'])\n\n def forward(self, p_tree, h_tree):\n p_tree.postorder_traverse(self.rnn)\n h_tree.postorder_traverse(self.rnn)\n\n out = F.log_softmax(self.linear(F.sigmoid(torch.cat((p_tree.calculate_result, h_tree.calculate_result), 1))))\n return out\n\nclass VanillaRecursiveNN(nn.Module):\n def __init__(self, word_embedding, hidden_dim, cuda_flag=False):\n super(VanillaRecursiveNN, self).__init__()\n self.word_dim = word_embedding.embeddings.size(1)\n self.hidden_dim = hidden_dim\n\n self.embedding = nn.Embedding(word_embedding.embeddings.size(0),\n self.word_dim)\n self.embedding.weight = nn.Parameter(word_embedding.embeddings)\n\n self.word2hidden = nn.Linear(self.word_dim, self.hidden_dim, False)\n self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim)\n\n self.cuda_flag = cuda_flag\n\n def forward(self, node):\n if not node.val is None:\n if self.cuda_flag:\n node.calculate_result = self.word2hidden(\n self.embedding(Variable(torch.LongTensor([node.word_id]).cuda())))\n else:\n node.calculate_result = self.word2hidden(\n self.embedding(Variable(torch.LongTensor([node.word_id]))))\n return node.calculate_result\n else:\n assert len(node.children) == 2\n node.calculate_result = self.hidden2hidden(torch.cat((node.children[0].calculate_result,\n node.children[1].calculate_result), 1))\n return node.calculate_result\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I observed similar GPU memory behavior. <SCODE>def function1(A):\n B = A**2 - 2*A\n C = torch.sqrt(B)\n return C\n\ndef function2(A):\n return torch.sqrt(A**2 - 2*A)\n<ECODE> Thus, in terms of memory usage, it seems that function2 is twice efficient than function1. This doesn’t apply to all the cases, but in many cases, removing intermediate variables reduces GPU memory usage a lot in my programs. This seems to be that underlying CUDA does not free memory immediately after the moment that memory is not needed anymore. I think some GPU memory Garbage Collection method in pytorch is needed for efficient GPU memory management.", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "Both functions will consume the same amount of memory. 
The execution will look like this (in parenthesis you have current/peak memory usage in multiplies of A size): Assume A is allocated (1/1) Compute A**2 (2/2) Compute 2*A (3/3) Compute A**2 - 2*A (4/4) Free A**2 and 2*A (2/4) Compute torch.sqrt(B) (3/4) Return and free everything except the input and result (2/4) There’s no way for the framework to know when a tensor won’t be needed anymore, we don’t have that knowledge upfront, abnd this is why it’s impossible to implement any garbage collection. The memory management is already very efficient and all tensors are freed as soon as you let them go.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thanks! Things became clear.", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "Thanks for the help. I did the replacement as you stated above in the following ways. <SCODE># way No.1\nif not node.val is None:\n if self.cuda_flag:\n variable =Variable(torch.LongTensor([node.word_id]).cuda())\n else:\n variable = Variable(torch.LongTensor([node.word_id]))\n result = self.word2hidden(self.embedding(variable))\n node.calculate_result = result.data\n return node.calculate_result\n\n# way No.2\nif not node.val is None:\n if self.cuda_flag:\n node.calculate_result = self.word2hidden(self.embedding(\n Variable(torch.LongTensor([node.word_id]).cuda()))).data\n else:\n node.calculate_result = self.word2hidden(self.embedding( \n Variable(torch.LongTensor([node.word_id])))).data\n return node.calculate_result\n# way No.3\nif not node.val is None:\n if self.cuda_flag:\n result = self.word2hidden(\n self.embedding(Variable(torch.LongTensor([node.word_id]).cuda())))\n else:\n result = self.word2hidden(\n self.embedding(Variable(torch.LongTensor([node.word_id]))))\n node.calculate_result = result.data\n return node.calculate_result\n<ECODE> <SCODE>Traceback (most recent call last):\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py\", line 172, in <module>\n t.train()\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py\", line 111, in train\n train_loss = self.train_step(self.data.train)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py\", line 143, in train_step\n output = self.model(p_tree, h_tree)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/align_model.py\", line 18, in forward\n p_tree.postorder_traverse(self.rnn)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 186, in postorder_traverse\n c.postorder_traverse(func)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 186, in postorder_traverse\n c.postorder_traverse(func)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 186, in postorder_traverse\n c.postorder_traverse(func)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 187, in postorder_traverse\n func(self)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 219, in __call__\n var = var[0]\nTypeError: 'float' object has no attribute '__getitem__'\n<ECODE> <SCODE>Traceback (most recent call last):\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py\", line 172, in <module>\n t.train()\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py\", line 111, in train\n train_loss = self.train_step(self.data.train)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py\", line 143, in train_step\n 
output = self.model(p_tree, h_tree)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/align_model.py\", line 18, in forward\n p_tree.postorder_traverse(self.rnn)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 186, in postorder_traverse\n c.postorder_traverse(func)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 186, in postorder_traverse\n c.postorder_traverse(func)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 186, in postorder_traverse\n c.postorder_traverse(func)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py\", line 187, in postorder_traverse\n func(self)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree_models.py\", line 26, in forward\n self.embedding(Variable(torch.LongTensor([[node.word_id]]).cuda())))\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/linear.py\", line 52, in forward\n return self._backend.Linear()(input, self.weight)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py\", line 10, in forward\n output.addmm_(0, 1, input, weight.t())\nRuntimeError: matrix and matrix expected at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMathBlas.cu:235\n<ECODE> Anyway, thanks again for your help. Looking forward for your reply.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "I think it’s just wrong input number of fc-layer and usage of cat() fucntion <SCODE>import torch\nhidden_dim = 10\n\nx = torch.randn(hidden_dim, 1).cuda()\nprint(x.size())\ny = torch.cat((x, x), 1)\nprint(y.size())\ny = torch.cat((x, x), 0)\nprint(y.size())\n<ECODE> result is below so, fixed code is <SCODE>self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim)\n...\nnode.calculate_result = self.hidden2hidden(torch.cat((node.children[0].calculate_result, node.children[1].calculate_result), 0))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "Thanks for your help. 
However, the problem occurs in the following code: <SCODE>if not node.val is None:\n if self.cuda_flag:\n node.calculate_result = self.word2hidden(\n self.embedding(Variable(torch.LongTensor([node.word_id]).cuda())))\n else:\n node.calculate_result = self.word2hidden(\n self.embedding(Variable(torch.LongTensor([node.word_id]))))\n return node.calculate_result\n<ECODE> And, I’ve found a puzzling phenomenon, if the above code have been changed to: <SCODE>if not node.val is None:\n if self.cuda_flag:\n variable =Variable(torch.LongTensor([node.word_id]).cuda())\n else:\n variable = Variable(torch.LongTensor([node.word_id]))\n node.calculate_result = self.word2hidden(self.embedding(variable))\n return node.calculate_result \n<ECODE>", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "You should check the connectivity between network layers Try it <SCODE>self.embedding(Variable(torch.LongTensor(node.word_id).cuda())))\n<ECODE> or add squeeze variable <SCODE>variable = variable.squeeze()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "Anyway, thanks for your help.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "You don’t need to save anything for backprop, autograd will take care of that, and my solution is valid. The problems you’re having are only due to giving inputs of invalid sizes to different modules. You can print them inside your module and see if they are what you expect, and what matches the requirements specified in the docs.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Yes,you’re right. The computation in case “not node.val is None” is correct. Problem is in the computation of the other case. I’m trying to fix it. Thank you very much!", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "<SCODE>if not node.val is None:\n if self.cuda_flag:\n variable =Variable(torch.LongTensor([node.word_id]).cuda())\n else:\n variable = Variable(torch.LongTensor([node.word_id]))\n result = self.word2hidden(self.embedding(variable))\n node.calculate_result = result.data\n return node.calculate_result\n<ECODE> <SCODE>while not isinstance(var, Variable):\n var = var[0]\n<ECODE> After I do the following change, everything works again except the weights of model hasn’t been updated. 
No.1, change ni RootAlign: <SCODE>class RootAlign(nn.Module):\n def __init__(self, word_embedding, config):\n super(RootAlign, self).__init__()\n self.rnn = VanillaRecursiveNN(word_embedding, config['hidden_dim'], config['cuda_flag'])\n self.linear = nn.Linear(config['hidden_dim'] * 2, config['relation_num'])\n\n def forward(self, p_tree, h_tree):\n p_tree.postorder_traverse(self.rnn)\n h_tree.postorder_traverse(self.rnn)\n\n p_result = Variable(p_tree.calculate_result)\n h_result = Variable(h_tree.calculate_result)\n out = F.log_softmax(self.linear(F.sigmoid(\n torch.cat((p_result, h_result), 1))))\n return out\n<ECODE> No.2 Change in VanillarRecursiveNN: <SCODE>def forward(self, node):\n if not node.val is None:\n if self.cuda_flag:\n result = self.word2hidden(self.embedding(\n Variable(torch.LongTensor([node.word_id]).cuda())))\n else:\n result = self.word2hidden(self.embedding(\n Variable(torch.LongTensor([node.word_id]))))\n node.calculate_result = result.data\n return result\n else:\n assert len(node.children) == 2\n l_result = Variable(node.children[0].calculate_result)\n r_result = Variable(node.children[1].calculate_result)\n result = self.hidden2hidden(torch.cat((l_result, r_result), 1))\n node.calculate_result = result.data\n return result\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Well, I guess that this is the key to address the problem. I’ll try immediately.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "I have an additional question. How to batch tree data when training model? Every tree has their own structure. How can I batch them under the current implementation of Recursive model? ps: The time of every epoch is about 4.5 hr on SNLI.(GPU: Titan X) It takes too long.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "Your implementation is cool~ I’ll learn from it and try to batch data on my model. Thanks very much.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" } ]
false
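A minimal sketch of the memory behaviour discussed in the thread above, written with the Variable-era API the posters use (on current PyTorch, `loss.item()` replaces `loss.data[0]`). The toy loop and sizes are made up for illustration; the point is that holding on to a Variable (as the tree nodes above do with `calculate_result`) keeps its whole graph alive, while keeping only the underlying value lets each iteration's graph be freed.
<SCODE>
import torch
from torch.autograd import Variable

w = Variable(torch.randn(100, 100), requires_grad=True)

# Pattern that makes memory grow: `total` stays a Variable, so the graph built
# in every iteration is kept alive until `total` itself is dropped.
total = 0
for _ in range(1000):
    loss = (w * w).sum()
    total = total + loss        # retains 1000 graphs

# Pattern that stays flat: keep only the number, not the Variable.
total = 0.0
for _ in range(1000):
    loss = (w * w).sum()
    total += loss.data[0]       # this iteration's graph can be freed immediately
<ECODE>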
Trying to backward through the graph second time, but the buffers have already been freed
null
[ { "contents": "I just changed the RNN class to RWA from the practical pytorch’s char-rnn-classification example. <SCODE><ipython-input-214-4bee22e367f7> in train(categroy_tensor, line_tensor)\n 7 loss = criterion(output, category_tensor)\n 8 print(\"loss:\" , loss)\n----> 9 loss.backward()\n 10 \n 11 for p in rnn.parameters():\n\n/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py in backward(self, gradient, retain_variables)\n 143 'or with gradient w.r.t. the variable')\n 144 gradient = self.data.new().resize_as_(self.data).fill_(1)\n--> 145 self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n 146 \n 147 def register_hook(self, hook):\n\n/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/basic_ops.py in backward(self, grad_output)\n 37 \n 38 def backward(self, grad_output):\n---> 39 a, b = self.saved_tensors\n 40 return grad_output.mul(b), grad_output.mul(a)\n 41 \n\nRuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.\n<ECODE> The code I changed looks like below: <SCODE>import torch.nn as nn\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\n\nclass RWA(nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n super(RWA, self).__init__()\n \n self.max_steps = 1\n self.batch_size = 1\n self.hidden_size = hidden_size\n \n self.n = Variable(torch.Tensor(self.batch_size, hidden_size), requires_grad=True)\n self.d = Variable(torch.Tensor(self.batch_size, hidden_size), requires_grad=True)\n \n self.x2u = nn.Linear(input_size, hidden_size)\n self.c2g = nn.Linear(input_size + hidden_size, hidden_size)\n self.c2q = nn.Linear(input_size + hidden_size, hidden_size)\n self.out = nn.Linear(hidden_size, output_size)\n \n def forward(self, input, hidden):\n h = F.tanh(hidden)\n \n for i in range(len(input)):\n combined = torch.cat((input[i], h), 1)\n \n \n u = self.x2u(input[i])\n g = self.c2g(combined)\n q = self.c2q(combined)\n q_greater = F.relu(q)\n scale = torch.exp(-q_greater)\n a_scale = torch.exp(q-q_greater)\n self.n = (self.n * scale) + ((u * F.tanh(g)) * a_scale)\n self.d = (self.d * scale) + a_scale\n h = F.tanh(torch.div(self.n, self.d))\n output = self.out(h)\n return output, h\n\n def init_hidden(self):\n return Variable(torch.randn(1, self.hidden_size))\n\nn_hidden = 128\nrwa = RWA(n_letters, n_hidden, n_categories)\nprint(\"n_letters:\", n_letters, \"n_hidden:\", n_hidden, \"n_categories:\", n_categories)\nprint(rwa)\nn_letters: 57 n_hidden: 128 n_categories: 18\nRNN (\n (x2u): Linear (57 -> 128)\n (c2g): Linear (185 -> 128)\n (c2q): Linear (185 -> 128)\n (out): Linear (128 -> 18)\n)\n<ECODE> <SCODE>def train (categroy_tensor, line_tensor):\n hidden = rwa.init_hidden()\n rwa.zero_grad()\n output, hidden = rwa(line_tensor, hidden)\n loss = criterion(output, category_tensor)\n print(\"loss:\" , loss)\n loss.backward()\n \n for p in rwa.parameters():\n p.data.add_(-learning_rate, p.grad.data)\n return output, loss.data[0]\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "keon" }, { "contents": "Author of the RWA model here. I saw your post, and I wanted to let you know a flaw has been discovered in my code. The flaw deals with the numerical stability of the RWA model. If left uncorrected it prevents the model from forming long-term memories. Once you fix your code you may discover the issue you are having go away. 
Maybe so, maybe not.", "isAccepted": false, "likes": 1, "poster": "jostmey" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" } ]
false
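A sketch of the usual fix for this error, under the assumption (suggested by the RWA code above) that `self.n` and `self.d` are Variables created once in `__init__` and reused across training steps, so the second `backward()` walks a graph whose buffers were already freed. The class and names below are illustrative, not the poster's code.
<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable

class RunningState(nn.Module):
    def __init__(self, hidden_size):
        super(RunningState, self).__init__()
        self.hidden_size = hidden_size

    def init_state(self, batch_size):
        # Fresh accumulators for every sequence, so no graph from a previous
        # backward() is ever reused.
        n = Variable(torch.zeros(batch_size, self.hidden_size))
        d = Variable(torch.zeros(batch_size, self.hidden_size))
        return n, d

# Schematic per-sequence loop:
#   n, d = cell.init_state(batch_size)   # instead of Variables made in __init__
#   ...run the time steps, compute the loss, loss.backward(), update...
# If the state must carry over between sequences, cut its history first:
#   n, d = Variable(n.data), Variable(d.data)
<ECODE>
Passing retain_variables=True to the first backward(), as the error message suggests, also silences the error, but it keeps all buffers around and is usually not what a training loop wants.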
How to speed up for loop in customized RNN?
null
[ { "contents": "Hi, there, I am working on a new RNN unit implementation. Since the formulation is totally different with existing RNN units, I implemented everything from scratch. In order to process information in each time stamp, I used a for loop to loop through time stamps. It looks like the codes below. Unfortunately, it is much slower then its theano counterpart. I am wondering is there a special way to do the for loop in RNNs which can be faster than the naive way. So, in a more general case, is there any special way to speed up general for loop process in pytorch? <SCODE>class RNN(nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n super(RNN, self).__init__()\n self.hidden_size = hidden_size\n \n self.i2h = nn.Linear(input_size+hidden_size, hidden_size)\n self.h2o = nn.Linear(input_size+hidden_size, output_size)\n self.tanh = nn.Tanh()\n\n def forward(self, X):\n time_steps = X.size(0)\n batch_size = X.size(1)\n hidden = Variable(torch.zeros(batch_size, self.hidden_size))\n outputs = []\n hiddens = []\n for t in range(time_steps):\n x_input = X[t]\n hidden_input = hidden\n inp = torch.cat( (x_input,hidden_input), 1 )\n hidden = self.tanh(self.i2h(inp))\n output = self.h2o(inp)\n outputs.append(output)\n hidden.append(hidden)\n return torch.cat(hiddens, 1), torch.cat(outputs, 1)<ECODE>", "isAccepted": false, "likes": 6, "poster": "qianguih" }, { "contents": "I’m afraid there’s not a lot you can do at the moment. We know what issues are slowing down small RNN cells and will be fixing them once we finish the autograd refactor.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "One thing I noticed is that if you implement an RNN with complex control flow it’s actually faster to run most of it on the CPU and only offload big matrix operations to the GPU. It’s not surprising perhaps, and it applies to all frameworks. Not saying that this is the case here, just a sidenote.", "isAccepted": false, "likes": 2, "poster": "Veril" }, { "contents": "Yes, that’s because of the kernel launch latency. Dispatching the nonlinearities is often slower than computing them.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Removing the biases might help a bit. I now looked at the cell once again and I don’t think it should be slow. I think I’ll need to take a look at it in the profiler.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The input is TxBxD. T is the time steps (usually equals 4096). B is the batch size and I usually set it to 2. D is the dimension of features (which is usually 256). The dimension of hidden states is 128 and output feature dimension is 128 too.", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Is there anyway to maintain the speed that the cudnn LSTM has while adding these new features in? I don’t know if autograd has been refactored yet.", "isAccepted": false, "likes": 2, "poster": "NickShahML" }, { "contents": "cuDNN is very well optimized low-level CUDA C and/or assembly code that is specific to exactly the LSTM variant that they implement, and no framework can maintain that same speed while allowing flexibility to modify it in the ways you describe. 
But PyTorch will continue to work on optimization of use cases like this, and while right now the speed loss will probably be somewhere between 2x and 5x, it should get better over time.", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "Hello, I am also trying to implement RNN from scratch, LSTM to be specific, I am using two for loops, one for sequence time steps and other for layers, I am not sure how will autograd work on this. Can you confirm that it backprops through all time steps and layers? Also, instead of matrix multiplication in LSTM I am using convolutions, and I cannot see past the conv function in the loss.backward() as it has no attribute --> previous_functions?. Please guide. Thank you,", "isAccepted": false, "likes": null, "poster": "pjavia" }, { "contents": "Any updates on improving speed for customized RNN? I am also working with non traditional RNN so I can not use predefined cuDNN cells. I converted some of my old lua-torch codes to pytorch and they are 3 times slower. I really enjoy pytorch so I hope something can be done about it. Thanks! <SCODE>class LSTMCell(nn.Module):\n \n def __init__(self, input_size, hidden_size):\n super(LSTMCell, self).__init__()\n self.hidden_size=hidden_size\n self.lin = nn.Linear( input_size+hidden_size , 4*hidden_size )\n \n def forward(self, x, state0):\n h0,c0=state0\n x_and_h0 = torch.cat((x,h0), 1)\n u=self.lin(x_and_h0)\n i=F.sigmoid( u[ : , 0*self.hidden_size : 1*self.hidden_size ] )\n f=F.sigmoid( u[ : , 1*self.hidden_size : 2*self.hidden_size ] )\n g=F.tanh( u[ : , 2*self.hidden_size : 3*self.hidden_size ] )\n o=F.sigmoid( u[ : , 3*self.hidden_size : 4*self.hidden_size ] )\n c= f*c0 + i*g\n h= o*F.tanh(c)\n return (h,c)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "tlaurent" }, { "contents": "I’m not sure it is effective or not in this case but Numba or Cython may speed up the loop.", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "no unfortunately it is not a problem with the loop. I run the cell 1000 times and it take 4.6 sec in total. Looking at the details of each run, each of them takes in average, 0.0045 sec. So the loop account for very little. Thanks though!", "isAccepted": false, "likes": null, "poster": "tlaurent" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tlaurent" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ngimel" }, { "contents": "Thanks for the detailed answer! Things make much more sense.", "isAccepted": false, "likes": null, "poster": "tlaurent" }, { "contents": "Any updates at this moment?", "isAccepted": false, "likes": 3, "poster": "vBaiCai" } ]
false
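One generic way to shave per-step overhead in a hand-written RNN like the ones above is to hoist the input-to-hidden projection out of the time loop, computing it for all steps in a single matmul; only the hidden-to-hidden part then runs per step. This is a sketch of that idea, not the posters' exact cells.
<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable

class FastishRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(FastishRNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size, hidden_size)     # applied once, outside the loop
        self.h2h = nn.Linear(hidden_size, hidden_size)    # applied per time step

    def forward(self, X):                                  # X: (T, B, input_size)
        T, B = X.size(0), X.size(1)
        proj = self.i2h(X.view(T * B, -1)).view(T, B, -1)  # one big matmul for all steps
        h = Variable(X.data.new(B, self.hidden_size).zero_())
        outputs = []
        for t in range(T):
            h = torch.tanh(proj[t] + self.h2h(h))
            outputs.append(h)
        return torch.stack(outputs, 0)
<ECODE>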
Can datasets.ImageFolder be used like torch.utils.data.TensorDataset?
null
[ { "contents": "<SCODE> Thank you for opening my new conversation.I have little chances to speak English ,so my English is bad and maybe the way that I express is not appropriate.I am sorry for this.\n I put some pictures in folder and I use the ImageFolder to get my datasat.What i would like to know is if the dataset could bu used in net without changing to torch.utils.data.TensorDataset ?<ECODE>", "isAccepted": false, "likes": null, "poster": "vibe" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
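For reference, ImageFolder is itself a Dataset, so it can be handed straight to a DataLoader without any conversion to TensorDataset. A small sketch, with a placeholder path and the torchvision transform names of the time (Scale was later renamed Resize):
<SCODE>
import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder('path/to/my_images',
                               transform=transforms.Compose([
                                   transforms.Scale(256),
                                   transforms.CenterCrop(224),
                                   transforms.ToTensor(),
                               ]))
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

# for images, labels in loader:
#     output = net(images)      # `net` being whatever model is trained
<ECODE>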
Torch.add() error
null
[ { "contents": "When I am using torch.add(), I have encountered the following error", "isAccepted": false, "likes": null, "poster": "Shiyu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "So - I will be using torch.mul, but the question is in place, and someone has to either implement the correct function or fix the documentation", "isAccepted": false, "likes": null, "poster": "YankuHorn" }, { "contents": "<SCODE>In [1]: import torch\n\nIn [2]: a=torch.randn(10)\n\nIn [3]: torch.add(a, 0.1, a)\nOut[3]:\n\n 0.9079\n 1.1069\n-1.5801\n-0.3657\n-0.6019\n-0.5571\n 0.5797\n 0.2054\n-0.9112\n-0.7749\n[torch.FloatTensor of size 10]\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Now it is working, but doesn’t do anything: The code I wrote in a file: <SCODE>import torch\n\nq =torch.randn(10)\nprint \"1 \",q\ntorch.add(q, 0.1, q)\nprint \"2 \",q\nexit()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "YankuHorn" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "OK, :embarrassed:, i was in a hurry - looks like it’s working now. Thanks.", "isAccepted": false, "likes": null, "poster": "YankuHorn" } ]
false
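What resolves the confusion above: torch.add() is out-of-place, it returns a new tensor and never modifies its inputs, so printing q before and after shows no change. A short sketch using the call signature from the thread (newer releases spell the scaled form torch.add(q, q, alpha=0.1) and q.add_(q, alpha=0.1)):
<SCODE>
import torch

q = torch.randn(10)
r = torch.add(q, 0.1, q)   # r = q + 0.1 * q; q itself is left untouched
q.add_(0.1, q)             # trailing underscore = in-place, q is modified here
<ECODE>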
Function for Trace
null
[ { "contents": "Hello, I am trying to write a Function for torch.trace (seems not to exist so far). Here is my code: <SCODE>class CustomTrace(torch.autograd.Function):\n \n def forward(self, mat):\n self.save_for_backward(mat)\n return torch.Tensor([torch.trace(mat)])\n \n def backward(self, g):\n mat, = self.saved_tensors\n \n return torch.mul(torch.eye(int(mat.size()[0])),g[0])\n<ECODE> When I try to check my result, its precision is not optimal, i.e. <SCODE>input = (torch.autograd.Variable(torch.randn(2,2).double(), requires_grad = True),)\ntorch.autograd.gradcheck(CustomTrace(), input, eps = 1e-4, atol = 1e-3)\n<ECODE> returns true. However, <SCODE>input = (torch.autograd.Variable(torch.randn(2,2).double(), requires_grad = True),)\ntorch.autograd.gradcheck(CustomTrace(), input, eps = 1e-6, atol = 1e-4)\n<ECODE> returns false. I would hope that my math is right. Is there any problem with my code or is this for some reason to be expected? Thanks a lot!", "isAccepted": false, "likes": null, "poster": "DasPantom" }, { "contents": "You dont need to save the entire mat for backwards, only mat.size()\n \nmat.size()[0] -> mat.size(0)\n you dont handle trace for non-square inputs. the output return type is not the same as the input return type, because you use torch.Tensor. Instead use mat.new()\n Overall, here’s a modified version that handles non-square inputs, and implements the other suggestions. <SCODE>class CustomTrace(torch.autograd.Function):\n \n def forward(self, input):\n self.isize = input.size()\n return input.new([torch.trace(input)])\n \n def backward(self, grad_output):\n isize = self.isize\n grad_input = grad_output.new(isize).copy_(torch.eye(*isize))\n grad_input.mul_(grad_output[0])\n return grad_input\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thanks a lot! Amazing to see how much these few lines of code can be improved. Thanks for pointing out that trace also works for non-square matrices (the mathematician in me assumed that trace only works for square matrices). I will try to implement this. Edit: I see. It already works for non-square matrices. Thanks a lot.", "isAccepted": false, "likes": null, "poster": "DasPantom" }, { "contents": "correlation_product = lambda f, d: torch.mul(d, torch.dot(f, d).expand_as(d)) <SCODE>print(gradcheck(correlation_product, (f, d), eps=1e-6, atol=1e-3))\nprint(gradcheck(correlation_product, (f, d), eps=1e-6, atol=1e-2))\nprint(gradcheck(correlation_product, (f, d), eps=1e-6, atol=1e-1))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "tachim" }, { "contents": "you should ideally do gradchecks in double precision. Float precision might not be enough for finite difference to agree with analytical gradient.", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "That was exactly the problem – thanks!", "isAccepted": false, "likes": null, "poster": "tachim" }, { "contents": "Just mentioning for everyone that gets here using a search engine.", "isAccepted": false, "likes": 3, "poster": "Florian_1990" } ]
false
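Putting the two points of this thread together, here is the thread's own correlation_product checked in double precision, where the finite-difference and analytical gradients agree at tight tolerances; the input size (5) is a placeholder.
<SCODE>
import torch
from torch.autograd import Variable, gradcheck

# Same function as quoted in the thread.
correlation_product = lambda f, d: torch.mul(d, torch.dot(f, d).expand_as(d))

f = Variable(torch.randn(5).double(), requires_grad=True)
d = Variable(torch.randn(5).double(), requires_grad=True)
print(gradcheck(correlation_product, (f, d), eps=1e-6, atol=1e-4))   # True
<ECODE>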
What’s the channel order of images for resnet model in model_zoo?
vision
[ { "contents": "BGR or RGB? And also what is the scale of input images? I’ve seen tutorials with both [0,255] and [0,1] scales.", "isAccepted": false, "likes": null, "poster": "Yufeng_Ma" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Cysu" }, { "contents": "here is a tutorial <SCODE>normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n\nbatch_size = 256 \ntrain_loader = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir,\n transforms.Compose([\n transforms.RandomSizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,])),\n batch_size=batch_size,\n shuffle=True,\n num_workers=4)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "jhjungCode" } ]
false
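Summarising the thread: the torchvision pretrained models take RGB images scaled to [0, 1] (which ToTensor already does) and then normalized with the ImageNet mean/std quoted above. A single-image sketch, with a placeholder file name and the transform names of the time (Scale is Resize in later torchvision):
<SCODE>
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),          # HWC uint8 [0, 255] -> CHW float [0, 1], RGB order kept
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open('some_image.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)   # shape (1, 3, 224, 224)
<ECODE>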
Selecting/discarding specific blocks from resnet model
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Min-Je_Choi" }, { "contents": "I think this is a answer of your question", "isAccepted": false, "likes": 1, "poster": "jhjungCode" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Min-Je_Choi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
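Since the linked snippet is not quoted here, a sketch of the usual pattern for keeping only part of a pretrained resnet: take its children in order and wrap the wanted prefix in a new nn.Sequential. The [:-2] slice below (dropping avgpool and fc) is just one example cut point.
<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable
from torchvision import models

resnet = models.resnet18(pretrained=False)              # True to pull the trained weights
trunk = nn.Sequential(*list(resnet.children())[:-2])    # keep everything up to layer4

x = Variable(torch.randn(1, 3, 224, 224))
features = trunk(x)                                      # (1, 512, 7, 7) feature maps
<ECODE>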
How to cite PyTorch as a reference
null
[ { "contents": "I implemented my model with pytorch framework for its concise and efficient design, and I wonder if there is any BibTex for citing?", "isAccepted": false, "likes": null, "poster": "111" }, { "contents": "No, we don’t have any papers published yet. You can use a link to our github repository.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Deserialize Lua class error
null
[ { "contents": "Hi, How do I fix this?", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "Load your model in Lua, convert all cudnn modules to nn modules, load it in PyTorch", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Data loader error after transforming data
null
[ { "contents": "Hi guys, I was wondering if someone can help me out on this one. My problem is the following. I load the mnist dataset using the data loader. I transform the data to numpy to do some operations and transform it back to torch.Tensor. then I do the following: <SCODE>train = torch.utils.data.TensorDataset(train_data, train_ds.train_labels)\ntrain_loader = torch.utils.data.DataLoader(train, batch_size=args.batch_size, shuffle=True)\n<ECODE> <SCODE>TypeError Traceback (most recent call last)\n/home/user/mnist.py in <module>()\n 141\n 142 for epoch in range(1, args.epochs + 1):\n--> 143 train(epoch)\n 144 test(epoch)\n 145\n\n/home/mnist.py in train(epoch)\n 111 data, target = Variable(data), Variable(target)\n 112 optimizer.zero_grad()\n--> 113 output = model(data)\n 114 loss = F.nll_loss(output, target)\n 115 loss.backward()\n\n/home/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208\n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n/home/ in forward(self, x)\n 88\n 89 def forward(self, x):\n---> 90 x = F.relu(self.linear1(x))\n 91 x = F.linear_drop(x)\n 92 x = F.relu(self.linear2(x))\n\n/home/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208\n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n/home/lib/python2.7/site-packages/torch/nn/modules/linear.pyc in forward(self, input)\n 51 return self._backend.Linear()(input, self.weight)\n 52 else:\n---> 53 return self._backend.Linear()(input, self.weight, self.bias)\n 54\n 55 def __repr__(self):\n\n/home/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.pyc in forward(self, input, weight, bias)\n 8 self.save_for_backward(input, weight, bias)\n 9 output = input.new(input.size(0), weight.size(0))\n---> 10 output.addmm_(0, 1, input, weight.t())\n 11 if bias is not None:\n 12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand\n\nTypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of:\n * (torch.DoubleTensor mat1, torch.DoubleTensor mat2)\n * (torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)\n * (float beta, torch.DoubleTensor mat1, torch.DoubleTensor mat2)\n * (float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)\n * (float beta, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)\n * (float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)\n * (float beta, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)\n * (float beta, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)\n\n> /home/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.py(10)forward()\n 8 self.save_for_backward(input, weight, bias)\n 9 output = input.new(input.size(0), weight.size(0))\n---> 10 output.addmm_(0, 1, input, weight.t())\n 11 if bias is not None:\n 12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand\n<ECODE>", "isAccepted": true, "likes": null, "poster": "kirk86" }, { "contents": "The type of data from dataloader seem to be float64, so, you can change this float32 like below <SCODE># if img is numpy\nimg = img.astype('float32')\n# if img is torch.tensor\nimg = img.float()\n<ECODE>", "isAccepted": 
true, "likes": null, "poster": "jhjungCode" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "kirk86" } ]
true
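A compact version of the accepted fix: cast the transformed data to float32 (and the labels to long) before building the TensorDataset, so they match the model's FloatTensor weights. The arrays below are random stand-ins for the transformed MNIST data.
<SCODE>
import numpy as np
import torch

train_array = np.random.rand(100, 784)          # numpy defaults to float64
label_array = np.random.randint(0, 10, 100)

train_data = torch.from_numpy(train_array).float()    # float64 -> float32
train_labels = torch.from_numpy(label_array).long()

train = torch.utils.data.TensorDataset(train_data, train_labels)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)
<ECODE>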
Masked loss function (solved)
vision
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "bodokaiser" } ]
false
Training and using RNNs
null
[ { "contents": "I understand there might be two questions here, one related to general LSTM networks, the other about specific PyTorch implementation.", "isAccepted": false, "likes": null, "poster": "AndreaCogliati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Is it possible to use third party torch7 packages in pytorch?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "No, we don’t support loading Lua modules in Python", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "No there’s not, but I’m pretty sure you can find something in a (much richer) Python ecosystem. You can easily convert between tensors and numpy arrays.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "I’m afraid that won’t work. PyTorch uses a slightly different version of C backends (0 vs 1-based indexing), and when you load Lua packages with lutorpy, it will resolve the Lua backend symbols to 0-based ones, likely leading to errors in weird places.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "No, PyTorch packages have been redesigned and rewritten from scratch.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "kirk86" }, { "contents": "I believe the functions you’re looking for are implemented in scipy and/or scikit-learn, and you can use them by calling .numpy and .from_numpy on Torch tensors.", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" } ]
false
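A small sketch of the numpy round trip suggested above, so scipy/scikit-learn routines can fill in for the missing Lua packages; the fft call is only an example of "some numpy routine", not a specific recommendation.
<SCODE>
import numpy as np
import torch

t = torch.randn(3, 4)
a = t.numpy()                  # torch -> numpy (shared memory, no copy)
a_processed = np.fft.fft(a)    # any numpy/scipy routine stands in here

# numpy -> torch; ascontiguousarray because the .real view is strided
t2 = torch.from_numpy(np.ascontiguousarray(a_processed.real))
<ECODE>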
Merge two variable-length sequences together?
null
[ { "contents": "Hi, all How can I merge two variable sequences together? Like the example below, with word and image token sequence (batch_first=False) and their length Any hints here? Thanks so much for your help.", "isAccepted": false, "likes": 2, "poster": "Yufeng_Ma" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for the note. Yeah. torch.cat can help. But what I further need to do is to copy the last few tokens’ embedding to previous slots if w_len is less than the max_len (20 here) in first token sequence. That may needs a for loop. So do you know any other efficient ways of dong this. Thanks for your precious time.", "isAccepted": false, "likes": null, "poster": "Yufeng_Ma" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "BTW, do we need to fill them with 0’s after ‘torch.cat’ if we’ve provided the length for each sequence? I don’t think RNN will compute sequence tokens after the timestep of sequence length , right?", "isAccepted": false, "likes": null, "poster": "Yufeng_Ma" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Does your problem is similar to mine?", "isAccepted": false, "likes": 1, "poster": "leoribeiro" }, { "contents": "Hi, Does this fit your requirement? This may be not the efficient way, but it works. <SCODE>seq_a = torch.tensor([5,6,7,8,0,0])\nseq_b = torch.tensor([6,9,1,2,3,4,0,0])\nresult = torch.cat([seq_a[seq_a.nonzero()], seq_b[seq_b.nonzero()]])\nzeros = len(seq_a) - seq_a.nonzero().numel() + len(seq_b) - seq_b.nonzero().numel()\nresult = torch.cat([result, torch.zeros(zeros).unsqueeze(-1).long()]).squeeze(-1)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "MariosOreo" }, { "contents": "I think this works, but it seems only works for one-dimension tensor", "isAccepted": false, "likes": null, "poster": "Allan_Jie" }, { "contents": "<SCODE>x = torch.tensor(\n [[ -2.0813, -1.3431, 0],\n [ 1.7704, 0, 0],\n [ 0.1229, 1.2149, 0.5455]])\ny= torch.tensor(\n [[ 1.6999, -0.7491, -1.9227, 0],\n [ 0.4784, -0.2545, 0, 0],\n [ -1.5127, -0.1439, -0.1829, -0.3703]])\n\n## first we concat the tensor\nconcated_tensor = torch.cat([x,y], dim=-1)\n## The key is to create this index:\nindex = torch.tensor([[2,3,4,5], [1,2,3,4], [3,4,5,6]])\n## Finally we call\nconcated_tensor.scatter_(1, index, y)\n<ECODE> We finally obtain something like this: <SCODE>tensor([[ -2.0813, -1.3431, 1.6999, -0.7491, -1.9227, 0, 0],\n [ 1.7704, 0.4784, -0.2545, 0,0,0,0],\n [ 0.1229, 1.2149, 0.5455, -1.5127, -0.1439, -0.1829, -0.3703]])\n<ECODE> <SCODE>tmp_idx = torch.bincount((x!=0).nonzero()[:,0], minlength=3) ##3 is batch size\n## tmp_idx = tensor([2, 1, 3])\n## Create a ones tensor\npad_ones = torch.ones(y.size(0), y.size(1)-1)\n##>>> pad_ones\n##tensor([[1., 1., 1.],\n ## [1., 1., 1.],\n ## [1., 1., 1.]])\nupdate_index = torch.cat([num.unsqueeze(-1), pad_ones.long()], dim=-1)\nindex = torch.cumsum(update_index,dim=1)\n<ECODE> You will end up with this index: <SCODE>tensor([[2, 3, 4, 5],\n [1, 2, 3, 4],\n [3, 4, 5, 6]])\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Allan_Jie" }, { "contents": "<SCODE>x = torch.tensor(\n [[ -2.0813, -1.3431, 0],\n [ 1.7704, 0, 0],\n [ 0.1229, 1.2149, **0**]])\ny= torch.tensor(\n [[ 1.6999, -0.7491, -1.9227, 0],\n [ 0.4784, -0.2545, 0, 0],\n [ -1.5127, -0.1439, -0.1829, -0.3703]])\n<ECODE> this output will be: <SCODE>tensor([[-2.0813, -1.3431, 1.6999, -0.7491, -1.9227, 0.0000, 0.0000],\n [ 1.7704, 
0.4784, -0.2545, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.1229, 1.2149, -1.5127, -0.1439, -0.1829, -0.3703, -0.3703]])\n<ECODE> while the correct output should be: <SCODE>tensor([[-2.0813, -1.3431, 1.6999, -0.7491, -1.9227, 0.0000, 0.0000],\n [ 1.7704, 0.4784, -0.2545, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.1229, 1.2149, -1.5127, -0.1439, -0.1829, -0.3703, 0]])\n<ECODE> (see bottom right). The solution is to zero-out the remainder of the sequence after the scatter operation: <SCODE>def length_to_mask(length, max_len=None):\n assert len(length.shape) == 1, 'Length shape should be 1 dimensional.'\n max_len = length.max() if not max_len else max_len\n return torch.arange(max_len)[None, :].to(length.device) < length[:, None]\n\n...\nmerged_len = length_to_mask(len_x + lens_y, max_len=concated_tensor.shape[1])\nconcated_tensor.scatter_(1, index, y)\nconcated_tensor[~merged_len] = 0\n<ECODE> Hope that helps!", "isAccepted": false, "likes": 1, "poster": "Felix_Kreuk" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Allan_Jie" } ]
false
Grad from model param undefined
null
[ { "contents": "<SCODE>from __future__ import print_function\n\nimport argparse\nimport os\nimport sys\n\nimport torch\nimport torch.multiprocessing as mp\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass MyModel(torch.nn.Module):\n def __init__(self):\n super(MyModel, self).__init__()\n self.main = nn.Sequential(\n nn.Linear(256,6),nn.Softmax())\n def forward(self,x):\n return self.main(x)\n\ndef train(model):\n # This for loop will break sharing of gradient buffers. It's not\n # necessary but it reduces the contention, and has a small memory cost\n # (equal to the total size of parameters).\n for param in model.parameters():\n param.grad.data = param.grad.data.clone()\n # Construct data_loader, optimizer, etc.\n for data, labels in data_loader:\n input1 = torch.ones(256)\n optimizer.zero_grad()\n loss_fn(model(input1), torch.zeros(6)).backward()\n optimizer.step() # This will update the shared parameters\n\nif __name__ == '__main__':\n num_processes = 4\n model = MyModel()\n # NOTE: this is required for the ``fork`` method to work\n model.share_memory()\n processes = []\n for rank in range(num_processes):\n p = mp.Process(target=train, args=(model,))\n p.start()\n processes.append(p)\n for p in processes:\n p.join()\n<ECODE> I get the following error: <SCODE>Process Process-4:\nTraceback (most recent call last):\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n self.run()\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 114, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/jtremblay/code/pytorch-a3c/grad_example.py\", line 25, in train\n param.grad.data = param.grad.data.clone()\nAttributeError: 'NoneType' object has no attribute 'data'\nProcess Process-2:\nTraceback (most recent call last):\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n self.run()\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 114, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/jtremblay/code/pytorch-a3c/grad_example.py\", line 25, in train\n param.grad.data = param.grad.data.clone()\nAttributeError: 'NoneType' object has no attribute 'data'\nProcess Process-3:\nTraceback (most recent call last):\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n self.run()\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 114, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/jtremblay/code/pytorch-a3c/grad_example.py\", line 25, in train\n param.grad.data = param.grad.data.clone()\nAttributeError: 'NoneType' object has no attribute 'data'\nProcess Process-1:\nTraceback (most recent call last):\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n self.run()\n File \"/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py\", line 114, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/jtremblay/code/pytorch-a3c/grad_example.py\", line 25, in train\n param.grad.data = param.grad.data.clone()\nAttributeError: 'NoneType' object has no attribute 'data'\n<ECODE> I know I could create them manually but I am not sure this is a wanted behaviour.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "So if you just remove the cloning your code should work fine.", "isAccepted": false, "likes": null, "poster": 
"colesbury" }, { "contents": "You can remove the initial for loop now. It’s no longer needed.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "That is fantastic, do you think we could update the documentation?", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "Yes, it should definitely be updated! Could you please send a PR?", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
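Following the change described above (gradients are now allocated lazily), any code that touches param.grad before the first backward() needs a guard; a minimal sketch:
<SCODE>
import torch.nn as nn

model = nn.Linear(256, 6)
for param in model.parameters():
    if param.grad is not None:       # None until the first backward() allocates it
        param.grad.data.zero_()
<ECODE>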
Loading weights for CPU model while trained on GPU
null
[ { "contents": "This is not a very complicated issue, but I am not sure what is the best way to load the weights into the cpu when the model was trained on a GPU, thus here is my solution: <SCODE>model = torch.load('mymodel')\nself.model = model.cpu().double()\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "jtremblay" }, { "contents": "<SCODE>torch.load('my_file.pt', map_location=lambda storage, loc: storage)\n<ECODE>", "isAccepted": false, "likes": 23, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "achaiah" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Perfect, thanks for the clarification. I ended up using the conversion suggested in your linked post.", "isAccepted": false, "likes": null, "poster": "achaiah" }, { "contents": "Thanks for the suggestion!", "isAccepted": false, "likes": null, "poster": "jadore801120" }, { "contents": "torch.load('my_file.pt', map_location=lambda storage, loc: storage)", "isAccepted": false, "likes": 3, "poster": "tralgu" }, { "contents": "I have a related question, I have a shared model trained on GPU, and another process needs this model for inference on CPU. So I use a shared model and use the following command to load this shared model <SCODE> cpu_model.load_state_dict(gpu_model.cpu().state_dict())\n<ECODE> however, this won’t work and returns CUDA error(3), initialization error, what happened?", "isAccepted": false, "likes": 4, "poster": "Peter_Ham" }, { "contents": "If I load model with your hack and set model.train(‘True’) and trying even inference it fails. So it do not work for all cases (finetune on CPU after training on GPU not working).", "isAccepted": false, "likes": null, "poster": "Marat" }, { "contents": "<SCODE>use_cuda = torch.cuda.is_available()\nDEVICE = torch.device('cuda' if use_cuda else 'cpu') # 'cpu' in this case\n\ncpu_model = your_model()\ncpu_model.load_state_dict(torch.load(path_to_your_saved_gpu_model, map_location=DEVICE)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "TSLsun" }, { "contents": "thanks, worked for me", "isAccepted": false, "likes": null, "poster": "A_Rza_SH" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "prashanth" } ]
false
Continuous action A3C
reinforcement-learning
[ { "contents": "Hi, I wonder if anyone has got A3C working with continuous actions? I guessed it would be a good idea to ask first before trying to do it, as there’s probably a good reason why no one’s got it to work yet? So for example Open AIs pendulum, as only got a state/observation vector of 3, so there’s no need for any conv’s in the Actor-Critic module, basically I’m trying, <SCODE>lstm_out = 256\nenc_in = 3 # for pendulum\nenc_hidden = 200\nenc_out = lstm_out\n\nclass ActorCritic(nn.Module):\n\n def __init__(self , lstm_in ):\n super(ActorCritic, self).__init__( ) \n self.fc_enc_in = nn.Linear(enc_in,enc_hidden) # enc_input_layer\n self.fc_enc_out = nn.Linear(enc_hidden,enc_out) # enc_output_layer \n self.lstm = nn.LSTMCell(lstm_in, lstm_out)\n self.actor_mu = nn.Linear(lstm_out, 1)\n self.actor_sigma = nn.Linear(lstm_out, 1)\n self.critic_linear = nn.Linear(lstm_out, 1)\n self.train()\n\n def forward(self, inputs):\n \n x, (hx, cx) = inputs\n\n x = F.relu(self.fc_enc_in(x))\n x = self.fc_enc_out(x)\n\n hx, cx = self.lstm(x, (hx, cx))\n x = hx\n\n return self.critic_linear(x), self.actor_mu(x), self.actor_sigma(x), (hx, cx)\n<ECODE> The initialisation code in main.py, then looks like, <SCODE>env = gym.envs.make(\"Pendulum-v0\")\nlstm_in = 3 \nglobal_model = ActorCritic( lstm_in )\nglobal_model.share_memory()\nlocal_model = ActorCritic( lstm_in )\n<ECODE> And the training code is where I get confused (as usual) ???, <SCODE>env = gym.envs.make(\"Pendulum-v0\")\ns0 = env.reset()\ndone = True\nstate = torch.from_numpy(s0).float().unsqueeze(0) \nvalue, mu, sigma, (hx, cx) = local_model((Variable(state), (hx, cx)))\n\n#mu = mu.clamp(-1, 1) # constain to sensible values \nSoftplus=nn.Softplus() \nsigma = Softplus(sigma + 1e-5) # constrain to sensible values\nnormal_dist = torch.normal(mu, sigma) \n\nprob = normal_dist\nlog_prob = torch.log(prob)\nentropy = 0.5 * (torch.log(2. * np.pi * sigma ) + 1.)\n\n##--------------------------------------------------------------\n# TODO Calculate the Gaussian neg log-likelihood, log(1/sqrt(2sigma^2pi)) - (x - mu)^2/(2*sigma^2)\n# See - https://www.statlect.com/fundamentals-of-statistics/normal-distribution-maximum-likelihood\n#\nlog_prob = torch.log(torch.pow( torch.sqrt(2. * sigma * np.pi) , -1)) - (normal_dist - mu)*(normal_dist - mu)*torch.pow((2. * sigma), -1)\n##--------------------------------------------------------------\n\naction = Variable( prob.data )\n\n#action=[0,]\nstate, reward, done, _ = env.step([action.data[0][0]])\n\n<ECODE> References,", "isAccepted": false, "likes": 2, "poster": "AjayTalati" }, { "contents": "How do you do a logarithm, in PyTorch? <SCODE>>>> nnlog = nn.Log()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: module 'torch.nn' has no attribute 'Log'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Well, I came back to this after a few days, and I’m still stuck. So any advice will make you a genius in my view? Here’s a post of my code as simple as I could make it as a big blob, I keep getting this error, <SCODE>File \"main_single.py\", line 174, in <module>\nvalue_loss = value_loss + advantage.pow(2)\nAttributeError: 'numpy.ndarray' object has no attribute 'pow'\n<ECODE> Any ideas what I’ve got wrong? 
Thanks a lot for your help, Best, Ajay", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "reward is probably returned from gym as a numpy object (I guess a scalar?) so I think you have to convert it?", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "What seems to help a little is, changing the code to <SCODE> for t in reversed(range(len(rewards))):\n R = torch.mul(R, args.gamma) \n R = torch.add(R, rewards[t])\n advantage = R - values[t]\n value_loss = value_loss + advantage.pow(2)\n<ECODE> Now I get the error, <SCODE> File \"main_single.py\", line 185, in <module>\n (policy_loss + 0.5 * value_loss).backward()\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/variable.py\", line 158, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/stochastic_function.py\", line 13, in _do_backward\n raise RuntimeError(\"differentiating stochastic functions requires \"\nRuntimeError: differentiating stochastic functions requires providing a reward\n<ECODE>", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "… this is padding to make this post 20 characters or more", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "andrewliao11" }, { "contents": "I never managed to get it working very well, (I tried it on non-mujoco stuff), so went back to experimenting with the discrete actions version. Do you plan on experimenting with shared RMSProp? A3C is a great tool - you can apply it to a lot of stuff - it should be really helpful to you in the future! Kind regards, Ajay", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "andrewliao11" }, { "contents": "<SCODE>action = (mu + sigma_sq.sqrt()*Variable(eps)).data\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" } ]
false
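For reference, a sketch that collects the Gaussian-policy pieces scattered through this thread: sample an action from the mu / sigma^2 heads with the reparameterised form quoted at the end, and compute the log-likelihood and entropy with the same formulas written above. mu and sigma_sq are assumed to be Variables produced by the network; everything else is illustrative.
<SCODE>
import math
import torch
from torch.autograd import Variable

def gaussian_policy_terms(mu, sigma_sq):
    eps = mu.data.clone().normal_()                        # standard normal noise
    action = (mu + sigma_sq.sqrt() * Variable(eps)).data   # sampled action, detached

    # log N(action; mu, sigma^2) and the Gaussian entropy, as in the thread
    log_prob = -0.5 * torch.log(2 * math.pi * sigma_sq) \
               - (Variable(action) - mu).pow(2) / (2 * sigma_sq)
    entropy = 0.5 * (torch.log(2 * math.pi * sigma_sq) + 1)
    return action, log_prob, entropy
<ECODE>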
DQN tutorial with no cheat
null
[ { "contents": "<SCODE>resize = T.Compose([T.ToPILImage(), T.Scale(40, interpolation=Image.CUBIC), T.ToTensor()])\n\n# This is based on the code from gym.\nscreen_width = 600\ndef get_cart_location():\n world_width = env.x_threshold * 2\n scale = screen_width / world_width\n return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART\n \n \ndef get_screen():\n screen = env.render(mode='rgb_array').transpose((2, 0, 1)) # transpose into torch order (CHW)\n # Strip off the top and bottom of the screen\n '''\n # this is the trick :\n screen = screen[:, 160:320]\n view_width = 320\n cart_location = get_cart_location()\n if cart_location < view_width // 2:\n slice_range = slice(view_width)\n elif cart_location > (screen_width - view_width // 2):\n slice_range = slice(-view_width,None)\n else:\n slice_range = slice(cart_location - view_width // 2, cart_location + view_width // 2)\n # Strip off the edges, so that we have a square image centered on a cart\n screen = screen[:, :, slice_range]\n '''\n # Convert to float, rescare, convert to torch tensor (this doesn't require a copy)\n screen = np.ascontiguousarray(screen, dtype=np.float32) / 255\n screen = torch.from_numpy(screen)\n # Resize, and add a batch dimension (BCHW)\n return resize(screen).unsqueeze(0)\n\nenv.reset()\nplt.imshow(get_screen().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')\nplt.show()\n<ECODE> … I can’t obtain any learning. I tried several different combinations of parameters, and I also try to change the structure of the network, but no way to reach any acceptable result. I there a way to make it work ?", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Haha, I take your response as a challenge! I will find a way.", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Hey, Did you find a way to make the agent learn without the trick?", "isAccepted": false, "likes": null, "poster": "arc_HITMAN" } ]
false
Resuming training with optimState
null
[ { "contents": "Is that intentional?", "isAccepted": false, "likes": null, "poster": "karandwivedi42" }, { "contents": "it’s not intentional. Just not implemented.", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
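At the time of this thread the optimizer state was simply not serialized. In later PyTorch releases the standard way to resume is to checkpoint the optimizer's state_dict() next to the model's; a sketch with a stand-in model and a placeholder filename:
<SCODE>
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# saving
torch.save({'model': model.state_dict(),
            'optim': optimizer.state_dict()}, 'checkpoint.pth')

# resuming
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optim'])
<ECODE>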
How to change the default device of GPU? device_ids[0]
null
[ { "contents": "<SCODE>if output_device is None:\noutput_device =device_ids[0]\n<ECODE> to <SCODE>if output_device is None:\noutput_device =device_ids[1]\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "lynnea718" }, { "contents": "<SCODE>CUDA_VISIBLE_DEVICES=1,2 python myscript.py\n<ECODE> so your script will only see GPUs number 1 and 2, and won’t touch the other GPUs", "isAccepted": false, "likes": 25, "poster": "fmassa" }, { "contents": "We have 8 gpu. I used the devices[1,2] ,the rest is used by others", "isAccepted": false, "likes": null, "poster": "lynnea718" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "colesbury" }, { "contents": "So in the Python script, How to choose the GPU devices. Especially, I want to change GPU devices in the training processing", "isAccepted": false, "likes": null, "poster": "cold_wind" }, { "contents": "I also want to know how to choose the GPU device in the python script.", "isAccepted": false, "likes": null, "poster": "uzeful" }, { "contents": "ex: <SCODE>with torch.cuda.device(1):\n w = torch.FloatTensor(2,3).cuda()\n # w was placed in device_1 by default.\n<ECODE> <SCODE>w = torch.FloatTensor(2,3).cuda(2)\n# w was placed in device_2\n\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "Jing" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "uzeful" }, { "contents": "<SCODE>CUDA_VISIBLE_DEVICES=3; python test.py\n<ECODE> <SCODE>import torch\nprint(torch.cuda.current_device())\n<ECODE> the above script still shows that current device is 0.", "isAccepted": false, "likes": 16, "poster": "jdhao" }, { "contents": "two things you did wrong:", "isAccepted": false, "likes": 15, "poster": "SimonW" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "That controls what devices CUDA exposes and PyTorch can’t do nothing in this regards.", "isAccepted": false, "likes": null, "poster": "SimonW" }, { "contents": "Hi, you can specify used gpu in python script as following:", "isAccepted": false, "likes": 3, "poster": "hszhao" }, { "contents": "it cannot work for me, it always use the first(ie, 0) gpu", "isAccepted": false, "likes": null, "poster": "jianzhong" }, { "contents": "Thanks, this is the easiest way to solve this problem.", "isAccepted": false, "likes": null, "poster": "kaiyuyue" }, { "contents": "It shouldn’t happen. That is a CUDA flag. Once set, PyTorch will never have access to the excluded device(s).", "isAccepted": false, "likes": null, "poster": "SimonW" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "moonlightlane" }, { "contents": "This is a very useful solution, especially you are going to run with someone else’s code without specifying the cuda id.", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "<SCODE>torch.cuda.set_device(device)\n\nSets the current device.\n\nUsage of this function is discouraged in favor of device. In most cases it’s better to use CUDA_VISIBLE_DEVICES environmental variable.\n\nParameters:\tdevice (torch.device or int) – selected device. This function is a no-op if this argument is negative.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Sun_ShiJie" } ]
false
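Two details that trip people up in this thread, shown together: the shell form must not contain a ';' (otherwise the variable is never exported to the python process), and when it is set from inside Python it has to happen before any CUDA work. The device id '3' is just an example.
<SCODE>
#   CUDA_VISIBLE_DEVICES=3 python test.py      # correct
#   CUDA_VISIBLE_DEVICES=3; python test.py     # wrong: the variable is not exported
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'       # must run before any CUDA initialization

import torch
if torch.cuda.is_available():
    x = torch.randn(2, 3).cuda()               # lands on the only visible device (physical GPU 3)
<ECODE>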
When cuda() is called, a list of modules is not moved to the GPU
null
[ { "contents": "Hi, below is my definition of network: <SCODE>class CNN_Text(nn.Module):\n \n def __init__(self, args):\n super(CNN_Text,self).__init__()\n self.args = args\n \n V = args.embed_num\n D = args.embed_dim\n C = args.class_num\n Ci = 1\n Co = args.kernel_num\n Ks = args.kernel_sizes\n\n self.embed = nn.Embedding(V, D)\n self.convs1 = [nn.Conv2d(Ci, Co, (K, D)) for K in Ks]\n\n self.dropout = nn.Dropout(args.dropout)\n self.fc1 = nn.Linear(len(Ks)*Co, C)\n<ECODE> However, when i called cuda() on the model, the modules in the list self.convs1 will not shift to the GPU, how to solve this problem, any ideas ?", "isAccepted": false, "likes": null, "poster": "xinwei_he" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "thanksm but it gives me the error <SCODE>AttributeError: 'module' object has no attribute 'ModuleList'\n<ECODE> is it a problem related to the version of pytorch?", "isAccepted": false, "likes": null, "poster": "xinwei_he" }, { "contents": "Many Thanks, it solves my problem. And Updating my pytorch makes ModuleList available.", "isAccepted": false, "likes": null, "poster": "xinwei_he" } ]
false
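The fix agreed on above, written out: wrapping the list comprehension in nn.ModuleList registers each convolution as a submodule, so cuda(), parameters() and state_dict() all see them. Constructor arguments are spelled out here instead of the poster's args object; the instantiation values are placeholders.
<SCODE>
import torch.nn as nn

class CNN_Text(nn.Module):
    def __init__(self, V, D, C, Ci, Co, Ks, dropout):
        super(CNN_Text, self).__init__()
        self.embed = nn.Embedding(V, D)
        self.convs1 = nn.ModuleList(
            [nn.Conv2d(Ci, Co, (K, D)) for K in Ks])   # was: a plain Python list
        self.dropout = nn.Dropout(dropout)
        self.fc1 = nn.Linear(len(Ks) * Co, C)

model = CNN_Text(V=1000, D=128, C=2, Ci=1, Co=100, Ks=[3, 4, 5], dropout=0.5)
# model.cuda()   # now moves the convolutions in convs1 as well
<ECODE>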
State_dict does not contain keys for conv layers in list
null
[ { "contents": "<SCODE>import torch.nn as nn\nclass Classifier_Module(nn.Module):\n\n def __init__(self,dilation_series,padding_series):\n super(Classifier_Module, self).__init__()\n\t self.conv2d_list = []\n\t for dilation,padding in zip(dilation_series,padding_series):\n\t self.conv2d_list.append(nn.Conv2d(2048,5,kernel_size=3,stride=1, padding =padding, dilation = dilation,bias = True))\n\n \n def forward(self, x):\n\t out = self.conv2d_list[0](x)\n\t for i in range(len(self.conv2d_list)-1):\n\t out = self.conv2d_list[i+1](x)+out\n return out\n\n\nclass Module1(nn.Module):\n def __init__(self):\n super(Module1, self).__init__()\n self.layer = self._make_pred_layer(Classifier_Module, [6,12,18,24],[6,12,18,24])\n\n def _make_pred_layer(self,block, dilation_series, padding_series):\n \treturn nn.Sequential(block(dilation_series,padding_series))\n\n def forward(self, x):\n x = self.layer(x)\n return x\n<ECODE> The state dictionary of the net does not contain any keys corresponding to the conv2d layers of the Classifier_Module. <SCODE>import sample\nmodel = getattr(sample,'Module1')()\nprint model # 1 does not show conv2d list\nfor keys in model.state_dict().keys():\n print keys #2 does not show con2d list\nprint model.layer._modules['0'].conv2d_list # this shows the conv2d list\n\n<ECODE> How can I fix this issue? Is there any other way to perform a similar function without writing code for each conv2d layer?", "isAccepted": false, "likes": null, "poster": "isht7" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "isht7" }, { "contents": "Yes, you can return multiple outputs form a module", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
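A small check of the nn.ModuleList approach from the answers above: once the conv layers are held in a ModuleList, their parameters show up in state_dict(). The dilation and padding values are placeholders taken from the question. <SCODE>import torch.nn as nn

class ClassifierModule(nn.Module):
    def __init__(self, dilations=(6, 12, 18, 24), paddings=(6, 12, 18, 24)):
        super(ClassifierModule, self).__init__()
        self.conv2d_list = nn.ModuleList(
            [nn.Conv2d(2048, 5, kernel_size=3, stride=1, padding=p, dilation=d, bias=True)
             for d, p in zip(dilations, paddings)]
        )

    def forward(self, x):
        out = self.conv2d_list[0](x)
        for conv in self.conv2d_list[1:]:
            out = out + conv(x)
        return out

for key in ClassifierModule().state_dict().keys():
    print(key)  # e.g. conv2d_list.0.weight, conv2d_list.0.bias, ...
<ECODE>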
How to get the newest pytorch?
null
[ { "contents": "I uninstall pytorch from anaconda and tried to compile the source code from github because I want to use the latest part “nn.init”. But things go strange. My python part cannot find torch packages, which means, when we I try to type in Python 3.6.0 | Anaconda 4.3.0 can not find torch module. I have no choice but to install it from Anaconda again and I still cannot use nn.init part. What should I do? p.s. Is there any else way to setup the weight initialization? Thanks.", "isAccepted": false, "likes": null, "poster": "Ke_Bai" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Hope to be useful to others.", "isAccepted": false, "likes": null, "poster": "Ke_Bai" } ]
false
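A short sketch of the weight-initialization feature the question is after, using the current torch.nn.init function names (in-place functions end in an underscore), plus a manual fallback that works even without nn.init. Layer sizes and the 0.02 standard deviation are placeholders. <SCODE>import torch
import torch.nn as nn
import torch.nn.init as init

layer = nn.Linear(128, 64)

# nn.init route
init.xavier_uniform_(layer.weight)
init.constant_(layer.bias, 0.0)

# Manual fallback: initialize the parameter tensors directly
with torch.no_grad():
    layer.weight.normal_(0.0, 0.02)
    layer.bias.zero_()
<ECODE>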
Linear Regression with polynomials - transformation problem
null
[ { "contents": "<SCODE>from __future__ import print_function\nfrom itertools import count\n\nimport torch\nimport torch.autograd\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\ntrain_data = torch.Tensor([\n [40, 6, 4],\n [44, 10, 4],\n [46, 12, 5],\n [48, 14, 7],\n [52, 16, 9],\n [58, 18, 12],\n [60, 22, 14],\n [68, 24, 20],\n [74, 26, 21],\n [80, 32, 24]])\ntest_data = torch.Tensor([\n [6, 4],\n [10, 5],\n [4, 8]])\n\nx_train = train_data[:,1:3]\ny_train = train_data[:,0]\n\nPOLY_DEGREE = 4\ninput_size = 2\noutput_size = 1\n\n\ndef make_features(x):\n \"\"\"Builds features i.e. a matrix with columns [x, x^2, x^3, x^4].\"\"\"\n x = x.unsqueeze(1)\n return torch.cat([x ** i for i in range(1, POLY_DEGREE+1)], 1)\n\n\n\ndef poly_desc(W, b):\n \"\"\"Creates a string description of a polynomial.\"\"\"\n result = 'y = '\n for i, w in enumerate(W):\n result += '{:+.2f} x^{} '.format(w, len(W) - i)\n result += '{:+.2f}'.format(b[0])\n return result\n\n\ndef get_batch():\n \"\"\"Builds a batch i.e. (x, f(x)) pair.\"\"\"\n x = make_features(x_train)\n\n return Variable(x), Variable(y_train)\n\n\n# Define model\nfc = torch.nn.Linear(input_size, output_size)\n\nfor batch_idx in range(1000):\n # Get data\n batch_x, batch_y = get_batch()\n\n # Reset gradients\n fc.zero_grad()\n\n # Forward pass\n output = F.smooth_l1_loss(fc(batch_x), batch_y)\n loss = output.data[0]\n\n # Backward pass\n output.backward()\n\n # Apply gradients\n for param in fc.parameters():\n param.data.add_(-0.1 * param.grad.data)\n\n # Stop criterion\n if loss < 1e-3:\n break\n\nprint('Loss: {:.6f} after {} batches'.format(loss, batch_idx))\nprint('==> Learned function:\\t' + poly_desc(fc.weight.data.view(-1), fc.bias.data))\n# print('==> Actual function:\\t' + poly_desc(W_target.view(-1), b_target))<ECODE>", "isAccepted": false, "likes": 1, "poster": "pelopo" }, { "contents": "I changed it a bit and now I don’t get any errors, but neither any values for loss or predicted values. I tried to replicate the case with sklearn and all is ok. I think I’m missing something simple, but just can’t see it. Any help would be appreciated. 
Code: <SCODE>import sklearn.linear_model as lm\nfrom sklearn.preprocessing import PolynomialFeatures\nimport torch\nimport torch.autograd\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\n\ntrain_data = torch.Tensor([\n [40, 6, 4],\n [44, 10, 4],\n [46, 12, 5],\n [48, 14, 7],\n [52, 16, 9],\n [58, 18, 12],\n [60, 22, 14],\n [68, 24, 20],\n [74, 26, 21],\n [80, 32, 24]])\ntest_data = torch.Tensor([\n [6, 4],\n [10, 5],\n [4, 8]])\n\nx_train = train_data[:,1:3]\ny_train = train_data[:,0]\n\nPOLY_DEGREE = 3\ninput_size = 2\noutput_size = 1\n\npoly = PolynomialFeatures(input_size * POLY_DEGREE, include_bias=False)\nx_train_poly = poly.fit_transform(x_train.numpy())\n\n\nclass Model(torch.nn.Module):\n\n def __init__(self):\n super(Model, self).__init__()\n self.fc = torch.nn.Linear(poly.n_output_features_, output_size)\n \n def forward(self, x):\n return self.fc(x)\n \nmodel = Model() \ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.001)\n\nlosses = [] # Added\n\nfor i in range(1000):\n optimizer.zero_grad()\n outputs = model(Variable(torch.Tensor(x_train_poly)))\n loss = criterion(outputs, Variable(y_train))\n losses.append(loss.data[0])\n loss.backward() \n optimizer.step()\n if loss.data[0] < 1e-4:\n break \n\nprint('n_iter', i)\nprint(loss.data[0])\nplt.plot(losses)\nplt.show()\n<ECODE> and below is the sklearn code that works <SCODE>regr = lm.LinearRegression()\npoly = PolynomialFeatures(4, include_bias=False)\nX_poly = poly.fit_transform(x_train.numpy())\n\nregr.fit(x_train.numpy(), y_train.numpy())\npred = regr.predict(test_data.numpy())\npred\n<ECODE> Output", "isAccepted": false, "likes": 1, "poster": "pelopo" }, { "contents": "I can’t see what’s broken, but I like this nice pedagogical example! Hope you get it working!", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Im keep getting loss:nan for every loss.data[0]", "isAccepted": false, "likes": null, "poster": "Sangwon_Jake" } ]
false
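One plausible cause of the NaN loss reported at the end of this thread is that raw degree-3/4 polynomial features have very large magnitudes, so plain SGD diverges. The sketch below standardizes the features before fitting; this is an assumption about the cause, not a confirmed fix, and the data here is random placeholder data in roughly the same range as the thread. <SCODE>import torch
import torch.nn as nn
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

x_train = torch.rand(10, 2) * 30           # placeholder inputs
y_train = torch.rand(10, 1) * 40 + 40      # placeholder targets

poly = PolynomialFeatures(3, include_bias=False)
x_poly = poly.fit_transform(x_train.numpy())
x_poly = StandardScaler().fit_transform(x_poly)  # keep feature magnitudes comparable

model = nn.Linear(x_poly.shape[1], 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x_t = torch.tensor(x_poly, dtype=torch.float32)
for _ in range(1000):
    optimizer.zero_grad()
    loss = criterion(model(x_t), y_train)
    loss.backward()
    optimizer.step()
print(loss.item())
<ECODE>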
‘Criterion’ object has no attribute ‘cuda’!
null
[ { "contents": "Hello there, I wrote below customized Cost Function for my project: My codes to benefit from these written cost are: Thanks!", "isAccepted": false, "likes": 1, "poster": "mderakhshani" }, { "contents": "calling torch.FloatTensor will always put your tensor in the CPU. You probably want to get the device in which the input tensor is by doing something like pred_out.new(), which will generate a new tensor in the same device as pred_out. you are using np.ndarray in parts of your code. While it’s possible to do it, it will avoid parts of your code to run in the GPU Performing for loops for the computations that you want will probably be slower in the GPU than in the CPU. You’d probably want to wrap some inner loops using only batched mathematical operations in tensors so that it can be efficiently computed in the GPU, so that instead of accepting one bounding box, you accept a bunch of them at the same time.", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "<SCODE>output = output.cpu()\nloss = criterion(output, target)\n<ECODE> and that should work out of the box.", "isAccepted": false, "likes": 1, "poster": "fmassa" } ]
false
Word Language Model training question
null
[ { "contents": "Given the sequence of characters: “a” “b” “c” “d” … “z” and bptt equal to 3 and ignoring batching for simplicity: first sequence: src=“a”,“b”,“c”; trg=“d” second sequence: src=“d”, “e”, “f”; trg=“g” Perhaps, I am wrong but doesn’t it mean that an amount of data proportional to the value of bptt isn’t being used during training (in the example above sequences src=“b” “c” “d”, trg=“e” and src=“c” “d” “e”, trg=“f” aren’t in the training set)?", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "b,c,d target=“e” is covered by carrying the hidden state forward between sequences.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "In other words, the parameter bptt seems to be tweaking two things: how many steps back to include in the rnn computational graph. how many examples to take into account for estimate perplexity on. but perhaps I am understanding things in a wrong way…", "isAccepted": false, "likes": null, "poster": "emanjavacas" } ]
false
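A small sketch of the bptt slicing pattern this thread is about: the target is the input shifted by one position, so every position still gets a prediction, and the hidden state is carried (and detached) across chunks. Variable names and the toy corpus are illustrative, not the exact example code. <SCODE>import torch

def get_batch(source, i, bptt):
    # source: (total_len, batch) tensor of token ids
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i + seq_len]            # inputs:  tokens t .. t+seq_len-1
    target = source[i + 1:i + 1 + seq_len]  # targets: tokens t+1 .. t+seq_len
    return data, target.reshape(-1)

source = torch.randint(0, 100, (26, 4))     # toy corpus: 26 steps, batch of 4
hidden = None                               # carried across chunks during training
for i in range(0, source.size(0) - 1, 3):   # bptt = 3
    data, target = get_batch(source, i, 3)
    # out, hidden = model(data, hidden); then detach hidden before the next chunk
<ECODE>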
Pytorch feature requests
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "I think feature requests can be sent to the github issues", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" } ]
false
Problem migrating code from Torch to PyTorch
null
[ { "contents": "I have a code I wrote in Torch which I am migrating to PyTorch. Its a regression-based learning problem. In Torch, after defining my network, I define concatenate my inputs along the 2nd axis as follows <SCODE> inputs = torch.cat({inputs[1], inputs[2], inputs[3], inputs[4], \n inputs[5], inputs[6], inputs[7], inputs[8],\n inputs[9]}, 2)\n<ECODE> And the network is as defined: <SCODE> (lstm1): LSTM(9, 9)\n (lstm2): LSTM(9, 6)\n (lstm3): LSTM(6, 6)\n (drop): Dropout (p = 0.3)\n (fc1): Linear (6 -> 3)\n (fc2): Linear (3 -> 3)\n<ECODE> In torch, I can forward call and backward call okay as the inputs tensor id of batch size [100, 9] torch.*Tensor. I construct the network the same way in PyTorch and concatenate my nine inputs along the 1st axis as <SCODE>\tinputs = torch.cat((base_in, base_out, left_in, \n\t\t\t\t\t\tleft_out, right_in, right_out, \n\t\t\t\t\t\tz, pitch, yaw), 1)\n<ECODE> I define my model in PyTorch as <SCODE>class LSTMModel(nn.Module):\n def __init__(self, nFeatures, nCls, nHidden, nineq=12, neq=0, eps=1e-4, \n noutputs=3,numLayers=1):\n super(LSTMModel, self).__init__()\n \n self.nFeatures = nFeatures\n self.nHidden = nHidden\n self.nCls = nCls\n self.nineq = nineq\n self.neq = neq\n self.eps = eps\n\n self.cost = nn.MSELoss(size_average=False)\n self.noutputs = noutputs\n # self.neunet = nn.Sequential()\n self.lstm1 = nn.LSTM(nHidden[0],nHidden[0],num_layers=numLayers)\n self.lstm2 = nn.LSTM(nHidden[0],nHidden[1],num_layers=numLayers)\n self.lstm3 = nn.LSTM(nHidden[1],nHidden[2],num_layers=numLayers)\n self.drop = nn.Dropout(0.3)\n self.fc1 = nn.Linear(nHidden[2], noutputs)\n self.fc2 = nn.Linear(noutputs, noutputs)\n\n self.M = Variable(torch.tril(torch.ones(nCls, nCls)))\n self.L = Parameter(torch.tril(torch.rand(nCls, nCls)))\n self.G = Parameter(torch.Tensor(nineq/2, nCls).uniform_(-1,1))\n\n \"\"\"\n define constraints, z_i, and slack variables, s_i,\n for six valves. 
z_i and c_i are learnable parameters\n \"\"\"\n self.z0 = Parameter(torch.zeros(nCls))\n self.s0 = Parameter(torch.ones(nineq/2))\n self.z0p = Parameter(torch.zeros(nCls))\n self.s0p = Parameter(torch.ones(nineq/2))\n\n def forward(self, x):\n nBatch = x.size(0)\n # LSTM-dropout-LSTM-dropout-lstm-dropout-FC-QP-FC\n x = x.view(nBatch, -1)\n x = self.lstm1(x)\n x = self.drop(x)\n x = self.lstm2(x)\n x = self.drop(x)\n x = self.lstm3(x)\n x = self.drop(x)\n x = self.fc1(x)\n<ECODE> But my calls to forward gives me runtime errors like so <SCODE>RuntimeError: matrices expected, got 1D, 2D tensors at /home/robotec/Documents/NNs/locuclab-pytorch/torch/lib/TH/generic/THTensorMath.c:1224\n<ECODE> I wonder what I must be doing wrong.", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE> neunet = nn.Sequential()\n local rnn \n rnn = nn.FastLSTM(9, 9, 5)\n neunet:add(rnn) \n rnn = nn.FastLSTM(9, 6, 5)\n neunet:add(rnn) \n rnn = nn.FastLSTM(6, 3, 5)\n neunet:add(rnn) \n neunet:add(nn.Dropout(0.3))\n neunet:add(nn.Linear(3, 3, bias)) \n neunet = nn.Sequencer(neunet, 3)\n<ECODE> <SCODE>class LSTMModel(nn.Module):\n def __init__(self, noutputs=3,numLayers=3):\n super(LSTMModel, self).__init__()\n \n self.nHidden = [9, 6, 6]\n self.cost = nn.MSELoss(size_average=False)\n self.noutputs = noutputs\n self.num_layers = numLayers\n \n self.lstm0 = nn.LSTM(self.nHidden[0], self.nHidden[0], self.num_layers, \n batch_first= True, dropout=0.3)\n self.lstm1 = nn.LSTM(self.nHidden[1], self.nHidden[1], self.num_layers, \n batch_first= True, dropout=0.3)\n self.lstm2 = nn.LSTM(self.nHidden[2], self.nHidden[2], self.num_layers, \n batch_first= True, dropout=0.3)\n self.fc = nn.Linear(self.nHidden[2], noutputs)\n\n def forward(self, x):\n\n h0 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[0])) \n c0 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[0]))\n \n #Now propagate the rnn \n #rnn layer 1\n out0, _ = self.lstm0(x, (h0, c0)) \n \n #second rnn layer\n h1 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[1])) \n c1 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[1]))\n out1, _ = self.lstm1(out0, (h1, c1)) \n \n #third rnn layer\n h2 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[2])) \n c2 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[2]))\n out2, _ = self.lstm2(out1, (h2, c2)) \n \n #hidden layer of last time step\n out = self.fc(out2[:, -1, :]) \n return out\n<ECODE> So I forward a DoubleTensor defined as <SCODE>RuntimeError: matrices expected, got 1D, 2D tensors at /home/robotec/Documents/NNs/locuclab-pytorch/torch/lib/TH/generic/THTensorMath.c:1224\n<ECODE>", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Yes, it is a time series. Could you give a minimal working example of how I can correctly form a PyTorch model and a time series input tensor based off the snippet I wrote above? 
Sorry to be a pain.", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "If x is 50000 by 9, then what’s your time dimension, what’s your batch dimension, and what’s your feature dimension?", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "mini-batch dimension is 100, feature dimension is 3, time dimension is 5.", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "Then you need to pass in a tensor of dimension 5 x 100 x 3", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sahar_khosravi" } ]
false
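A minimal sketch of the shape fix suggested at the end of the thread: nn.LSTM (without batch_first) expects input of shape (seq_len, batch, features), here 5 x 100 x 3. The hidden size and number of layers are placeholders. <SCODE>import torch
import torch.nn as nn

seq_len, batch, n_feat, n_hidden, n_layers = 5, 100, 3, 6, 1
lstm = nn.LSTM(input_size=n_feat, hidden_size=n_hidden, num_layers=n_layers)

x = torch.randn(seq_len, batch, n_feat)      # (time, batch, feature)
h0 = torch.zeros(n_layers, batch, n_hidden)
c0 = torch.zeros(n_layers, batch, n_hidden)

out, (hn, cn) = lstm(x, (h0, c0))
print(out.shape)   # torch.Size([5, 100, 6]) - one output per time step
<ECODE>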
Clarification - Using backward() on non-scalars
null
[ { "contents": "Hey, Would anyone be able to point me in the direction of an example if one exists? If not, say I had the following super simple example: x = Variable(torch.FloatTensor([[2,1]]), requires_grad=True) M = Variable(torch.FloatTensor([[1,2],[3,4]])) y = torch.mm(x,M)", "isAccepted": false, "likes": 7, "poster": "Mattie" }, { "contents": "<SCODE>x = Variable(torch.FloatTensor([[2,1]]), requires_grad=True)\nM = Variable(torch.FloatTensor([[1,2],[3,4]]))\ny = torch.mm(x, M)\njacobian = torch.FloatTensor(2, 2).zero_()\ny.backward(torch.FloatTensor([[1, 0]]), retain_variables=True)\njacobian[:,0] = x.grad.data\nx.grad.data.zero_()\ny.backward(torch.FloatTensor([[0, 1]]), retain_variables=True)\njacobian[:,1] = x.grad.data\n<ECODE> torch.autograd.backward([y], [torch.FloatTensor([[1, 0]])], retain_variables=True)", "isAccepted": false, "likes": 14, "poster": "colesbury" }, { "contents": "Fab, that’s a great help. Thanks for your time.", "isAccepted": false, "likes": 1, "poster": "Mattie" }, { "contents": "I am a bit confused by this. In this example, isn’t that the “loss” is y? Is that correct? Thanks a lot!", "isAccepted": false, "likes": null, "poster": "linlin" }, { "contents": "is used instead of Thank you in advance!", "isAccepted": false, "likes": null, "poster": "jsm" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jdhao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jsm" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "linlin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yottabytt" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jdhao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "saan77" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Jolyon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Huimin_ZENG" } ]
false
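A sketch of the same row-by-row Jacobian computation in the current API, using torch.autograd.grad with a one-hot grad_outputs vector instead of manually zeroing x.grad; the 1x2 example matches the thread. <SCODE>import torch

x = torch.tensor([[2.0, 1.0]], requires_grad=True)
M = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
y = x @ M                                   # shape (1, 2)

jacobian = torch.zeros(y.numel(), x.numel())
for i in range(y.numel()):
    grad_out = torch.zeros_like(y).view(-1)
    grad_out[i] = 1.0                       # select output component i
    (gi,) = torch.autograd.grad(y, x, grad_outputs=grad_out.view_as(y),
                                retain_graph=True)
    jacobian[i] = gi.view(-1)               # row i holds dy_i / dx
print(jacobian)
<ECODE>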
Cache datasets pre-processing
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Is there maybe something like a cache proxy which just wraps a custom python class and caches each calls?", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Yep. That’s what I was looking for!", "isAccepted": false, "likes": 1, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "EKami" }, { "contents": "joblib does an amazing job in caching to disk. This removes a lot of hassle and works great for me: <SCODE>from joblib import Memory\ncachedir = '/data/cache/'\nmemory = Memory(cachedir, verbose=0, compress=True)\n\n@memory.cache\ndef preprocess_file(file, ..params ):\n ... load data and do time consuming pre=processing\n sample['data'] = data\n sample['labels'] = labels\n return sample\n\n\nclass MyData(Dataset):\n def __init__(self, data_path, ... parameters):\n self.files = list of files\n\n def __getitem__(self, index):\n sample = preprocess_file(self.files[index], parameters....) \n return sample\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "gonzales2010" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Amirhj" } ]
false
Sharding model across GPUs
null
[ { "contents": "On a similar topic, give a GAN setting with a generator and a discriminator and two GPUs, what is the recommendation to speed-up the computation, given the dependency between discriminator and generator?", "isAccepted": false, "likes": 1, "poster": "claudiomartella" }, { "contents": "<SCODE>class MyModel(nn.Module):\n def __init__(self, split_gpus):\n self.large_submodule1 = ...\n self.large_submodule2 = ...\n\n self.split_gpus = split_gpus\n if split_gpus:\n self.large_submodule1.cuda(0)\n self.large_submodule1.cuda(1)\n\n def forward(self, x):\n x = self.large_submodule1(x)\n if split_gpus:\n x = x.cuda(1) # P2P GPU transfer\n return self.large_submodule2(x)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "ajdroid" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "claudiomartella" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "@claudiomartella", "isAccepted": false, "likes": null, "poster": "ajdroid" } ]
false
Customized RNN cell which can accept a PackedSequence
null
[ { "contents": "Hi, I saw the packsequence class which presumably can speed up the RNN computation by ignoring the paddings. My question is, if I want to use my own cell, how can I build a RNN module which can accept Packsequence. I’ve looked up the code, but it seems nontrivial. I feel like it should be easily down by a wrapper. Is there any example use case or any suggestions?", "isAccepted": false, "likes": 1, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
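One workable wrapper for feeding a PackedSequence to a hand-written cell is simply to unpack it, loop over time, and mask the padded steps. This is a sketch under that assumption, not the internal variable_recurrent_factory route mentioned in the answer; sizes and lengths are placeholders. <SCODE>import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

cell = nn.LSTMCell(input_size=8, hidden_size=16)

def run_cell_on_packed(packed, cell, hidden_size):
    x, lengths = pad_packed_sequence(packed)          # x: (T, B, F)
    T, B, _ = x.shape
    h = x.new_zeros(B, hidden_size)
    c = x.new_zeros(B, hidden_size)
    outputs = []
    for t in range(T):
        h_new, c_new = cell(x[t], (h, c))
        mask = (t < lengths).float().unsqueeze(1).to(x.device)
        h = mask * h_new + (1 - mask) * h             # freeze finished sequences
        c = mask * c_new + (1 - mask) * c
        outputs.append(h)
    return torch.stack(outputs), (h, c)

seqs = [torch.randn(5, 8), torch.randn(3, 8)]          # two sequences, lengths 5 and 3
padded = nn.utils.rnn.pad_sequence(seqs)               # (5, 2, 8)
packed = pack_padded_sequence(padded, lengths=[5, 3])
out, (h, c) = run_cell_on_packed(packed, cell, 16)
<ECODE>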
Out of memory when optimizer.zero_grad() is called
null
[ { "contents": "I’m asking because I wonder why .zero_grad() would cause memory out. From my understanding this op is just to set param.grad.data to zero, why extra memory would be required?", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "That’s strange. I can’t think of why that would happen.", "isAccepted": false, "likes": null, "poster": "colesbury" }, { "contents": "Is there a small script you can give to reproduce this? I am happy to look into what’s happening.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I may need a while to reduce the code as small as possible. The error trace back is as follows: <SCODE>THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory\nWarning: out of memory\nWarning: out of memory\n\nepoch = 0, loss =2.78738046, ER_train = 100.00, ER_batch = 100.00, time = 2.90s(2.90|0.00), progress = 0.00%, time remained = 1781.43h\nepoch = 0, loss =2.77562714, ER_train = 98.44, ER_batch = 96.88, time = 0.73s(0.73|0.00), progress = 0.00%, time remained = 1983.91h\nepoch = 0, loss =2.74634695, ER_train = 97.40, ER_batch = 95.31, time = 1.40s(1.40|0.00), progress = 0.04%, time remained = 5.93h\n\nWarning: out of memory\n\nTraceback (most recent call last):\n File \"DIC_train_pytorch.py\", line 397, in <module>\n optimizer.zero_grad()\n File \"/home/David/App/anaconda3/lib/python3.5/site-packages/torch/optim/optimizer.py\", line 136, in zero_grad\n param.grad.data.zero_()\nRuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu:35\n<ECODE> In the above trace back logs, “Warning: out of memory” is printed by my code to warn me that an out of memory exception (exactly the exception as shown in the last line of the above log) is catched. This exception would be raised by pytorch when input train data batch is big. After catching the exception, I’ll reduce the batch size and try the training procedure again. The corresponding code snippet is as <SCODE> optimizer.zero_grad()\n try:\n if device >= 0:\n score = model(Variable(torch.from_numpy(X)).cuda(device))\n else:\n score = model(Variable(torch.from_numpy(X)))\n except RuntimeError as e:\n if e.args[0].startswith('cuda runtime error (2) : out of memory'):\n print('Warning: out of memory')\n cached_data.extend(split_train_data([X, Y]))\n continue\n else:\n raise e<ECODE>", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "<SCODE>CUDA_LAUNCH_BLOCKING=1 python script.py\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "The track back is the same, though.", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "We’re probably missing a check somewhere so the error pops up only there. You’re likely working under a super heavy memory pressure, and the model doesn’t fit. What’s the last operation you do (loss fn + last op before)? Did you try reducing the batch size?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I have the same error. Tried running the model with CUDA_LAUNCH_BLOCKING=1 but still the error pops up at optimizer.zero_grad(). Can anyone help me out? I can post the model and training snippet if needed. Thanks", "isAccepted": false, "likes": null, "poster": "AdityaAS" } ]
false
Embedding returns an error that I don’t understand
null
[ { "contents": "<SCODE>a = torch.Tensor(10)\na = Variable(a, requires_grad=True)\nembedding = nn.Embedding(10, 100)\noutput = embedding(a)\n<ECODE> <SCODE>---------------------------------------------------------------------------\nAssertionError Traceback (most recent call last)\n<ipython-input-23-11b80614bb73> in <module>()\n 2 a = Variable(a, requires_grad=True)\n 3 embedding = nn.Embedding(10, 100)\n----> 4 output = embedding(a)\n\n/home/zeng/code/tensorfold/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 200 \n 201 def __call__(self, *input, **kwargs):\n--> 202 result = self.forward(*input, **kwargs)\n 203 for hook in self._forward_hooks.values():\n 204 hook_result = hook(self, input, result)\n\n/home/zeng/code/tensorfold/lib/python2.7/site-packages/torch/nn/modules/sparse.pyc in forward(self, input)\n 92 padding_idx, self.max_norm, self.norm_type,\n 93 self.scale_grad_by_freq, self.sparse\n---> 94 )(input, self.weight)\n 95 \n 96 def __repr__(self):\n\n/home/zeng/code/tensorfold/lib/python2.7/site-packages/torch/nn/_functions/thnn/sparse.pyc in forward(self, indices, weight)\n 42 def forward(self, indices, weight):\n 43 assert indices.dim() <= 2\n---> 44 assert not self.needs_input_grad[0], \"Embedding doesn't \" \\\n 45 \"compute the gradient w.r.t. the indices\"\n 46 \n\nAssertionError: Embedding doesn't compute the gradient w.r.t. the indices\n<ECODE>", "isAccepted": false, "likes": null, "poster": "zeng" }, { "contents": "Hi,", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Then, how to get some of the embeddings updated and some fixed?", "isAccepted": false, "likes": null, "poster": "cdjhz" }, { "contents": "If you want to achieve that, I think you will need to set the gradients corresponding to these embeddings to 0 manually after performing the backward pass.", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Janinanu" }, { "contents": "Hi,", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Thank you very much! After some adjustments, my code now works.", "isAccepted": false, "likes": null, "poster": "Janinanu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "SHAILESH_KUMAR" } ]
false
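A sketch of the suggestion above for keeping some embedding rows fixed while the rest train: zero out the gradient rows of the frozen indices after backward and before the optimizer step. The frozen indices and sizes are placeholders; note also that the input indices must be a LongTensor. <SCODE>import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)
frozen = torch.tensor([0, 3, 7])               # rows that should not be updated

opt = torch.optim.SGD(emb.parameters(), lr=0.1)
idx = torch.tensor([[1, 3, 5, 7]])             # indices must be integer (LongTensor)
loss = emb(idx).sum()

opt.zero_grad()
loss.backward()
emb.weight.grad[frozen] = 0.0                  # cancel updates for the frozen rows
opt.step()
<ECODE>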
Difference between methods in torch.nn and torch.nn.functional
null
[ { "contents": "", "isAccepted": false, "likes": 10, "poster": "zuoxingdong" }, { "contents": "", "isAccepted": false, "likes": 17, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "stared" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "How about the difference between torch.nn and torch.autograd.Function? thank you for answering my newbie question.", "isAccepted": false, "likes": null, "poster": "An_Tran" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ptrblck" } ]
false
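A tiny illustration of the equivalence discussed above: the nn.Module versions hold state (parameters and configuration), while torch.nn.functional is the stateless call underneath. <SCODE>import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 3)

# Stateless op: module form and functional form give the same result.
assert torch.equal(nn.ReLU()(x), F.relu(x))

# Stateful op: the module owns the weights; the functional form takes them explicitly.
fc = nn.Linear(3, 4)
assert torch.equal(fc(x), F.linear(x, fc.weight, fc.bias))
<ECODE>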
How to implement running mean logic in PyTorch?
null
[ { "contents": "I’m trying to understand the philosophy of pytorch, and want to make sure what’s the right way to implement running mean logic like in batch normalization with pytorch.", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "Yes. you are on the right track.", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
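A minimal sketch of the batch-norm-style running-mean pattern the question asks about: keep the statistic in a registered buffer (so it is saved in state_dict and moved with the module) and update it in place only during training. The momentum value and feature count are placeholders. <SCODE>import torch
import torch.nn as nn

class RunningMean(nn.Module):
    def __init__(self, num_features, momentum=0.1):
        super(RunningMean, self).__init__()
        self.momentum = momentum
        # Buffers are part of state_dict() but are not optimized parameters.
        self.register_buffer("running_mean", torch.zeros(num_features))

    def forward(self, x):
        if self.training:
            batch_mean = x.mean(dim=0).detach()
            self.running_mean.mul_(1 - self.momentum).add_(self.momentum * batch_mean)
        return x - self.running_mean

m = RunningMean(4)
m.train()
_ = m(torch.randn(8, 4))
print(m.running_mean)
<ECODE>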
Exploiting sparsity in batch operations?
null
[ { "contents": "If I have a batch of sparse input matrices, and a dense batch of matrices : <SCODE>> mat1 = torch.zeros(4, 3, 5)\n> mat1[1][1][1] = 1; mat1[2][2][2] = 1\n> mat2 = torch.rand(4, 5, 6)\n> torch.bmm(mat1, mat2)\n<ECODE> Exploiting sparsity is quite an optimisation. If this isn’t available yet, which users might I liaise with to help out?", "isAccepted": false, "likes": 1, "poster": "cjmcmurtrie" }, { "contents": "Also, torch.smm(mat1_i, mat2) is also implemented, for sparse * dense -> sparse operations.", "isAccepted": false, "likes": 1, "poster": "ebetica" }, { "contents": "<SCODE>import torch\nimport torch.autograd as ann\n\n\nmat2 = ann.Variable(torch.rand(4, 5, 6), requires_grad=True)\nmats = [ann.Variable(torch.zeros(4, 3, 5), requires_grad=True) for _ in range(3)]\nfor i in range(len(mats)):\n result = torch.bmm(mats[i], mat2)\n print result.size()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "cjmcmurtrie" }, { "contents": "Something like this: <SCODE>import torch\n\nx = torch.rand(5,6)\n# Sparse matrix of (0, 1) = 1; (2, 1) = 2, (3, 4) = 3\nsparse = torch.sparse.FloatTensor(\n torch.LongTensor([[0, 2, 3], [1, 1, 4]]), # Indicies\n torch.FloatTensor([1, 2, 3])) # values\n\nprint(x)\nprint(sparse.to_dense())\nprint(torch.mm(sparse, x))\n# This won't actually save space or compute, since it's so dense,\n# but it will be a sparse tensor representation.\nprint(torch.smm(sparse, x))\n<ECODE> Simply construct a list of your sparse tensors, and loop over them to do the batch mm.", "isAccepted": false, "likes": 1, "poster": "ebetica" }, { "contents": "Thanks once more! One last thing: <SCODE>sparse = torch.sparse.FloatTensor(\n torch.LongTensor([[0, 2, 3], [1, 1, 4]]), # Indicies\n torch.FloatTensor([1, 2, 3])) # values\n<ECODE> Seems strange to me as you don’t define the sizes of the sparse matrix - it seems to arbitrarily pick the indices of the corner value as the size. What is the logic here?", "isAccepted": false, "likes": null, "poster": "cjmcmurtrie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ebetica" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cjmcmurtrie" }, { "contents": "By the way, is anyone working on wrapping cusparse?", "isAccepted": false, "likes": 1, "poster": "Ilya_Kostrikov" }, { "contents": "martin raison is working on it, are you interested in collaborating?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Yes, sure! I’m not sure whether I’m familiar with pytorch internals enough to be able help. But I can try anyway.", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "I’d like to collaborate on writing a wrapper for cusparse, if you folks still need a hand.", "isAccepted": false, "likes": null, "poster": "siddharthachandra" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks, looks good. Let me check it out.", "isAccepted": false, "likes": null, "poster": "siddharthachandra" } ]
false
What’s the equivalent of Theano’s inc_subtensor()?
null
[ { "contents": "To change the values of a subset elements of a tensor, in theano we have inc_subtensor(), what is the equivalence in pytorch?", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Yes, I noticed there is torch.index_select() function. However this function returns a new tensor not a view, so if I do <SCODE>t2 = torch.index_select(t1, axis, index)\nt2 += 1.0\n<ECODE>", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "you can do standard numpy-like indexing: Try this: <SCODE>t1 = torch.randn(10, 5)\nt2 = t1[:, 3]\nt2.fill_(0)\nprint(t1)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "I think index_add_ is what you are looking for.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "Thanks, that’s exactly what I’m looking for.", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "david-leon" } ]
false
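A small sketch contrasting the two answers above: index_add_ is the closest analogue of Theano's inc_subtensor, while basic slicing returns a view that in-place operations modify directly. <SCODE>import torch

t = torch.zeros(4, 3)

# inc_subtensor-style accumulation into selected rows
idx = torch.tensor([0, 2])
t.index_add_(0, idx, torch.ones(2, 3))   # rows 0 and 2 get +1

# basic slicing returns a view, so in-place ops hit the original tensor
row = t[1]
row += 5.0
print(t)
<ECODE>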
Easy way to load a localization dataset
vision
[ { "contents": "Hi, I’d like to train model with class label and bounding box label at the same time, (e.g. CUB200 2011 dataset, ImageNet 12 CLSLOC dataset) and may preprocess with cropping/scaling (and they should be applied to both image and bounding box label.) Thanks!", "isAccepted": false, "likes": null, "poster": "Yunhun_Jang" }, { "contents": "Hey,", "isAccepted": false, "likes": null, "poster": "amdegroot" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Yunhun_Jang" } ]
false
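A minimal sketch of a custom Dataset that returns the image together with its class and bounding box and applies the same geometric transform (here just a resize, with the box rescaled accordingly) to both. The (path, class_id, box) annotation format is hypothetical, not a real dataset layout. <SCODE>import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class LocalizationDataset(Dataset):
    def __init__(self, samples, out_size=224):
        # samples: list of (image_path, class_id, [x1, y1, x2, y2]) - hypothetical format
        self.samples = samples
        self.out_size = out_size

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, label, box = self.samples[i]
        img = Image.open(path).convert("RGB")
        w, h = img.size
        img = img.resize((self.out_size, self.out_size))
        # apply the same scaling to the bounding box as to the image
        sx, sy = self.out_size / float(w), self.out_size / float(h)
        box = torch.tensor([box[0] * sx, box[1] * sy, box[2] * sx, box[3] * sy])
        img = torch.from_numpy(np.array(img)).permute(2, 0, 1).float() / 255.0
        return img, label, box
<ECODE>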
How to train a model with multithreading
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "xwgeng" }, { "contents": "You can look at the hogwild training example we have:", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "xwgeng" }, { "contents": "You can’t share data loaders among processes, it’s going to be very tricky (because of how Python’s multiprocessing works).", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thanks for the reply!", "isAccepted": false, "likes": null, "poster": "xwgeng" }, { "contents": "By passing the model as an argument to say the training function, is this not creating a separate local copy of the model for each process? I ask because I followed the Hogwild! example with my own network/data and have found that while each process tends to have the same optimization behavior, there is variance in the weights at the end of training, at each epoch that I have a checkpoint for. I do expect variance in things such as loss over time because the processes should be loading different random subsets of the same data, but it doesn’t make sense to me why the weights would be different if calling model.share_memory() should be placing the model object in a shared memory segment where there should only be one copy of weights that are manipulated. The only explanation that I can think of as to why my weights/biases would vary across processes would be if the processes are actually only using a local copy of the model…but perhaps I am seriously misunderstanding something here.", "isAccepted": false, "likes": 1, "poster": "williamdjones" }, { "contents": "to revise this slightly…I believe I understand what the “problem” actually may be. Since it is more likely that the processes finish each epoch at different times rather than all simultaneously at a point in time, and each process is calling optimizer.step() asynchronously then saving the parameters at the end of an epoch…it would make sense that the values of the weights would be slightly different due to this. So at the beginning of the training, all processes should have the same initial weights…but over the course of training it is likely that the processes will not actually have copies of the model that are identical to those of the other processes, but the parameters will generally be more or less similar.", "isAccepted": false, "likes": null, "poster": "williamdjones" } ]
false
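A bare-bones sketch of the Hogwild pattern referenced above: call share_memory() on the model once so its parameters live in shared memory, then let each worker process run its own optimizer and data loading. The training-loop body and worker count are placeholders. <SCODE>import torch
import torch.nn as nn
import torch.multiprocessing as mp

def train(model):
    # each process builds its own optimizer and its own data loading
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):
        opt.zero_grad()
        loss = model(torch.randn(8, 10)).sum()   # placeholder batch and loss
        loss.backward()
        opt.step()                               # updates hit the shared weights

if __name__ == "__main__":
    model = nn.Linear(10, 1)
    model.share_memory()                         # parameters are placed in shared memory
    procs = [mp.Process(target=train, args=(model,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
<ECODE>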
How to add an ignore_label in loss layers?
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "Chenqi_Qi" }, { "contents": "For example: <SCODE>import torch\nimport torch.nn as nn\n\nnClasses = 10\nignore_classes = torch.LongTensor([4, 7]) # ignore class 5 and 8\nweights = torch.ones(nClasses)\nweights[ignore_classes] = 0.0\nloss = nn.NLLLoss2d(weights)\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "smth" } ]
false
How to retrieve hidden states for all time steps in LSTM or BiLSTM?
null
[ { "contents": "According to the docs of nn.LSTM outputs: <SCODE>output (seq_len, batch, hidden_size * num_directions): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.\nh_n (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t=seq_len\nc_n (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t=seq_len\n<ECODE> If I want to get the hidden states for all t which means t =1, 2, …, seq_len, How can I do that? One approach is looping through an LSTM cell for all the words of a sentence and get the hidden state, cell state and output. I am doing a language modeling task using LSTM where I need the hidden state representations of all the words of a sentence. Any help would be appreciated!", "isAccepted": false, "likes": 9, "poster": "wasiahmad" }, { "contents": "To get individual hidden states, you have to indeed loop over for each individual timestep and collect the hidden states.", "isAccepted": false, "likes": 6, "poster": "smth" }, { "contents": "Isn’t output containing all the hidden states (at least h)?", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "Thanks smth, I was also thinking about that. Just one more thing to confirm, in case if I loop over the individual timestep, will it be inefficient in terms of time complexity and will it change the loss computation for the entire network? Though i think it will not.", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "No, the output only contains output of the last time step.", "isAccepted": false, "likes": 2, "poster": "wasiahmad" }, { "contents": "It won’t change loss computation, but it will likely be much slower.", "isAccepted": false, "likes": 1, "poster": "ngimel" }, { "contents": "no, output contains the hidden states for each time step, but only for the last layer in a stacked model, or the only layer in a single layer model.", "isAccepted": false, "likes": 10, "poster": "sinhasam" }, { "contents": "I’m providing my code. Can you suggest my how should I go on? Although, it is in torch, I guess the idea should be similar.", "isAccepted": false, "likes": null, "poster": "Abhishek_Arya" }, { "contents": "print(lstm:getHiddenState(1)) – not sure what function will this be, but if you print out lstm and recent_hidden, it will on the last row of lstm, the values should be the same as recent_hidden. Hope it helps", "isAccepted": false, "likes": 1, "poster": "chunzhu" }, { "contents": "RNNs are inherently sequential. They are auto-regressive, meaning the input for timestep t contains the output for timestep (t-1), meaning you have to first calculae the output for timestap (t-1). This is one reason why ‘attention is all you need’ is quite interesting.", "isAccepted": false, "likes": null, "poster": "hughperkins" }, { "contents": "That is assuming that the LSTM has only one layer. It isn’t the case for a stacked LSTM.", "isAccepted": false, "likes": 1, "poster": "sinhasam" }, { "contents": "I experiment with using a for loop to collect every hidden state. The loss doesn’t change, but runtime is ~10x more (with a sequence length of 30). 
<SCODE> embedding = self.drop(self.emb(inputs))\n \n b_len = embedding.size(0)\n outputs = Variable(torch.cuda.FloatTensor(b_len, embedding.size(1), self.hidden_size))\n for i in range(b_len):\n \n cur_emb = embedding[i:i+1, :]#.view(1, embedding.size(1), embedding.size(2))\n o, hidden = self.gru_1(cur_emb, hidden)\n outputs[i, :, :] = o\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Shawn-Shan" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "KaiyangZhou" }, { "contents": "Why not try GRU, output all the cells of LSTM will cost you huge time with step-by-step LSTMCell", "isAccepted": false, "likes": 1, "poster": "gmyofustc" }, { "contents": "GRU and LSTM are essentially the same here. If you want to collect GRU hidden states, you have to loop through it as well. Only difference is GRU has 3/4 of LSTM parameters", "isAccepted": false, "likes": null, "poster": "Shawn-Shan" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "gmyofustc" }, { "contents": "Isn’t the last time step the last hidden state too? Sorry I’m a noob who likes to ask a lot of questions.", "isAccepted": false, "likes": 1, "poster": "rchavezj" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Armin_A" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Salvador_Medina" } ]
false
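A short check of the point settled in the thread: output already holds the top-layer hidden state for every time step, while h_n and c_n only hold the final step (for every layer). Only per-step states of the lower layers of a stacked model require looping step by step. <SCODE>import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=6, num_layers=2)
x = torch.randn(7, 3, 4)                    # (seq_len, batch, features)

output, (h_n, c_n) = lstm(x)
print(output.shape)                         # (7, 3, 6): top-layer h_t for every t
print(h_n.shape)                            # (2, 3, 6): last time step, both layers
assert torch.allclose(output[-1], h_n[-1])  # last step of output == top layer of h_n
<ECODE>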
How can I share variables between LSTM and LSTMCell
null
[ { "contents": "Since I am doing machine translation, so I want to train my network with LSTM module. But in testing phase, I want to copy the variable in LSTM to LSTMCell and roll-out the sequence by feeding the last lstm output to the next input.", "isAccepted": false, "likes": null, "poster": "Ching_Kit_Wong" }, { "contents": "For this, you dont need to convert LSTM to LSTMCell from my understanding, you just need to make the timesteps of LSTM to be 1 and repeatedly use it.", "isAccepted": false, "likes": 2, "poster": "smth" } ]
false
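Two ways to realize the suggestion above, sketched with placeholder sizes: either reuse the trained single-layer nn.LSTM one time step at a time during decoding, or copy its parameters into an nn.LSTMCell for the roll-out loop. <SCODE>import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1)   # trained model (placeholder)

# Option A: run the LSTM one step at a time at test time
x_t = torch.randn(1, 1, 8)                                   # (seq_len=1, batch=1, feat)
out, hidden = lstm(x_t)                                      # feed `hidden` back next step

# Option B: copy the weights into an LSTMCell
cell = nn.LSTMCell(8, 16)
cell.weight_ih.data.copy_(lstm.weight_ih_l0.data)
cell.weight_hh.data.copy_(lstm.weight_hh_l0.data)
cell.bias_ih.data.copy_(lstm.bias_ih_l0.data)
cell.bias_hh.data.copy_(lstm.bias_hh_l0.data)
h, c = cell(torch.randn(1, 8), (torch.zeros(1, 16), torch.zeros(1, 16)))
<ECODE>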
Quick Detach() Question
null
[ { "contents": "If I call detach() on a variable after calling backward(retain_variables=True) on a variable further along the graph, will that free the buffers of the graph that led to the detach()'d variable? Example: net1 and net2 are modules, x is an input variable. I want to retain the state of net2 for processing later down the line, but I just want to accumulate gradients in net1 for the time being, no need to retain the graph. out1 = net1(x) out2 = net2(out1) loss = some_loss_function(out2) loss.backward(retain_variables=True) out1.detach_() Will this clear the buffers of net1? Thanks, Andy", "isAccepted": false, "likes": 1, "poster": "ajbrock" }, { "contents": "Hi, <SCODE>out1 = net1(x)\n\nin2 = Variable(out1.data, requires_grad=True)\nout2 = net2(in2)\n\nloss = some_loss_function(out2)\n\nloss.backward(retain_variables=True)\nout1.backward(in2.grad.data)\n\n# You can call loss.backward again here\n# But not out1.backward !\n<ECODE>", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Hmm, assuming detach() doesn’t free the graph, your way makes a lot of sense, and I think I could wrap that up into this chunk of code neatly. Thanks! Edit: a slightly more elegant way to do this (still following your same idea) might be to copy out1 to in2 with the out-of-place detach: in2 = out1.detach() I’ll try out both and report back.", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "If in2 is detached, you can’t call loss.backward at all, since net2(in2) are not taken into account by the computation of the gradient. When a variable is detached, the backward computations will not visit the branches that start from this variable (all the operations done on it).", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "<SCODE>import torch\nx = torch.randn(10, 5, requires_grad = True)\nw = x * x\na = w*w\n\na.sum().backward()\nprint(x.grad.sum())\n\nw = x * x\na = w * w\nw.detach_()\n\na.sum().backward()\nprint(x.grad.sum())\n<ECODE> in this case, the output is: <SCODE>tensor(27.7943)\ntensor(55.5887)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "111319" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Thanks for replay. But why sometimes inplace operator cause a error in backward and sometimes not?", "isAccepted": false, "likes": null, "poster": "111319" }, { "contents": "Which error do you sometimes see and sometimes not?", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Ou, I mean there are some inplace operators will cause an error in backward, and some other operators will not. Back to the original question, is there a method that can detach a sub graph after a forward?", "isAccepted": false, "likes": null, "poster": "111319" }, { "contents": "No there isn’t I’m afraid. We don’t allow modifying the graph.", "isAccepted": false, "likes": 1, "poster": "albanD" } ]
false
Problem compiling pytorch in WITH_DISTRIBUTED=1 mode
null
[ { "contents": "Hello everyone, I am using a debian 8.6.0, cmake 3.7.0, and python-3.5.2 and I’m really very lost … THE ERROR: <SCODE>/usr/include/string.h:66:14: note: ‘memset’\n extern void *memset (void *__s, int __c, size_t __n) __THROW __nonnull ((1));\n ^\nCMakeFiles/THD.dir/build.make:398: recipe for target 'CMakeFiles/THD.dir/master_worker/master/THDStorage.cpp.o' failed\nmake[2]: *** [CMakeFiles/THD.dir/master_worker/master/THDStorage.cpp.o] Error 1\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp: In member function ‘virtual void thd::DataChannelMPI::send(const thd::Scalar&, int)’:\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:329:52: error: invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive]\n MPI_UINT8_T, dst_rank, 0, MPI_COMM_WORLD);\n ^\nIn file included from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.hpp:5:0,\n from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:1:\n/usr/lib/openmpi/include/mpi.h:1384:20: note: initializing argument 1 of ‘int MPI_Send(void*, int, MPI_Datatype, int, int, MPI_Comm)’\n OMPI_DECLSPEC int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest,\n ^\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp: In member function ‘virtual void thd::DataChannelMPI::send(thpp::Tensor&, int)’:\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:340:52: error: invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive]\n MPI_UINT8_T, dst_rank, 0, MPI_COMM_WORLD);\n ^\nIn file included from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.hpp:5:0,\n from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:1:\n/usr/lib/openmpi/include/mpi.h:1384:20: note: initializing argument 1 of ‘int MPI_Send(void*, int, MPI_Datatype, int, int, MPI_Comm)’\n OMPI_DECLSPEC int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest,\n ^\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp: In member function ‘virtual THDGroup thd::DataChannelMPI::newGroup(const std::vector<int>&)’:\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:476:56: error: invalid conversion from ‘const int*’ to ‘int*’ [-fpermissive]\n MPI_Group_incl(world_group, ranks.size(), ranks.data(), &ranks_group);\n ^\nIn file included from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.hpp:5:0,\n from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:1:\n/usr/lib/openmpi/include/mpi.h:1269:20: note: initializing argument 3 of ‘int MPI_Group_incl(MPI_Group, int, int*, ompi_group_t**)’\n OMPI_DECLSPEC int MPI_Group_incl(MPI_Group group, int n, int *ranks,\n ^\n/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:479:66: error: ‘MPI_Comm_create_group’ was not declared in this scope\n MPI_Comm_create_group(MPI_COMM_WORLD, ranks_group, 0, &new_comm);\n ^\nCMakeFiles/THD.dir/build.make:422: recipe for target 'CMakeFiles/THD.dir/master_worker/master/THDTensor.cpp.o' failed\nmake[2]: *** [CMakeFiles/THD.dir/master_worker/master/THDTensor.cpp.o] Error 1\nCMakeFiles/THD.dir/build.make:158: recipe for target 'CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o' failed\nmake[2]: *** [CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o] Error 1\nCMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/THD.dir/all' failed\nmake[1]: *** [CMakeFiles/THD.dir/all] Error 2\nMakefile:127: recipe for target 'all' 
failed\nmake: *** [all] Error 2\n<ECODE> I follow this instructions: Using Python 3 (Python 3.4) Install build dependencies Essentials <SCODE>sudo apt-get update\nsudo apt-get install git build-essential\nccache\n\nsudo apt-get install ccache\nexport CC=\"ccache gcc\"\nexport CXX=\"ccache g++\"\n<ECODE> CMake The default CMake version in Debian’s repositories is too old. Ubuntu 16.10 has version 3.5.2 and it works fine. <SCODE>wget https://cmake.org/files/v3.7/cmake-3.7.0.tar.gz\ntar xf cmake-3.7.0.tar.gz\nrm cmake-3.7.0.tar.gz\ncd cmake-3.7.0\n./bootstrap\nmake\nsudo make install\ncd ..\n<ECODE> Install THD dependencies Asio C++ Library <SCODE>sudo apt-get install libasio-dev\n<ECODE> MPI implementation <SCODE>sudo apt-get install mpich\n<ECODE> Set up Python <SCODE>sudo apt-get install python3-dev python3-pip\n<ECODE> Set up virtual environment <SCODE>sudo pip3 install virtualenv\nvirtualenv venv\nsource venv/bin/activate\n<ECODE> Install PyTorch <SCODE>export LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH:$HOME/pytorch-dist/torch/lib\"\n\ngit clone https://github.com/apaszke/pytorch-dist/\ncd pytorch-dist\npip3 install -r requirements.txt\nWITH_DISTRIBUTED=1 python3 setup.py build_deps\nWITH_DISTRIBUTED=1 python3 setup.py develop\n<ECODE> Thanks a lot for your help. Dani", "isAccepted": false, "likes": null, "poster": "dvaldes" }, { "contents": "Thanks, we’ll look into it. However, note that the distributed package is still in pre-alpha and will be likely slow or can break in weird ways. We’ll notify everyone once it’s ready for use.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for your answer, Adam . Exactly, which are the requirements for pytorch to work? I mean, versions of the operating system, gcc, cmake, cuda, etc. etc. We are a little lost, but we think that the problem in compilation comes from cuda… Thanks a lot!", "isAccepted": false, "likes": null, "poster": "dvaldes" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "The problem is because of MPI. Installing it might help.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you guys for all your work. If I may, I would like to remind you also of users like me where in most of the cases we don’t have root privileges on the machines we operate upon (e.g. clusters). So, whenever the pytorch.distribute package rolls out, I would like to request, if possible to also make it available for easy installation via anaconda or pip which might take care also of the dependencies. Thanks again, and keep up the good work! Cheers.", "isAccepted": false, "likes": null, "poster": "kirk86" } ]
false
Trying to Save and Load resnet50 net
null
[ { "contents": "ERROR PyTorch 0.1.11", "isAccepted": false, "likes": null, "poster": "nikmentenson" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I edited the code above to include all of the model, including training code. Thanks again", "isAccepted": false, "likes": null, "poster": "nikmentenson" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Hi again, I managed to figure it out, thanks for the help!", "isAccepted": false, "likes": null, "poster": "nikmentenson" } ]
false
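Since the resolved fix is not shown in the thread, here is the usual state_dict save/load pattern for a torchvision resnet50; the file name and class count are placeholders. <SCODE>import torch
from torchvision import models

model = models.resnet50(num_classes=10)      # must match the architecture used for training

# save only the parameters, not the whole pickled module
torch.save(model.state_dict(), "resnet50_checkpoint.pth")

# later / elsewhere: rebuild the same architecture and load the weights
model2 = models.resnet50(num_classes=10)
model2.load_state_dict(torch.load("resnet50_checkpoint.pth"))
model2.eval()
<ECODE>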
Simple LSTM example
null
[ { "contents": "Hello I am trying to do a simple test, I want to show it a number at t=0 and then I want it to output that number k step in the future. Meanwhile the network is going to be shown zeros. But I am getting an error when I am doing backward. I am not sure how to read the error message. Here is the code I wrote: <SCODE>import argparse\nimport gym\nimport numpy as np\nfrom itertools import count\nfrom collections import namedtuple\nimport os \n\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\nimport torchvision.transforms as T\nimport cv2\nimport pickle\nimport glob\nimport time\nimport subprocess\nfrom collections import namedtuple\nimport resource\nimport math\n\nclass Policy(nn.Module):\n def __init__(self):\n super(Policy, self).__init__()\n self.fc1 = nn.Linear(5,5)\n self.lstm = nn.LSTMCell(5, 2)\n self.fc2 = nn.Linear(2,1)\n\n def forward(self, x, hidden):\n y = self.fc1(x)\n hx,cx = self.lstm(y,hidden)\n y = self.fc2(hx)\n return y, hx,cx\n\nmodel = Policy()\noptimizer = optim.Adam(model.parameters(),lr=1)\n\nstep = 10 \n\nfor i in range(100):\n yhat = Variable(torch.zeros(step,1))\n target = Variable(torch.zeros(step,1))\n target[-1,0] = 1\n cx = Variable(torch.zeros(1,2))\n hx = Variable(torch.zeros(1,2))\n hidden= [hx,cx]\n\n for j in range(step):\n x = Variable(torch.zeros(1,5))\n if j is 0:\n x += 1\n y, hx,cx = model(x,hidden)\n print (hx.data.numpy())\n hidden = (hx,cx)\n yhat[j] = y.clone()\n\n print ('done - Hoping the last value should be zero')\n \n #learning \n optimizer.zero_grad()\n error = ((yhat-target)*(yhat-target)).mean()\n error.backward()\n optimizer.step()\n<ECODE> Here is the error I get, <SCODE>RuntimeError: matrices expected, got 1D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensorMath.c:1224\n<ECODE> I am sure I am just using something like a silly person.", "isAccepted": false, "likes": 1, "poster": "jtremblay" }, { "contents": "Can you show us a full stack trace? 
Something has an invalid size, but I don’t know where.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>done - the last output should be one\nTraceback (most recent call last):\n File \"/home/jtremblay/code/Personal-git/dqn/simpleLstm.py\", line 84, in <module>\n error.backward()\n File \"/home/jtremblay/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/home/jtremblay/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.py\", line 22, in backward\n grad_input = torch.mm(grad_output, weight)\nRuntimeError: matrices expected, got 1D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensorMath.c:1224\n[Finished in 0.7s with exit code 1]\n[cmd: ['/home/jtremblay/anaconda2/bin/python', '-u', '/home/jtremblay/code/Personal-git/dqn/simpleLstm.py']]\n[dir: /home/jtremblay/code/Personal-git/dqn]\n[path: /home/jtremblay/anaconda/bin]\n<ECODE> Here is the full stack, sorry I should have added the whole thing.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "What version are you using?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>>>> import torch\n>>> torch.__version__\n'0.1.10+ac9245a'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "I ran the code on my machine and it worked. I only removed the cv2 import, rest of it was exactly same. Output: Version: <SCODE>>>> torch.__version__\n'0.1.10+16a133e'\n<ECODE> OUTPUT: <SCODE>[[ 0.22190067 0.113309 ]]\n[[ 0.22374019 0.17195135]]\n[[ 0.24251971 0.20971343]]\n[[ 0.25022674 0.23256381]]\n[[ 0.25297049 0.24633361]]\n[[ 0.25373745 0.2547116 ]]\n[[ 0.25378832 0.25985831]]\n[[ 0.25362667 0.26304504]]\n[[ 0.25343812 0.26503012]]\n[[ 0.2532804 0.26627228]]\ndone - Hoping the last value should be zero\n[[ 7.61594176e-01 -6.59305393e-19]]\n[[ 9.63655114e-01 -1.15941839e-06]]\n[[ 9.94877398e-01 -7.77833122e-07]]\n[[ 9.99219239e-01 -7.20439346e-07]]\n[[ 9.99815702e-01 -7.12471092e-07]]\n[[ 9.99897778e-01 -7.11375492e-07]]\n[[ 9.99909043e-01 -7.11224914e-07]]\n[[ 9.99910653e-01 -7.11203825e-07]]\n[[ 9.99910831e-01 -7.11200983e-07]]\n[[ 9.99910891e-01 -7.11200926e-07]]\ndone - Hoping the last value should be zero\n[[ 0.76159418 -0. ]]\n[[ 9.64020252e-01 -4.48575378e-12]]\n[[ 9.95050907e-01 -2.20972351e-12]]\n[[ 9.99326706e-01 -1.97817370e-12]]\n[[ 9.99906898e-01 -1.94826468e-12]]\n[[ 9.99985516e-01 -1.94424338e-12]]\n[[ 9.99996126e-01 -1.94369932e-12]]\n[[ 9.99997556e-01 -1.94362603e-12]]\n[[ 9.99997735e-01 -1.94361671e-12]]\n[[ 9.99997795e-01 -1.94361302e-12]]\ndone - Hoping the last value should be zero\n[[ 0.76159418 -0. ]]\n[[ 9.64026868e-01 -3.94646712e-17]]\n[[ 9.95054364e-01 -1.57264918e-17]]\n[[ 9.99329090e-01 -1.36518715e-17]]\n[[ 9.99909043e-01 -1.33884757e-17]]\n[[ 9.99987543e-01 -1.33531386e-17]]\n[[ 9.99998152e-01 -1.33483426e-17]]\n[[ 9.99999583e-01 -1.33477065e-17]]\n[[ 9.99999762e-01 -1.33476039e-17]]\n[[ 9.99999821e-01 -1.33476039e-17]]\ndone - Hoping the last value should be zero\n[[ 0.76159418 0. 
]]\n[[ 9.64027464e-01 -7.70289075e-22]]\n[[ 9.95054662e-01 -2.58641162e-22]]\n[[ 9.99329209e-01 -2.18775464e-22]]\n[[ 9.99909163e-01 -2.13788550e-22]]\n[[ 9.99987662e-01 -2.13120985e-22]]\n[[ 9.99998271e-01 -2.13030714e-22]]\n[[ 9.99999702e-01 -2.13018117e-22]]\n[[ 9.99999881e-01 -2.13016502e-22]]\n[[ 9.99999940e-01 -2.13016072e-22]]\ndone - Hoping the last value should be zero\n[[ 0.76159418 0. ]]\n[[ 9.64027584e-01 -3.21531385e-26]]\n[[ 9.95054722e-01 -9.34462056e-27]]\n[[ 9.99329269e-01 -7.73195391e-27]]\n[[ 9.99909222e-01 -7.53276268e-27]]\n[[ 9.99987721e-01 -7.50614017e-27]]\n[[ 9.99998331e-01 -7.50253251e-27]]\n[[ 9.99999762e-01 -7.50207491e-27]]\n[[ 9.99999940e-01 -7.50200250e-27]]\n[[ 1.00000000e+00 -7.50200250e-27]]\ndone - Hoping the last value should be zero\n[[ 0.76159418 0. ]]\n[[ 9.64027584e-01 -2.75070681e-30]]\n[[ 9.95054722e-01 -7.05951502e-31]]\n[[ 9.99329329e-01 -5.73105970e-31]]\n[[ 9.99909222e-01 -5.56879258e-31]]\n[[ 9.99987721e-01 -5.54712352e-31]]\n[[ 9.99998331e-01 -5.54420359e-31]]\n[[ 9.99999762e-01 -5.54382320e-31]]\n[[ 9.99999940e-01 -5.54375925e-31]]\n[[ 1.00000000e+00 -5.54375925e-31]]\ndone - Hoping the last value should be zero<ECODE>", "isAccepted": false, "likes": null, "poster": "pranav" }, { "contents": "This is interesting, it is still not working on my end even without cv2. Are you running pytorch with cuda 8.0?", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "Yes, I have CUDA 8.0 and Python 2.7", "isAccepted": false, "likes": null, "poster": "pranav" }, { "contents": "Why can’t I run this script for checking the version? Here’s what I got: <SCODE>>>> torch.__version__\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'module' object has no attribute '__version__'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "I tried on my laptop and with a clean pytorch install (using conda) and I still get the error with the grads. It is weird, I am trying to make sense out of the problem and it seemed that the last layer (the fully connected) wants to do a backward with size two. But the output is of size one. The backward function in the linear class if I print the following variables: <SCODE> def backward(self, grad_output):\n print (grad_output)\n input, weight, bias = self.saved_tensors\n\n grad_input = grad_weight = grad_bias = None\n print(self.needs_input_grad)\n \n if self.needs_input_grad[0]:\n print ('back')\n # print (self)\n print (grad_output,weight)\n<ECODE> I get <SCODE>back\n(\n1.00000e-02 *\n -6.6986\n[torch.FloatTensor of size 1]\n, \n 0.6127 0.6033\n[torch.FloatTensor of size 1x2]\n)\n<ECODE> Where the first variable is equal to the loss calculated by error = (yhat-target).pow(2).mean(), I am confused as to why the backward pass is expecting something of size 1x2. 
Here is the code <SCODE>import numpy as np\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\nimport torchvision.transforms as T\n\nclass Policy(nn.Module):\n def __init__(self):\n super(Policy, self).__init__()\n self.fc1 = nn.Linear(5,5)\n self.lstm = nn.LSTMCell(5, 2)\n self.fc2 = nn.Linear(2,1)\n\n def forward(self, x, hidden):\n y = self.fc1(x)\n hx,cx = self.lstm(y,hidden)\n y = self.fc2(hx)\n\n return y, hx,cx\n\n\nmodel = Policy()\noptimizer = optim.Adam(model.parameters())\n\nstep = 1\n\nfor i in range(100):\n yhat = Variable(torch.zeros(step,1))\n target = Variable(torch.zeros(step,1))\n target[-1,0] = 1\n cx = Variable(torch.zeros(1,2))\n hx = Variable(torch.zeros(1,2))\n hidden= [hx,cx]\n\n for j in range(step):\n x = Variable(torch.zeros(1,5))\n if j is 0:\n x += 1\n x = Variable(x.data)\n y, hx,cx = model(x,hidden)\n # print (hx.data.numpy())\n hidden = (hx,cx)\n print ('y',y)\n print ('hidden',hidden)\n yhat[j] = y\n\n print ('done - the last output should be one')\n #learning \n optimizer.zero_grad()\n error = (yhat-target).pow(2).mean()\n print (error)\n error.backward()\n optimizer.step()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "<SCODE>>>> print 'yhat[j] size: ', yhat[j].size(), 'y size: ', y.size()\nyhat[j] size: torch.Size([1]) y size: torch.Size([1, 1])\n<ECODE> so this is causing issues when backpropping. If you just change that line to: <SCODE>yhat[j] = y[0]\n<ECODE> everything works properly. Full code (same as yours, except for that one line): <SCODE>import numpy as np\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\nimport torchvision.transforms as T\n\nclass Policy(nn.Module):\n def __init__(self):\n super(Policy, self).__init__()\n self.fc1 = nn.Linear(5,5)\n self.lstm = nn.LSTMCell(5, 2)\n self.fc2 = nn.Linear(2,1)\n\n def forward(self, x, hidden):\n y = self.fc1(x)\n hx,cx = self.lstm(y,hidden)\n y = self.fc2(hx)\n\n return y, hx,cx\n\n\nmodel = Policy()\noptimizer = optim.Adam(model.parameters())\n\nstep = 1\n\nfor i in range(100):\n yhat = Variable(torch.zeros(step,1))\n target = Variable(torch.zeros(step,1))\n target[-1,0] = 1\n cx = Variable(torch.zeros(1,2))\n hx = Variable(torch.zeros(1,2))\n hidden= [hx,cx]\n\n for j in range(step):\n x = Variable(torch.zeros(1,5))\n if j is 0:\n x += 1\n x = Variable(x.data)\n y, hx,cx = model(x,hidden)\n # print (hx.data.numpy())\n hidden = (hx,cx)\n print ('y',y)\n print ('hidden',hidden)\n yhat[j] = y[0]\n\n print ('done - the last output should be one')\n #learning \n optimizer.zero_grad()\n error = (yhat-target).pow(2).mean()\n print (error)\n error.backward()\n optimizer.step()\n<ECODE> Gives the following output: <SCODE>('y', Variable containing:\n-0.7611\n[torch.FloatTensor of size 1x1]\n)\n('hidden', (Variable containing:\n-0.1136 0.1655\n[torch.FloatTensor of size 1x2]\n, Variable containing:\n-0.2690 0.3413\n[torch.FloatTensor of size 1x2]\n))\ndone - the last output should be one\nVariable containing:\n 3.1013\n[torch.FloatTensor of size 1]\n\n('y', Variable containing:\n-0.7580\n[torch.FloatTensor of size 1x1]\n)\n('hidden', (Variable containing:\n-0.1126 0.1623\n[torch.FloatTensor of size 1x2]\n, Variable containing:\n-0.2672 0.3351\n[torch.FloatTensor of size 1x2]\n))\ndone - the last 
output should be one\nVariable containing:\n 3.0906\n[torch.FloatTensor of size 1]\n\n('y', Variable containing:\n-0.7549\n[torch.FloatTensor of size 1x1]\n)\n('hidden', (Variable containing:\n-0.1115 0.1591\n[torch.FloatTensor of size 1x2]\n, Variable containing:\n-0.2654 0.3289\n[torch.FloatTensor of size 1x2]\n))\ndone - the last output should be one\nVariable containing:\n 3.0798\n[torch.FloatTensor of size 1]\n\n....\n\n<ECODE> I trained it for large number of iterations and the loss converged.", "isAccepted": false, "likes": 3, "poster": "pranav" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jtremblay" } ]
false
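The fix in the thread above boils down to a shape mismatch: y comes out of the model as a (1, 1) tensor while the slot yhat[j] it is copied into is a (1,) slice, and the inconsistency only surfaces at backward time. A minimal sketch of the shape check that catches this early (plain tensors, names made up for illustration):
<SCODE>import torch

yhat = torch.zeros(1, 1)    # buffer collecting per-step outputs
y = torch.randn(1, 1)       # model output for one step

# yhat[0] is a (1,) slice while y is (1, 1); copying y in as-is is what
# broke the backward pass in the thread above.
print(yhat[0].size(), y.size())

# Making the shapes explicit avoids the mismatch:
yhat[0] = y[0]              # take the (1,) row
yhat[0] = y.view(-1)        # or flatten to (1,)
<ECODE>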
Debugging DataParallel, no speedup and uneven memory allocation
null
[ { "contents": "With one GPU and a batch size of 14 an epoch on my data set takes about 24 minutes. With 2 GPUs and a batch size of 28 it’s still taking 24 minutes per epoch. Any suggestions on what might be going wrong? Does the batch normalization layer try to normalize across both GPUs and thus add large amounts of extra memory traffic? Please say it doesn’t. Thanks. Top shows 2 CPUs saturated: <SCODE>Tasks: 255 total, 1 running, 254 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 16.3 us, 2.5 sy, 0.1 ni, 81.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 65885928 total, 40001592 free, 11878640 used, 14005696 buff/cache\nKiB Swap: 67017724 total, 67017724 free, 0 used. 52840116 avail Mem \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 4622 mmacy 20 0 49.225g 5.809g 977604 S 200.0 9.2 111:32.30 work/vnet.base.\n<ECODE> The memory allocation on the two GPUs is also uneven. If they’re both doing the same operations with the same batch size, why is GPU1 using 1/3rd more memory than GPU0? <SCODE>+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 367.57 Driver Version: 367.57 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 TITAN X (Pascal) Off | 0000:01:00.0 On | N/A |\n| 51% 82C P2 74W / 250W | 7906MiB / 12186MiB | 99% Default |\n+-------------------------------+----------------------+----------------------+\n| 1 TITAN X (Pascal) Off | 0000:02:00.0 Off | N/A |\n| 47% 78C P2 107W / 250W | 10326MiB / 12189MiB | 95% Default |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| 0 1086 G /usr/lib/xorg/Xorg 105MiB |\n| 0 8469 C work/vnet.base.20170316_0434 7797MiB |\n| 1 8469 C work/vnet.base.20170316_0434 10323MiB |\n+-----------------------------------------------------------------------------+\n\n<ECODE> I also see that parallel_apply in data_parallel relies on python threading, which isn’t very worthwhile given how much of the code has to run under the GIL. The only way to get any sort of reasonable parallelism while using regular python GIL protected code is to run separate python processes. Are other people actually seeing a speedup from DataParallel? I think I’m probably only seeing one thread make progress at a time.", "isAccepted": false, "likes": 1, "poster": "mattmacy" }, { "contents": "No, it doesn’t. I can’t understand what’s the problem. Are you expecting a larger speedup? You’ve doubled the amount of computing power along with the input size, so the time staying constant is a great result. Maybe you have some conditional branching in your model? our multi-GPU code is nearly as fast as it can be. You still need the GIL to execute most of the autograd operations so it doesn’t matter if we don’t use Python threads. 
In our CNN benchmarks we have perf very similar to that of Caffe2, and faster than Lua Torch, which has very good multi-GPU implementation.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Usually “data parallel” means data operations run in parallel, but here data parallel only means that the forward passes, the fast part, have any parallel component. That’s just not very useful - as larger batches can reduce the convergence rate. So I’m using twice the power, generating twice the heat and am getting no real benefit. The use of python makes parallelism unduly hard but one could run the autograd pass in separate processes just like hogwild, but without sharing the model weights by having each process broadcast the updates from the cost function’s backward pass. Each process would average in the updates from other GPUs.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "Yes, and the operations run in parallel indeed. Both in forward and in backward. I’m not sure how is that not useful and what’s the problem. Data parallelism is a well known term used to describe weak scaling of these models, and this is exactly what happens in here. It’s very useful, because in some cases you need batch sizes so large, that they don’t fit on a single GPU. The behaviour is exactly what’s wanted in a lot of situations. It doesn’t make it hard, I really don’t understand what’s bothering you. You can do hogwild training, even on GPUs if you wan. No one is stopping you. We even have a hogwild example in the examples repo.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "as adam pointed out, DataParallel is very very well defined in Deep Learning. We split the mini-batches over multiple GPUs, and accumulate the gradients at the end from all the GPUs before doing the optimization step. Getting a linear speedup when doubling the batch size is the best case scenario, and you are hitting that. If there was a confusion in terminology, i hope it’s clarified now.", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ngimel" }, { "contents": "@ngimel in terms of weak-scaling, this seems appropriate right? We cannot expect any better speedup than this theorerically.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thank you for your time.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "DataParallel also distributes backward pass, it is hidden in autograd. DataParallel has to broadcast and reduce all the parameters, so parallelization efficiency decreases when you computation time is small and you have a lot of parameters.", "isAccepted": false, "likes": 1, "poster": "ngimel" }, { "contents": "in the backward pass of DataParallel, we reduce the weights from GPU2 onto GPU1. 
Our DataParallel algorithm is roughly like this: in forward: scatter mini-batch to GPU1, GPU2 replicate model on GPU2 (it is already on GPU1) model_gpu1(input_gpu1), model_gpu2(input_gpu2) (this step is parallel_apply) gather output mini-batch from GPU1, GPU2 onto GPU1 in backward: scatter grad_output and input parallel_apply model’s backward pass reduce GPU2 replica’s gradients onto GPU1 model \nNow there is only a single model again with accumulated gradients from GPU1 and GPU2\n gather the grad_input Maybe your situation is similar?", "isAccepted": false, "likes": 11, "poster": "smth" }, { "contents": "Is it necessary to replicate all of the gradients? Couldn’t you just replicate the output of the backward pass of just the loss function and then average the results? There are no fully connected layers. I guess 3d convolutions with >= 128 channels have an exorbitant number of parameters. I may have made an error in going up to 512 when I only meant to go up to 256, so at least you’ve prompted me to take a closer look at the model. Thanks again.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "Thanks.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "mattmacy" }, { "contents": "sweet, that’s great news.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Soumith and Adam, I am having a great time exploring PyTorch! Thanks for the awesome library. I am trying to saturate a 64-core/256-thread CPU in addition to the GPUs. Any pointers on how I can extend Data_parallel.py to create 3 scatters on GPU0, GPU1, and CPU(0-255)?", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "So for DNN’s there is no speedup expected, if the model parameters are large?", "isAccepted": false, "likes": null, "poster": "RicherMans" }, { "contents": "<SCODE>+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 375.66 Driver Version: 375.66 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 GeForce GTX 980 Ti Off | 0000:06:00.0 Off | N/A |\n| 0% 51C P2 167W / 300W | 6070MiB / 6078MiB | 73% Default |\n+-------------------------------+----------------------+----------------------+\n| 1 GeForce GTX 980 Ti Off | 0000:07:00.0 On | N/A |\n| 0% 45C P2 93W / 300W | 1021MiB / 6075MiB | 35% Default |\n+-------------------------------+----------------------+----------------------+\n<ECODE> I have tried putting the linear layers outside of DataParallel to no avail - machine ran out of memory. I understand that the computations on GPU 1 require more memory than those on GPU 2, but I was expecting more memory to be used on GPU 2. Am I at the maximum capacity / performance? Is there anything I can do with this seq-to-seq model to train more batches at a time or shorten training time.", "isAccepted": false, "likes": null, "poster": "bottanski" } ]
false
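The scatter / replicate / parallel_apply / gather steps described in the thread above are also exposed as standalone functions under torch.nn.parallel, so the data-parallel forward can be spelled out by hand for debugging. A rough sketch, assuming two visible GPUs; this illustrates the idea rather than the exact internals of nn.DataParallel:
<SCODE>import torch
import torch.nn as nn
from torch.nn.parallel import replicate, scatter, parallel_apply, gather

device_ids = [0, 1]
model = nn.Linear(512, 10).cuda(device_ids[0])
batch = torch.randn(28, 512).cuda(device_ids[0])

inputs = scatter(batch, device_ids)                         # split the mini-batch per GPU
replicas = replicate(model, device_ids)                     # copy the model to every GPU
outputs = parallel_apply(replicas, [(x,) for x in inputs])  # forward passes in parallel
result = gather(outputs, device_ids[0])                     # collect on the first GPU
print(result.size())                                        # torch.Size([28, 10])
<ECODE>
When result later feeds a loss, the backward reduction onto the first GPU described in the thread happens through autograd without extra code.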
Compute and report gradient norm of a hidden variable
null
[ { "contents": "Thanks a lot! EDIT: reformulated my question.", "isAccepted": false, "likes": null, "poster": "dima" }, { "contents": "Thank you. It works for parameters. As far as I understand, the gradient for non-leaf nodes is deallocated when it is not needed.", "isAccepted": false, "likes": null, "poster": "dima" }, { "contents": "Veril, unfortunately not: <SCODE> x[0].requires_grad = True\nRuntimeError: you can only change requires_grad flags of leaf variables.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dima" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "It’s not deallocated, it just hasn’t been allocated. We’ve changed the strategy for allocating gradients in the recent release.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "All optimizers I know about require gradients wrt parameters which are always leaf nodes. So, it shouldn’t be a problem for implementing an optimizer. In my case I need those gradients only for debugging purposes. For those are interested, I ended up doing like this: <SCODE>class LazyContainer(object):\n def __call__(self, g):\n self.grad_norm = g.norm(2)\ncontainer = LazyContainer()\nh.register_hook(container)\n# later\nprint(container.grad_norm)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "dima" }, { "contents": "you can also do like this: <SCODE>A = Variable(torch.ones(2), requires_grad = True)\nB = A*2 \nB.retain_grad()\nC = B.norm()\nC.backward()\nprint B.grad\n<ECODE>", "isAccepted": false, "likes": null, "poster": "xylcbd" } ]
false
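For debugging it is usually enough to log the gradient norms of all parameters after backward, and retain_grad() (mentioned above) covers non-leaf activations; a small self-contained sketch with a placeholder model:
<SCODE>import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))
x = torch.randn(4, 10)

h = model[0](x)          # hidden activation (non-leaf)
h.retain_grad()          # keep its gradient after backward
out = model[2](model[1](h)).sum()
out.backward()

print('hidden grad norm:', h.grad.norm().item())
for name, p in model.named_parameters():
    print(name, 'grad norm:', p.grad.norm().item())
<ECODE>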
Permute elements of a tensor along a dimension
null
[ { "contents": "Hi, I am trying to permute the feature map dimension in a tensor. As a very simplified case, One possible way I found was to do a index_select followed by cat. But, in case of a larger tensor, a lot of intermediate tensors will have to be created. Is there a better way to achieve this?", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "OK, so this is what I tried: <SCODE>>>> n = np.arange(480).reshape((5,4,4,6))\n>>> a = torch.from_numpy(n)\n>>> perm = torch.LongTensor([0,2,1,3])\n>>> a[:, perm, :, :]\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.\n<ECODE> Shouldn’t perm be a torch.LongTensor? Am I missing something here?", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Getting the same error even when I remove the trailing :, :.", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "I can probably do a dimension permute to bring the required dimension to dim0 and do the permutation and dimension permute back to original order. Seems like a hacky way to do it though.", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "Ah yes, you’re right. We should have full indexing implemented soon.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "harpone" }, { "contents": "This answer is a bit late but I stumbled upon this thread while trying to create a permutation matrix from permutation indexes so I’ll just share what I found works. <SCODE>permutation_indices = ...\nsize = permutation_indices.size(0)\npermutation_matrix = torch.zeros(size, size)\npermutation_matrix[torch.arange(size).long(), permutation_indices] = 1\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "tetratrio" }, { "contents": "Not quite what I (or the OP) was looking for but thanks for the effort!", "isAccepted": false, "likes": null, "poster": "harpone" }, { "contents": "Oh yea thats an even simpler approach. Why didnt think of that. Thanks!", "isAccepted": false, "likes": null, "poster": "tetratrio" }, { "contents": "Yeah as above, permutations can be achieved via matrix multiplication. [a b] * [0 1; 1 0] => [b a], for example.", "isAccepted": false, "likes": 2, "poster": "Tim_Oleskiw" }, { "contents": "I do like this:", "isAccepted": false, "likes": null, "poster": "cuixing158_1" }, { "contents": "Regarding how to permute vectors in PyTorch: there seems to be a function pytorch.permute(), but I can’t find any documentation for it, and when I try it doesn’t seem to work as I might expect (seems to be a no-op).", "isAccepted": false, "likes": null, "poster": "Daniel_Povey" }, { "contents": "<SCODE>x = torch.randn(1, 2, 3)\nprint(x.shape)\n> torch.Size([1, 2, 3])\nx = x.permute(1, 2, 0)\n> torch.Size([2, 3, 1])\n<ECODE> print(x.shape)", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "Did you want something like this? 
<SCODE>>>> import torch\n>>> x = torch.randn(100, 57, 2)\n>>> idx = torch.randint(57, size=(100, 57))\n>>> permutated_x = x.gather(dim=1, index=idx.unsqueeze(-1).expand(x.shape))\n>>> torch.all(x[0][idx[0]] == permutated_x[0])\ntensor(True)\n>>> torch.all(x[28][idx[28]] == permutated_x[28])\ntensor(True)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Yuheng_Zhi" } ]
false
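For completeness, index_select already permutes along an arbitrary dimension, and in current releases plain LongTensor indexing gives the same result; a short sketch:
<SCODE>import torch

x = torch.arange(2 * 4 * 3).view(2, 4, 3)   # (batch, channel, feature)
perm = torch.LongTensor([0, 2, 1, 3])       # swap channels 1 and 2

y1 = x.index_select(1, perm)   # works on any dim, differentiable for float inputs
y2 = x[:, perm]                # advanced indexing, equivalent in current versions

print(torch.equal(y1, y2))     # True
<ECODE>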
Tiramisu DenseNet Implementation
null
[ { "contents": "Hi everyone, If not, I can try to push one implementation out soon. Thanks!", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "abursuc" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "baldassarre.fe" } ]
false
Pytorch for Semantic Segmentation
null
[ { "contents": "", "isAccepted": false, "likes": 3, "poster": "ycszen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" } ]
false
nn.ReLU(inplace=True) makes an error
null
[ { "contents": "<SCODE>RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation\n<ECODE> But this error just appears in some places. Why it occurs?", "isAccepted": false, "likes": null, "poster": "ycszen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Ok. Thank you. It works after removing it.", "isAccepted": false, "likes": null, "poster": "ycszen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "osuemer" }, { "contents": "It is the default value. Just don’t override it.", "isAccepted": false, "likes": 2, "poster": "apaszke" } ]
false
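The error in the thread above typically shows up when the in-place ReLU overwrites a tensor that an earlier operation saved for its own backward. A contrived reproduction (not the original poster's network):
<SCODE>import torch
import torch.nn.functional as F

x = torch.randn(5, requires_grad=True)
y = torch.sigmoid(x)          # sigmoid's backward needs its own output
F.relu(y, inplace=True)       # overwrites that saved output in place
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)                  # ... modified by an inplace operation

# The out-of-place version is safe, nothing sigmoid saved gets clobbered:
y = F.relu(torch.sigmoid(x))
y.sum().backward()
<ECODE>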
What is reshape layer in pytorch?
null
[ { "contents": "In torch7 it seems to be nn.View, but what it is in pytorch? What I want is to add a reshpe layer in nn.Sequential. Thanks.", "isAccepted": true, "likes": 4, "poster": "melody-rain" }, { "contents": "", "isAccepted": true, "likes": 6, "poster": "apaszke" }, { "contents": "Hi, good example. Thanks. Thank you.", "isAccepted": true, "likes": null, "poster": "melody-rain" }, { "contents": "", "isAccepted": true, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": true, "likes": 1, "poster": "ypxie" }, { "contents": "<SCODE>import torch.nn as nn\n\nclass Reshape(nn.Module):\n def __init__(self, *args):\n super(Reshape, self).__init__()\n self.shape = args\n\n def forward(self, x):\n return x.view(self.shape)\n<ECODE>", "isAccepted": true, "likes": 11, "poster": "allenye0119" }, { "contents": "Thanks~ but it is still so many codes, a lambda layer like the one used in keras would be very helpful.", "isAccepted": true, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": true, "likes": 4, "poster": "smth" }, { "contents": "", "isAccepted": true, "likes": 9, "poster": "Shisho_Sama" }, { "contents": "I have to ask why reshaping does not count as “trivial”? Current way of work forces me to separate logic of data flow to two separate places - definition of the nn.Sequential, and forward()", "isAccepted": true, "likes": null, "poster": "Noam_Salomonski" }, { "contents": "For high level DL, those layers are more confusing than beneficial…", "isAccepted": true, "likes": null, "poster": "dsantiago" }, { "contents": "", "isAccepted": true, "likes": 3, "poster": "alexp" } ]
true
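A small usage sketch of the Reshape module suggested in the thread, dropped into nn.Sequential; note the -1 that keeps the batch dimension flexible (recent releases also ship nn.Flatten / nn.Unflatten for the common cases):
<SCODE>import torch
import torch.nn as nn

class Reshape(nn.Module):
    """Reshape layer in the spirit of the snippet above."""
    def __init__(self, *shape):
        super(Reshape, self).__init__()
        self.shape = shape

    def forward(self, x):
        return x.view(self.shape)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),
    Reshape(-1, 8 * 28 * 28),      # -1 keeps the batch dimension flexible
    nn.Linear(8 * 28 * 28, 10),
)

out = model(torch.randn(4, 1, 28, 28))
print(out.size())                   # torch.Size([4, 10])
<ECODE>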
Pretrained learning with different input size
null
[ { "contents": "Error Message in pretraining case: <SCODE>Traceback (most recent call last):\n File \"main.py\", line 314, in <module>\n main()\n File \"main.py\", line 68, in main\n model = models.__dict__[args.arch](pretrained=True)\n File \"/home/dylee/.conda/envs/pytorch/lib/python2.7/site-packages/torchvision/models/alexnet.py\", line 57, in alexnet\n model.load_state_dict(model_zoo.load_url(model_urls['alexnet']))\n File \"/home/dylee/.conda/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 315, in load_state_dict\n own_state[name].copy_(param)\nRuntimeError: inconsistent tensor size at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensorCopy.c:51\n<ECODE>", "isAccepted": false, "likes": null, "poster": "neuralix" }, { "contents": "No, it’s supported. You must have some bug in your code.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE> own_state[name].copy_(param)\n<ECODE> TO <SCODE>if name != 'classifier.1.weight' and name != 'classifier.1.bias':\n own_state[name].copy_(param)\n<ECODE> By skipping parameter copying. I hope it’s helpful for someone.", "isAccepted": false, "likes": null, "poster": "neuralix" } ]
false
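Instead of patching load_state_dict in the installed package, the mismatched entries can be filtered out of the pretrained state dict before loading; a hedged sketch (the 10-class head is just an example):
<SCODE>import torchvision.models as models

model = models.alexnet(num_classes=10)                 # new 10-class head
pretrained = models.alexnet(pretrained=True).state_dict()

own = model.state_dict()
# keep only entries whose name and shape both match the new model
filtered = {k: v for k, v in pretrained.items()
            if k in own and v.size() == own[k].size()}
own.update(filtered)
model.load_state_dict(own)

print('skipped:', [k for k in pretrained if k not in filtered])
<ECODE>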
How to load part of pre trained model?
null
[ { "contents": "", "isAccepted": false, "likes": 37, "poster": "albertxavier001" }, { "contents": "I’m afraid not <SCODE> def load_my_state_dict(self, state_dict):\n \n own_state = self.state_dict()\n for name, param in state_dict.items():\n if name not in own_state:\n continue\n if isinstance(param, Parameter):\n # backwards compatibility for serialized parameters\n param = param.data\n own_state[name].copy_(param)\n<ECODE>", "isAccepted": false, "likes": 30, "poster": "chenyuntc" }, { "contents": "You can remove all keys that don’t match your model from the state dict and use it to load the weights afterwards: <SCODE>pretrained_dict = ...\nmodel_dict = model.state_dict()\n\n# 1. filter out unnecessary keys\npretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}\n# 2. overwrite entries in the existing state dict\nmodel_dict.update(pretrained_dict) \n# 3. load the new state dict\nmodel.load_state_dict(pretrained_dict)\n<ECODE>", "isAccepted": false, "likes": 169, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albertxavier001" }, { "contents": "<SCODE>model.load_state_dict(model_dict)<ECODE>", "isAccepted": false, "likes": 17, "poster": "albertxavier001" }, { "contents": "Yes it should. I edited the snippet.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thanks. I am looking for it. It seems awesome!!!", "isAccepted": false, "likes": null, "poster": "cham-3" }, { "contents": "Consider the situation where I would like to restore all weights till the last layer. <SCODE># args has the model name, num classes and other irrelevant stuff\nself._model = models.__dict__[args.arch](pretrained = False, \n num_classes = args.classes, \n aux_logits = False)\n\n\nif self.args.pretrained:\n \n print(\"=> using pre-trained model '{}'\".format(args.arch))\n pretrained_state = model_zoo.load_url(model_names[args.arch])\n model_state = self._model.state_dict()\n\n pretrained_state = { k:v for k,v in pretrained_state.iteritems() if k in model_state and v.size() == model_state[k].size() }\n model_state.update(pretrained_state)\n self._model.load_state_dict(model_state)\n<ECODE> Shouldn’t we also be checking if the sizes match before restoring? It looks like we are comparing only the names.", "isAccepted": false, "likes": 4, "poster": "ImgPrcSng" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "smth" }, { "contents": "model.load_state_dict(model_dict) step 3 should look like this", "isAccepted": false, "likes": 5, "poster": "BrianHuang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenchr" }, { "contents": "", "isAccepted": false, "likes": 16, "poster": "karttikeya_mangalam" }, { "contents": "Assum following situation: <SCODE>pretrained_dict: ['A', 'B', 'C', 'D']\nmodel_dict: ['A', 'B', 'C', 'E']\n<ECODE> <SCODE>pretrained_dict: ['A', 'B', 'C']\nmodel_dict: ['A', 'B', 'C', 'E']\n<ECODE> The complete snippet is therefore as follow: <SCODE>pretrained_dict = ...\nmodel_dict = model.state_dict()\n\n# 1. filter out unnecessary keys\npretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}\n# 2. overwrite entries in the existing state dict\nmodel_dict.update(pretrained_dict) \n# 3. 
load the new state dict\nmodel.load_state_dict(model_dict)\n<ECODE>", "isAccepted": false, "likes": 52, "poster": "zeakey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "romina.ui" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "Is it param = Parameter.data ?", "isAccepted": false, "likes": null, "poster": "sriharsha0806" }, { "contents": "It might be a little late to ask. If I want to skip some layers, like if I train the model with batch normalization, but want to use the trained bn version for that without batch normalization, how can I change the layers’ names? Because otherwise, the name might be different, and it will complain about size mismatching.", "isAccepted": false, "likes": null, "poster": "deJQK" }, { "contents": "This is very useful, thanks", "isAccepted": false, "likes": null, "poster": "kingxueyuf" }, { "contents": "Shouldn’t the # 3 be <SCODE>model.load_state_dict(model_dict)\n<ECODE> instead of <SCODE>model.load_state_dict(pretrained_dict)\n<ECODE> ?", "isAccepted": false, "likes": 1, "poster": "nurlano" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Zichun_Zhang" } ]
false
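In more recent PyTorch releases much of the above can be done with the strict flag, which skips missing and unexpected keys and reports them (shape mismatches still require the manual filtering shown in the thread); a tiny sketch:
<SCODE>import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.Linear(20, 5))
checkpoint = {'0.weight': torch.randn(20, 10), '0.bias': torch.randn(20)}

# strict=False loads what is there and reports the rest
missing, unexpected = model.load_state_dict(checkpoint, strict=False)
print('missing keys:', missing)        # ['1.weight', '1.bias']
print('unexpected keys:', unexpected)  # []
<ECODE>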
[resolved] Cuda Runtime Error(30)
null
[ { "contents": "<SCODE>THCudaCheck FAIL file=torch/csrc/cuda/Module.cpp line=109 error=30 : unknown error\nTraceback (most recent call last):\n File \"trainer.py\", line 13, in <module>\n if torch.cuda.is_available():\n File \"/usr/local/lib/python2.7/dist-packages/torch/cuda/__init__.py\", line 30, in is_available\n return torch._C._cuda_getDeviceCount() > 0\nRuntimeError: cuda runtime error (30) : unknown error at torch/csrc/cuda/Module.cpp:109\n\n<ECODE>", "isAccepted": true, "likes": null, "poster": "ycszen" }, { "contents": "There must be something wrong with your driver. Maybe try rebooting?", "isAccepted": true, "likes": 2, "poster": "apaszke" }, { "contents": "OK. I have found the problem. After I update the linux system, the driver become useless. So I will reinstall the driver. Thank you for your reply.", "isAccepted": true, "likes": null, "poster": "ycszen" }, { "contents": "I get this when I put my laptop to sleep while in the middle of training. When I put it to sleep, my script stops and I get this error: <SCODE>THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THC/generated/../THCReduceAll.cuh line=334 error=4 : unspecified launch failure\nTraceback (most recent call last):\n File \"trytry.py\", line 111, in <module>\n loss = network.loss(prediction, label_batch) + 10*torch.mean(cheat_amount)\n File \"trytry.py\", line 73, in loss\n union = 1e-5 + prediction.sum() + label.sum()\n File \"/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py\", line 437, in sum\n return Sum(dim)(self)\n File \"/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/reduce.py\", line 16, in forward\n return input.new((fn(),))\nRuntimeError: cuda runtime error (4) : unspecified launch failure at /b/wheel/pytorch-src/torch/lib/THC/generated/../THCReduceAll.cuh:334\n<ECODE> And afterward I get this: <SCODE>THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THC/THCGeneral.c line=66 error=30 : unknown error\nTraceback (most recent call last):\n File \"trytry.py\", line 77, in <module>\n network = Net()\n File \"trytry.py\", line 57, in __init__\n self.squeezenet = models.squeezenet1_1(pretrained=True).features.cuda() \n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 147, in cuda\n return self._apply(lambda t: t.cuda(device_id))\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 118, in _apply\n module._apply(fn)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 124, in _apply\n param.data = fn(param.data)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 147, in <lambda>\n return self._apply(lambda t: t.cuda(device_id))\n File \"/usr/local/lib/python3.5/dist-packages/torch/_utils.py\", line 65, in _cuda\n return new_type(self.size()).copy_(self, async)\n File \"/usr/local/lib/python3.5/dist-packages/torch/cuda/__init__.py\", line 272, in __new__\n _lazy_init()\n File \"/usr/local/lib/python3.5/dist-packages/torch/cuda/__init__.py\", line 85, in _lazy_init\n torch._C._cuda_init()\nRuntimeError: cuda runtime error (30) : unknown error at /b/wheel/pytorch-src/torch/lib/THC/THCGeneral.c:66\n<ECODE> A reboot has fixed the problem. This is with CUDA 8.0 and an nvidia 1060.", "isAccepted": true, "likes": 3, "poster": "chrisranderson" }, { "contents": "I also have the same issue after the laptop wakes up. I think this is a bug. 
Tensorflow seems to work fine in such situations.", "isAccepted": true, "likes": 1, "poster": "kindlychung" }, { "contents": "Just had the same failure after wake/sleep for desktop. Pytorch 0.2, ubuntu 16.04", "isAccepted": true, "likes": 6, "poster": "arogozhnikov" }, { "contents": "Same here! GTX1050ti Ubuntu16.04, reboot fixes it, but one short sleep then wake breaks it!", "isAccepted": true, "likes": null, "poster": "hvasbath" }, { "contents": "Same here. ubuntu 16.04 cuda9, pytorch 0.2", "isAccepted": true, "likes": null, "poster": "psavine42" }, { "contents": "Have anyone solved this problem? Thanks.", "isAccepted": true, "likes": null, "poster": "Ke_Bai" }, { "contents": "This also happens on my system Ubuntu 16.04 Nvidia GeForce 940MX PyTorch 0.3.1 running on Python 3.6 Cuda 8.0 CUDNN 7 Any clues? I don’t see why this thread is marked as resolved, if the solution is to restart the laptop every time.", "isAccepted": true, "likes": 1, "poster": "danakianfar" }, { "contents": "Same problem for me too", "isAccepted": true, "likes": null, "poster": "Ste_Millington" }, { "contents": "Run into the same problem", "isAccepted": true, "likes": null, "poster": "Adam_Harrison" }, { "contents": "reboot fixes the problem.", "isAccepted": true, "likes": 1, "poster": "Jimmy_Xiaoke_Shen" }, { "contents": "Same problem some strange stuff after wake up (desktop ubuntu 16.04 cuda 8 1080 gtx)", "isAccepted": true, "likes": null, "poster": "Marat" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "happytaoxiaoli" }, { "contents": "<SCODE>RuntimeError: cuda runtime error (30) : unknown error at \n/opt/conda/conda-bld/pytorch_1524590031827/work/aten/src/THC/THCGeneral.cpp:70\n<ECODE> Ubuntu 18.04, Pytorch 0.4.1, cuda 9.2", "isAccepted": true, "likes": 2, "poster": "waleeka" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "farhat_Ullah" }, { "contents": "<SCODE>sudo rmmod nvidia_uvm\nsudo rmmod nvidia\nsudo modprobe nvidia\nsudo modprobe nvidia_uvm\n<ECODE>", "isAccepted": true, "likes": 4, "poster": "miladiouss" }, { "contents": "Go to NVIDIA Nsight Options and set ‘Enable Crash Detection And Handling = True’. Did the trick for me.", "isAccepted": true, "likes": null, "poster": "Mohamed_Ghadban" }, { "contents": "This always works for me (Win10, Cuda 10.1, Python 3.7.2, PyTorch 1.0.1, NVIDIA GTX 1050 Ti): <SCODE>import torch\ntorch.cuda.current_device()\n<ECODE> but this always fails for me: <SCODE>import torch\ntorch.cuda.is_available()\ntorch.cuda.current_device() # fails here\n<ECODE>", "isAccepted": true, "likes": null, "poster": "AndreiCostinescu" } ]
true
Error when “import torch”
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Jack" }, { "contents": "This part is the problem: <SCODE>RuntimeError: module compiled against API version 0xa but this version of numpy is 0x6\n<ECODE> Your numpy is too old. You need to update it.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Is there a way to upsample an image with a non-integer scale_factor?
null
[ { "contents": "For example, I want upsample my image with scale_factor 1.5.", "isAccepted": false, "likes": null, "poster": "xiaochao_qu" }, { "contents": "Do you need this transform to be differentiable, or just to use it as part of the data loading?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\n\ninp = Variable(torch.randn(10, 3, 24, 24))\nm = nn.UpsamplingBilinear2d(size=(55, 55))\nout = m(inp)\nprint(out.size())\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "I need it to be differentiable, as I am doing a pixel-to-pixel task which requires the output size is different with the input size.", "isAccepted": false, "likes": null, "poster": "xiaochao_qu" }, { "contents": "Thanks. Is this operation differentiable?", "isAccepted": false, "likes": null, "poster": "xiaochao_qu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "xiaochao_qu" }, { "contents": "This appears to give an error saying that the upsampled size is not divisible by the original size when done with UpsampleNearest2d rather than UpsampleBilinear2d", "isAccepted": false, "likes": 1, "poster": "meldefon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vinaykumar2491" } ]
false
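In later releases the same operation is exposed as nn.Upsample / F.interpolate, which take a float scale_factor directly and stay differentiable; sketch:
<SCODE>import torch
import torch.nn.functional as F

x = torch.randn(10, 3, 24, 24, requires_grad=True)

out = F.interpolate(x, scale_factor=1.5, mode='bilinear', align_corners=False)
print(out.size())        # torch.Size([10, 3, 36, 36])

out.sum().backward()     # fully differentiable
print(x.grad.size())     # torch.Size([10, 3, 24, 24])
<ECODE>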
Help for DQN (implementation of the paper)
reinforcement-learning
[ { "contents": "Sorry, I know it’s really bad to ask help for checking if the implementation is correct. But I am rather new in deep RL, I don’t know if there is any debugging trick or method systematically for checking if my implementation is right. Unlike most supervised learning algorithm, by inspecting loss, we might have some idea about how to tune hyper-parameters (as in cs231n). But we can only see mean reward in RL(as far as I know) and I don’t know how to tune based on that. I tried inspecting the gradients and data before and after the part which I suspected there may contains bugs.", "isAccepted": false, "likes": 1, "poster": "transedward" }, { "contents": "I got similar results with deepmind paper already.", "isAccepted": false, "likes": null, "poster": "transedward" } ]
false
How to assign gradients to model parameters manually?
reinforcement-learning
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "onlytailei" }, { "contents": "to answer your second question, you can create a mask tensor and index elements via this mask tensor: example: <SCODE>import torch\n\nx = torch.randn(2, 3, 2)\n\nmask = x.gt(0.5)\nprint(mask)\n\n# mask is a torch.ByteTensor of same shape and size as x\n\nprint(x[mask])\n<ECODE> Output: <SCODE>\n(0 ,.,.) =\n 0 0\n 0 0\n 0 1\n\n(1 ,.,.) =\n 0 1\n 1 0\n 1 0\n[torch.ByteTensor of size 2x3x2]\n\n\n 1.3140\n 0.8220\n 1.5169\n 0.9264\n[torch.FloatTensor of size 4]\n\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Then I think this method is useless.", "isAccepted": false, "likes": null, "poster": "onlytailei" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "But gather always returns a tensor with same dimension as the input right?", "isAccepted": false, "likes": null, "poster": "onlytailei" }, { "contents": "Yes, that’s the limitation.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
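The usual pattern for hand-computed gradients is to write them into each parameter's .grad before calling optimizer.step(); a sketch in which random tensors stand in for whatever your estimator produces:
<SCODE>import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)
opt = optim.SGD(model.parameters(), lr=0.1)

opt.zero_grad()
for p in model.parameters():
    # gradients computed elsewhere; random placeholders here
    p.grad = torch.randn_like(p)
opt.step()
<ECODE>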
Platform not supported - Python 2.7
null
[ { "contents": "Trying to install x86_64 version of Python 2.7 pytorch as advertised: Output of uname -a : For Python 3, similar procedure works without problem.", "isAccepted": false, "likes": null, "poster": "schroeder-dewitt" }, { "contents": "ok, simple mistake - forgot that now pip will be pip3 by default. installing with pip2.7 works. You may want to update install instructions to use pip3 / pip2.7 explicitely!", "isAccepted": false, "likes": null, "poster": "schroeder-dewitt" } ]
false
Tracking down a suspected memory leak
null
[ { "contents": "Hello, thank you for pytorch! Would you have a hint how to approach ever increasing memory use? Thank you! Thomas P.S.: I have adapted the script to set a few defaults differently and use a different dataset, but did not do much with the actual calculation (and I’d be happy to provide my changes if they are of interest). I am using a git checkout of github pytorch master from today. <SCODE> File \"/usr/local/lib/python3.5/dist-packages/torch/backends/cudnn/__init__.py\", line 387\n return array_type(*itr)\n File \"/usr/local/lib/python3.5/dist-packages/torch/backends/cudnn/__init__.py\", line 160\n int_array(tensor.size()), int_array(tensor.stride())))\n File \"/usr/local/lib/python3.5/dist-packages/torch/backends/cudnn/__init__.py\", line 398\n descriptor.set(tensor)\n File \"/usr/local/lib/python3.5/dist-packages/torch/backends/cudnn/rnn.py\", line 242\n fn.cy_desc = cudnn.descriptor(cx) if cx is not None else None\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py\", line 269\n cudnn.rnn.forward(self, input, hx, weight, output, hy)\n File \"/usr/local/lib/python3.5/dist-packages/torch/autograd/function.py\", line 224\n result = self.forward_extended(*nested_tensors)\n File \"/usr/local/lib/python3.5/dist-packages/torch/autograd/function.py\", line 202\n flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py\", line 327\n return func(input, *fargs, **fkwargs)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py\", line 91\n output, hidden = func(input, self.all_weights, hx)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 202\n result = self.forward(*input, **kwargs)\n File \"deepspeech.pytorch/model.py\", line 48\n x, _ = self.rnn(x)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 202\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py\", line 64\n input = module(input)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 202\n result = self.forward(*input, **kwargs)\n File \"deepspeech.pytorch/model.py\", line 94\n x = self.rnns(x)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 202\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py\", line 59\n return self.module(*inputs[0], **kwargs[0])\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 202\n result = self.forward(*input, **kwargs)\n File \"train_de.py\", line 139\n out = model(inputs)<ECODE>", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "It seems the memory-consumption increases if bidirectional is passed to the nn.LSTM instantiation. I have not found out why, though.", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "I’ve seen this too (memory-consumption increases if bidirectional is passed) - but I can’t remember (no pun intended) how I fixed it? There is a way to get it to work though, as I’ve been using bidirectional’s for the last few weeks!", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Thanks. If you have gotten nn.LSTM it to work, I can stop trying to stare down nn._functions.rnn . 
Should you remember what you did to make it later, I’d appreciate a hint.", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "I hope this helps?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "Oh, here is my torch verison <SCODE>>>> import torch\n>>> torch.__version__\n'0.1.10+ac9245a'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "XingxingZhang" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "Ugh, can you please get us a smallest possible snippet that still causes the memory to blow up? That’s the only way we can find and fix these things", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I have added “gc.collect()” in my code and let it run for one or two days to see if I still have the problem.", "isAccepted": false, "likes": null, "poster": "XingxingZhang" }, { "contents": "If anyone is concerned about bi-directional LSTM and memory leak issues in pytorch - maybe a good place to start coding a new project is from the already working tutorial?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Ah, thanks. I was going to post a movie sentiment net, but that has the drawback of having the embedding layer. <SCODE># helper function to get rss size, see stat(5) under statm. This is in pages (4k on my linux)\ndef memory_usage():\n return int(open('/proc/self/statm').read().split()[1])\n<ECODE> I saw similar effects with the sentiment analysis one, but I’m not sure I’ll keep saying that unidirectional LSTM always work…", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "So here is a small thing hitting 100 MB extra memory consumption after 12-18 “epochs” with bidirectional LSTM and not without. I don’t think it is the contiguous call + summation as it also happens when I double the input dimension of the 2nd-4th LSTM layer instead.", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "unfortunately, “gc.collect()” didn’t fix the leak in my program. It takes less RAM in the 3rd epoch, but it is still around 45G. I will look into it again.", "isAccepted": false, "likes": null, "poster": "XingxingZhang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "donglixp" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "hope this would help", "isAccepted": false, "likes": null, "poster": "donglixp" } ]
false
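Besides watching the process RSS as in the helper above, it often helps to count the tensors Python can still reach between iterations; a quick CPython-only sketch (it will miss storage held only on the C++ side):
<SCODE>import gc
import torch

def live_tensors():
    """Count all tensors the Python GC can still reach, and their total size."""
    count, numel = 0, 0
    for obj in gc.get_objects():
        if torch.is_tensor(obj):
            count += 1
            numel += obj.numel()
    return count, numel

# call this once per training iteration; a steadily growing count
# points at the objects that are being kept alive
print(live_tensors())
<ECODE>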
Value changed after loading a saved tensor
null
[ { "contents": "<SCODE>A = torch.from_numpy(A_array)\ncheckEmpty(A) # passed\n\ntorch.save(A, 'A_tensor')\nA = torch.load('A_tensor')\ncheckEmpty(A) # failed\n<ECODE> Then to find the first ZERO map: <SCODE>for i in range(A.size()[0]):\n\t\tfor j in range(A.size()[1]):\n\t\t\tif torch.max(A[i][j]) == 0.0:\n\t\t\t\tprint(i, j)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thank you. But how to check the version of currently used Pytorch?", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "What OS are you on? I just tried this small snippet on Linux (CentOS7) and on OSX: <SCODE>import torch\n\na = torch.ones(800*3*480*640)\nprint(a.eq(0).sum())\ntorch.save(a, 'a.pth')\nb = torch.load('a.pth')\n\nprint(b.eq(0).sum())\n<ECODE> On Linux it works fine, on OSX i get an error, which i am investigating: <SCODE>0\nTraceback (most recent call last):\n File \"a.py\", line 5, in <module>\n torch.save(a, 'a.pth')\n File \"/Users/soumith/code/pytorch/torch/serialization.py\", line 120, in save\n return _save(obj, f, pickle_module, pickle_protocol)\n File \"/Users/soumith/code/pytorch/torch/serialization.py\", line 192, in _save\n serialized_storages[key]._write_file(f)\nRuntimeError: Unknown error: -1\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thank you, I’m working on Ubuntu 14.04.1", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "I tried it on an ubuntu 14.04 as well, but couldn’t reproduce the issue. <SCODE>lsb_release -a\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 14.04.2 LTS\nRelease: 14.04\nCodename: trusty\n<ECODE> <SCODE>$ locale\nLANG=en_US.UTF-8\nLANGUAGE=\nLC_CTYPE=\"en_US.UTF-8\"\nLC_NUMERIC=\"en_US.UTF-8\"\nLC_TIME=\"en_US.UTF-8\"\nLC_COLLATE=\"en_US.UTF-8\"\nLC_MONETARY=\"en_US.UTF-8\"\nLC_MESSAGES=\"en_US.UTF-8\"\nLC_PAPER=\"en_US.UTF-8\"\nLC_NAME=\"en_US.UTF-8\"\nLC_ADDRESS=\"en_US.UTF-8\"\nLC_TELEPHONE=\"en_US.UTF-8\"\nLC_MEASUREMENT=\"en_US.UTF-8\"\nLC_IDENTIFICATION=\"en_US.UTF-8\"\nLC_ALL=\n<ECODE> I’m not sure locale matters, i am trying to eliminate variables. 
<SCODE>$ uname -a\nLinux fatbox 3.16.0-37-generic #51~14.04.1-Ubuntu SMP Wed May 6 15:23:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux\n<ECODE> <SCODE>$ df -h\nFilesystem Size Used Avail Use% Mounted on\n**/dev/sda2 355G 302G 35G 90% /**\nnone 4.0K 0 4.0K 0% /sys/fs/cgroup\nudev 5.9G 4.0K 5.9G 1% /dev\ntmpfs 1.2G 1.7M 1.2G 1% /run\nnone 5.0M 0 5.0M 0% /run/lock\nnone 5.9G 124M 5.8G 3% /run/shm\nnone 100M 152K 100M 1% /run/user\n/dev/sda4 96M 29M 68M 30% /boot/efi\n/dev/sdb2 2.7T 2.0T 609G 77% /media/hdd2\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "<SCODE>$ lab_release -a\nNo LSB modules are available.\nDistributor ID:\tUbuntu\nDescription:\tUbuntu 14.04.4 LTS\nRelease:\t14.04\nCodename:\ttrusty\n<ECODE> <SCODE>$ locale\nLANG=en_CA.UTF-8\nLANGUAGE=en_CA:en\nLC_CTYPE=\"en_CA.UTF-8\"\nLC_NUMERIC=\"en_CA.UTF-8\"\nLC_TIME=\"en_CA.UTF-8\"\nLC_COLLATE=\"en_CA.UTF-8\"\nLC_MONETARY=\"en_CA.UTF-8\"\nLC_MESSAGES=\"en_CA.UTF-8\"\nLC_PAPER=\"en_CA.UTF-8\"\nLC_NAME=\"en_CA.UTF-8\"\nLC_ADDRESS=\"en_CA.UTF-8\"\nLC_TELEPHONE=\"en_CA.UTF-8\"\nLC_MEASUREMENT=\"en_CA.UTF-8\"\nLC_IDENTIFICATION=\"en_CA.UTF-8\"\nLC_ALL=\n<ECODE> <SCODE>$ uname -a\nLinux sengled-gpu-1 4.2.0-35-generic #40~14.04.1-Ubuntu SMP Fri Mar 18 16:37:35 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\n<ECODE> <SCODE>$ df -h\nFilesystem Size Used Avail Use% Mounted on\nudev 32G 4.0K 32G 1% /dev\ntmpfs 6.3G 1.9M 6.3G 1% /run\n/dev/sda2 854G 578G 233G 72% /\nnone 4.0K 0 4.0K 0% /sys/fs/cgroup\nnone 5.0M 0 5.0M 0% /run/lock\nnone 32G 76K 32G 1% /run/shm\nnone 100M 56K 100M 1% /run/user\n/dev/sda1 511M 3.4M 508M 1% /boot/efi\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "But when I try to install the latest version, this error happens, <SCODE>SSLError: [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure\n<ECODE> If you are experienced with it, can you tell me how to solve that? If not, I’ll figure out it somewhere else:smiley:", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "Hello, do and try installing the package again.", "isAccepted": false, "likes": null, "poster": "apolis" }, { "contents": "Thanks! But after installing these, the problem can’t be solved yet… Then I refreshed the pytorch website to install again, it works. Magical…", "isAccepted": false, "likes": null, "poster": "Xiaoyu_Liu" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "smth" } ]
false
Cannot download the Linux pip CUDA 8.0 compiled pytorch
null
[ { "contents": "I use pip to install however run into this error. Storing debug log for failure in /home/gaop/.pip/pip.log", "isAccepted": false, "likes": null, "poster": "Gao_Peng" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Gao_Peng" }, { "contents": "It must be some kind of a network error at your side (maybe you don’t have some trusted SSL certificates installed)?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Isemel answer : Worked for me", "isAccepted": false, "likes": null, "poster": "Mathieu-Seurin" } ]
false
Combine multiple Variables into one
null
[ { "contents": "x is a Variable which has 3 dims. <SCODE>x.size()[0] is sequence length\nx.size()[1] is batch size.\n<ECODE> I did something below(for LSTM’s input): <SCODE>temp = []\nfor i in xrange(len(x)):\n temp.append(nn.Linear(512, 256)(x[i]))\n<ECODE> As you can see here, I got many Variables stored in temp. But I what I want is to combine these new Variables into one Variable. How can I do it? Many thanks.", "isAccepted": false, "likes": 1, "poster": "melody-rain" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Lyken17" }, { "contents": "cat’s input should be tensors, but temp’s elements are Variables.", "isAccepted": false, "likes": null, "poster": "melody-rain" }, { "contents": "torch.cat also works with Variables.", "isAccepted": false, "likes": 1, "poster": "jekbradbury" } ]
false
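Since every element of temp has shape (batch, 256), torch.stack (or cat after unsqueeze) puts the time dimension back; a sketch with the sizes from the question — note the projection layer is created once, whereas the snippet in the question builds a fresh nn.Linear on every step:
<SCODE>import torch
import torch.nn as nn

seq_len, batch = 7, 5
x = torch.randn(seq_len, batch, 512)
proj = nn.Linear(512, 256)                    # one layer, reused at every step

temp = [proj(x[i]) for i in range(seq_len)]   # list of (batch, 256)
combined = torch.stack(temp, dim=0)           # (seq_len, batch, 256)
print(combined.size())

# the same thing with cat after adding a time dim to each element:
combined2 = torch.cat([t.unsqueeze(0) for t in temp], dim=0)
print(torch.equal(combined, combined2))       # True
<ECODE>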
How can I print the shape of a tensor inside the forward function?
null
[ { "contents": "I have an attention decoder whose forward function is as follows. <SCODE>def forward(self, input, hidden, encoder_outputs):\n embedded = self.embedding(input).view(1, 1, -1)\n embedded = self.drop(embedded)\n attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1)))\n<ECODE> Any suggestion?", "isAccepted": false, "likes": 2, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Nabarun_Goswami" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Mithun_Paul" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kingxueyuf" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Use print(embedded) to see the shape, or embedded.eval() If you want to see the content, embedded.weight will show you the tensor and if it requires grad.", "isAccepted": false, "likes": null, "poster": "eduamf" } ]
false
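A minimal sketch of the suggestion above — printing sizes inside forward while debugging (the module here is a placeholder, not the poster's attention decoder):
<SCODE>import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super(Tiny, self).__init__()
        self.embedding = nn.Embedding(100, 16)

    def forward(self, input):
        embedded = self.embedding(input).view(1, 1, -1)
        print('embedded:', embedded.size())    # shape check while debugging
        return embedded

out = Tiny()(torch.LongTensor([3]))
print(out.size())                              # torch.Size([1, 1, 16])
<ECODE>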