Columns: title (string, lengths 15-126), category (string, 3 classes), posts (list), answered (bool, 2 classes)
Precision doesn’t improve when training on custom dataset
null
[ { "contents": "<SCODE>> creating model 'alexnet'\nEpoch: [0][0/29]\tTime 11.987 (11.987)\tData 10.121 (10.121)\tLoss 6.9067 (6.9067)\tPrec@1 0.391 (0.391)\tPrec@5 0.781 (0.781)\nEpoch: [0][10/29]\tTime 0.336 (2.764)\tData 0.266 (2.488)\tLoss 6.8902 (6.9003)\tPrec@1 0.000 (0.178)\tPrec@5 1.172 (1.598)\nEpoch: [0][20/29]\tTime 7.898 (2.771)\tData 7.827 (2.578)\tLoss 6.8422 (6.8640)\tPrec@1 0.000 (0.186)\tPrec@5 1.953 (1.581)\nTest: [0/10]\tTime 4.700 (4.700)\tLoss 6.7994 (6.7994)\tPrec@1 0.000 (0.000)\tPrec@5 3.125 (3.125)\n * Prec@1 0.333 Prec@5 1.625\nEpoch: [1][0/29]\tTime 2.927 (2.927)\tData 2.847 (2.847)\tLoss 6.8089 (6.8089)\tPrec@1 0.000 (0.000)\tPrec@5 2.734 (2.734)\nEpoch: [1][10/29]\tTime 0.192 (0.822)\tData 0.025 (0.681)\tLoss 6.7899 (6.7945)\tPrec@1 0.391 (0.320)\tPrec@5 1.953 (1.776)\nEpoch: [1][20/29]\tTime 2.253 (0.824)\tData 2.183 (0.689)\tLoss 6.4336 (6.7144)\tPrec@1 0.391 (0.316)\tPrec@5 3.516 (1.730)\nTest: [0/10]\tTime 3.146 (3.146)\tLoss 6.0892 (6.0892)\tPrec@1 0.000 (0.000)\tPrec@5 0.000 (0.000)\n * Prec@1 0.333 Prec@5 1.667\nEpoch: [2][0/29]\tTime 3.009 (3.009)\tData 2.920 (2.920)\tLoss 6.0913 (6.0913)\tPrec@1 0.391 (0.391)\tPrec@5 1.953 (1.953)\nEpoch: [2][10/29]\tTime 0.189 (0.836)\tData 0.000 (0.681)\tLoss 6.0209 (6.0952)\tPrec@1 0.391 (0.320)\tPrec@5 0.391 (1.562)\nEpoch: [2][20/29]\tTime 2.251 (0.822)\tData 2.181 (0.680)\tLoss 5.9183 (6.0205)\tPrec@1 0.000 (0.223)\tPrec@5 0.781 (1.302)\nTest: [0/10]\tTime 3.046 (3.046)\tLoss 5.9031 (5.9031)\tPrec@1 0.000 (0.000)\tPrec@5 0.000 (0.000)\n * Prec@1 0.333 Prec@5 1.667\nEpoch: [46][0/29]\tTime 2.996 (2.996)\tData 2.915 (2.915)\tLoss 5.7088 (5.7088)\tPrec@1 0.000 (0.000)\tPrec@5 0.781 (0.781)\nEpoch: [46][10/29]\tTime 0.188 (0.844)\tData 0.000 (0.696)\tLoss 5.7168 (5.7085)\tPrec@1 0.000 (0.178)\tPrec@5 1.562 (1.705)\nEpoch: [46][20/29]\tTime 2.090 (0.828)\tData 2.011 (0.685)\tLoss 5.7267 (5.7122)\tPrec@1 0.000 (0.205)\tPrec@5 0.781 (1.562)\nTest: [0/10]\tTime 3.080 (3.080)\tLoss 5.7117 (5.7117)\tPrec@1 0.000 (0.000)\tPrec@5 0.000 (0.000)\n * Prec@1 0.333 Prec@5 1.667\nEpoch: [47][0/29]\tTime 2.943 (2.943)\tData 2.852 (2.852)\tLoss 5.7018 (5.7018)\tPrec@1 0.781 (0.781)\tPrec@5 3.125 (3.125)\nEpoch: [47][10/29]\tTime 0.196 (0.852)\tData 0.000 (0.701)\tLoss 5.7113 (5.7091)\tPrec@1 0.781 (0.355)\tPrec@5 1.953 (1.953)\nEpoch: [47][20/29]\tTime 2.221 (0.838)\tData 2.136 (0.695)\tLoss 5.7153 (5.7120)\tPrec@1 0.000 (0.260)\tPrec@5 1.562 (1.656)\nTest: [0/10]\tTime 3.071 (3.071)\tLoss 5.7107 (5.7107)\tPrec@1 0.000 (0.000)\tPrec@5 0.000 (0.000)\n * Prec@1 0.333 Prec@5 1.667\nEpoch: [48][0/29]\tTime 3.054 (3.054)\tData 2.978 (2.978)\tLoss 5.7045 (5.7045)\tPrec@1 0.391 (0.391)\tPrec@5 2.344 (2.344)\nEpoch: [48][10/29]\tTime 0.182 (0.837)\tData 0.000 (0.689)\tLoss 5.7104 (5.7084)\tPrec@1 0.391 (0.249)\tPrec@5 2.734 (1.847)\nEpoch: [48][20/29]\tTime 1.824 (0.819)\tData 1.753 (0.700)\tLoss 5.7171 (5.7120)\tPrec@1 0.391 (0.242)\tPrec@5 0.781 (1.488)\nTest: [0/10]\tTime 3.084 (3.084)\tLoss 5.7100 (5.7100)\tPrec@1 0.000 (0.000)\tPrec@5 3.125 (3.125)\n * Prec@1 0.333 Prec@5 1.667\nEpoch: [49][0/29]\tTime 3.213 (3.213)\tData 3.137 (3.137)\tLoss 5.7120 (5.7120)\tPrec@1 0.781 (0.781)\tPrec@5 1.172 (1.172)\nEpoch: [49][10/29]\tTime 0.182 (0.869)\tData 0.000 (0.713)\tLoss 5.7154 (5.7094)\tPrec@1 0.000 (0.426)\tPrec@5 0.781 (1.456)\nEpoch: [49][20/29]\tTime 2.013 (0.829)\tData 1.931 (0.696)\tLoss 5.7096 (5.7113)\tPrec@1 0.781 (0.316)\tPrec@5 2.734 (1.376)\nTest: [0/10]\tTime 3.072 (3.072)\tLoss 5.7060 (5.7060)\tPrec@1 0.000 (0.000)\tPrec@5 
0.000 (0.000)\n * Prec@1 0.333 Prec@5 1.667\n<ECODE>", "isAccepted": false, "likes": null, "poster": "panovr" }, { "contents": "Lack of convergence can be caused by a lot of things. The optimization is not guaranteed to succeed and actually it’s a bit of a miracle that NNs even work. We can’t help you a lot on that, but I think there are some problems with your dataset or transforms that are applied to it.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>Epoch: [89][0/57]\tTime 1.645 (1.645)\tData 1.525 (1.525)\tLoss 0.3686 (0.3686)\tPrec@1 92.969 (92.969)\tPrec@5 97.656 (97.656)\nEpoch: [89][10/57]\tTime 0.364 (0.528)\tData 0.000 (0.244)\tLoss 0.5915 (0.5153)\tPrec@1 87.500 (87.713)\tPrec@5 93.750 (95.455)\nEpoch: [89][20/57]\tTime 0.510 (0.471)\tData 0.379 (0.190)\tLoss 0.6496 (0.5262)\tPrec@1 85.938 (87.240)\tPrec@5 92.969 (95.126)\nEpoch: [89][30/57]\tTime 0.355 (0.453)\tData 0.000 (0.171)\tLoss 0.4592 (0.5180)\tPrec@1 90.625 (87.903)\tPrec@5 95.312 (95.186)\nEpoch: [89][40/57]\tTime 0.536 (0.446)\tData 0.413 (0.165)\tLoss 0.3770 (0.5072)\tPrec@1 92.188 (88.529)\tPrec@5 97.656 (95.332)\nEpoch: [89][50/57]\tTime 0.369 (0.440)\tData 0.000 (0.166)\tLoss 0.4453 (0.5025)\tPrec@1 89.844 (88.664)\tPrec@5 95.312 (95.374)\nTest: [0/19]\tTime 1.668 (1.668)\tLoss 0.8600 (0.8600)\tPrec@1 81.250 (81.250)\tPrec@5 94.531 (94.531)\nTest: [10/19]\tTime 0.104 (0.463)\tLoss 1.5666 (1.5452)\tPrec@1 67.188 (67.827)\tPrec@5 84.375 (84.659)\n * Prec@1 67.375 Prec@5 84.208\n<ECODE> <SCODE>Epoch: [89][0/29]\tTime 3.110 (3.110)\tData 3.040 (3.040)\tLoss 4.7523 (4.7523)\tPrec@1 5.469 (5.469)\tPrec@5 19.922 (19.922)\nEpoch: [89][10/29]\tTime 0.189 (0.831)\tData 0.070 (0.700)\tLoss 4.7577 (4.8041)\tPrec@1 6.250 (5.611)\tPrec@5 19.141 (17.685)\nEpoch: [89][20/29]\tTime 2.163 (0.831)\tData 2.079 (0.705)\tLoss 4.8331 (4.8019)\tPrec@1 4.688 (5.673)\tPrec@5 19.531 (17.839)\nTest: [0/10]\tTime 3.048 (3.048)\tLoss 4.6815 (4.6815)\tPrec@1 8.203 (8.203)\tPrec@5 23.047 (23.047)\n * Prec@1 7.458 Prec@5 22.833\n<ECODE> You can see with alexnet, after 90 epoch, the precision@1 is 7.458; with resnet18, after 90 epoch, the precision@ is 67.375. So it is a bit strange to see that the precision of alexnet grows so slowly.", "isAccepted": false, "likes": null, "poster": "panovr" }, { "contents": "It’s probably because the AlexNet doesn’t use BatchNorm and it stabilizes the training. I’d look into data normalization, you might be doing something wrong there.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "haoyu" } ]
false
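The thread above ends by pointing at data normalization. A minimal sketch of a typical normalization pipeline for a custom ImageFolder dataset follows, assuming a current torchvision; the mean/std values are the standard ImageNet statistics and 'path/to/train' is a placeholder, neither taken from the thread:

<SCODE>
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder

# Per-channel normalization to roughly zero mean / unit variance;
# the statistics are the usual ImageNet values.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = ImageFolder('path/to/train', transform=train_transform)
<ECODE>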
Implementing custom non-zero conv padding
null
[ { "contents": "Is there a way to implement your own padding algorithm? For example use reflect padding instead of the seemingly solely available one: zero padding.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "That would be a welcome addition. It could certainly be done in a hack-y way where you cut up variables, but that doesn’t seem like it would be very efficient.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "Numpy has apply_along_axis but it doesn’t look like pytorch has that unless I’m missing something. I’m looking for a way to apply a custom (maybe learned too) convolution onto each feature map individually. So if you have a (64,32,128,128) tensor you want a say, (5,5) filter to be applied to all 64*32 (128,128) images. Maybe I’m not thinking in arrays right, what if I reshape it to (64*32,1,128,128) and do convolutions the usual way? Then reshape back? Seems like it should work, would reshaping it naively break things though?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "Hmmm yes I think that reshaping them like that would work.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Does it return an enlarged tensor based on the requested padding?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
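Both questions in this thread, reflection padding and applying one filter to every feature map, can be written compactly today; a sketch using the thread's (64, 32, 128, 128) example, noting that F.pad(..., mode='reflect') and nn.ReflectionPad2d were added after this discussion:

<SCODE>
import torch
import torch.nn.functional as F

x = torch.randn(64, 32, 128, 128)

# Reflection padding instead of zero padding (also available as nn.ReflectionPad2d).
x_pad = F.pad(x, (2, 2, 2, 2), mode='reflect')        # -> (64, 32, 132, 132)

# Apply a single 5x5 filter to all 64*32 feature maps independently by
# folding the channel dimension into the batch dimension, as discussed above.
w = torch.randn(1, 1, 5, 5)
y = F.conv2d(x_pad.view(64 * 32, 1, 132, 132), w)     # -> (64*32, 1, 128, 128)
y = y.view(64, 32, 128, 128)
<ECODE>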
Convert numpy to PyTorch Dataset
null
[ { "contents": "Hi All, I have a numpy array of modified MNIST, which has the dimensions of a working dataset (Nx28x28), and labels(N,) I want to convert this to a PyTorch Dataset, so I did: <SCODE>train = torch.utils.data.TensorDataset(img, labels.view(-1))\ntrain_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "I tried this too, <SCODE>train = torch.utils.data.TensorDataset(img, labels)\ntrain_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)\n<ECODE> No dice", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "can you show a result below? <SCODE>print(img.size(), label.size())\n<ECODE> I think increasing a dimension will work, <SCODE>train = torch.utils.data.TensorDataset(img, labels.unsqueeze(1))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "<SCODE>torch.Size([18000, 28, 28]) torch.Size([18000])\n<ECODE> unsqueeze(1) gave the same error <SCODE>---------------------------------------------------------------------------\nAssertionError Traceback (most recent call last)\n<ipython-input-18-f3ca3f765752> in <module>()\n 1 for epoch in range(1, 101):\n----> 2 train(epoch)\n 3 test(epoch, valid_loader)\n\n<ipython-input-17-f91e8ba0f29c> in train(epoch)\n 6 data, target = Variable(data), Variable(target)\n 7 optimizer.zero_grad()\n----> 8 output = model(data)\n 9 loss = F.nll_loss(output, target)\n 10 loss.backward()\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n<ipython-input-15-7f886ceeb28f> in forward(self, x)\n 10 \n 11 def forward(self, x):\n---> 12 x = F.relu(F.max_pool2d(self.conv1(x), 2))\n 13 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), ))\n 14 # x = F.relu(F.max_pool2d(self.conv3(x), 2))\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.pyc in forward(self, input)\n 233 def forward(self, input):\n 234 return F.conv2d(input, self.weight, self.bias, self.stride,\n--> 235 self.padding, self.dilation, self.groups)\n 236 \n 237 \n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in conv2d(input, weight, bias, stride, padding, dilation, groups)\n 35 f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,\n 36 _pair(0), groups)\n---> 37 return f(input, weight, bias) if bias is not None else f(input, weight)\n 38 \n 39 \n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in forward(self, input, weight, bias)\n 30 self.save_for_backward(input, weight, bias)\n 31 if k == 3:\n---> 32 input, weight = _view4d(input, weight)\n 33 output = self._update_output(input, weight, bias)\n 34 if k == 3:\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in _view4d(*tensors)\n 171 output = []\n 172 for t in tensors:\n--> 173 assert t.dim() == 3\n 174 size = list(t.size())\n 175 size.insert(2, 
1)\n\nAssertionError: \n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "I hope below works <SCODE>img.unsqueeze(1)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "Doesn’t work, gives: <SCODE>---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n<ipython-input-24-f3ca3f765752> in <module>()\n 1 for epoch in range(1, 101):\n----> 2 train(epoch)\n 3 test(epoch, valid_loader)\n\n<ipython-input-23-f91e8ba0f29c> in train(epoch)\n 6 data, target = Variable(data), Variable(target)\n 7 optimizer.zero_grad()\n----> 8 output = model(data)\n 9 loss = F.nll_loss(output, target)\n 10 loss.backward()\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n<ipython-input-21-7f886ceeb28f> in forward(self, x)\n 10 \n 11 def forward(self, x):\n---> 12 x = F.relu(F.max_pool2d(self.conv1(x), 2))\n 13 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), ))\n 14 # x = F.relu(F.max_pool2d(self.conv3(x), 2))\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.pyc in forward(self, input)\n 233 def forward(self, input):\n 234 return F.conv2d(input, self.weight, self.bias, self.stride,\n--> 235 self.padding, self.dilation, self.groups)\n 236 \n 237 \n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in conv2d(input, weight, bias, stride, padding, dilation, groups)\n 35 f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,\n 36 _pair(0), groups)\n---> 37 return f(input, weight, bias) if bias is not None else f(input, weight)\n 38 \n 39 \n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in forward(self, input, weight, bias)\n 31 if k == 3:\n 32 input, weight = _view4d(input, weight)\n---> 33 output = self._update_output(input, weight, bias)\n 34 if k == 3:\n 35 output, = _view3d(output)\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in _update_output(self, input, weight, bias)\n 86 \n 87 self._bufs = [[] for g in range(self.groups)]\n---> 88 return self._thnn('update_output', input, weight, bias)\n 89 \n 90 def _grad_input(self, input, weight, grad_output):\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in _thnn(self, fn_name, input, weight, *args)\n 145 impl = _thnn_convs[self.thnn_class_name(input)]\n 146 if self.groups == 1:\n--> 147 return impl[fn_name](self, self._bufs[0], input, weight, *args)\n 148 else:\n 149 res = []\n\n/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in call_update_output(self, bufs, input, weight, bias)\n 223 args = parse_arguments(self, fn.arguments[5:], bufs, kernel_size)\n 224 getattr(backend, fn.name)(backend.library_state, input, output, weight,\n--> 225 bias, *args)\n 226 return output\n 227 return call_update_output\n\nTypeError: DoubleSpatialConvolutionMM_updateOutput received an invalid 
combination of arguments - got (int, torch.DoubleTensor, torch.DoubleTensor, torch.FloatTensor, torch.FloatTensor, torch.DoubleTensor, torch.DoubleTensor, long, long, int, int, int, int), but expected (int state, torch.DoubleTensor input, torch.DoubleTensor output, torch.DoubleTensor weight, [torch.DoubleTensor bias or None], torch.DoubleTensor finput, torch.DoubleTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "image data numpy type is float64, just change float32", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "float tensors and double tensors cannot be mixed in one function call.", "isAccepted": false, "likes": null, "poster": "shicai" } ]
false
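Pulling the fixes from this thread together (add a channel dimension with unsqueeze(1), cast the images to float32, keep integer class targets), a minimal sketch with random stand-in data in place of the poster's modified-MNIST array:

<SCODE>
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

img_np = np.random.rand(18000, 28, 28)             # stand-in for the (N, 28, 28) array
labels_np = np.random.randint(0, 10, size=18000)   # stand-in for the (N,) labels

img = torch.from_numpy(img_np).float().unsqueeze(1)   # float32, shape (N, 1, 28, 28)
labels = torch.from_numpy(labels_np).long()           # int64 class indices

train = TensorDataset(img, labels)
train_loader = DataLoader(train, batch_size=64, shuffle=True)
<ECODE>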
BLAS performance on macOS vs Linux vs Lua Torch
null
[ { "contents": "<SCODE>Allocation: 5.921\nTorch Blas: 7.277\nNumpy Blas: 7.841\nTorch cuBlas: 0.205\n<ECODE> <SCODE>Allocation: 4.030\nTorch Blas: 21.112\nNumpy Blas: 21.82\nTorch cuBlas: 0.121\n<ECODE> <SCODE>Allocation: 4.557\nTorch Blas: 19.646\nNumpy Blas: 20.155\nTorch cuBlas: 0.155\n<ECODE> <SCODE>openblas_lapack_info:\n NOT AVAILABLE\nlapack_opt_info:\n define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]\n include_dirs = ['/opt/anaconda3/include']\n library_dirs = ['/opt/anaconda3/lib']\n libraries = ['mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']\nblas_mkl_info:\n ...\nblas_opt_info:\n ...\nlapack_mkl_info:\n ...\n<ECODE> The code I am using: <SCODE>import time\nimport torch\nimport numpy\ntorch.set_default_tensor_type(\"torch.FloatTensor\")\n\nw = 5000\nh = 40000\nis_cuda = torch.cuda.is_available()\nstart = time.time()\n\na = torch.rand(w,h)\nb = torch.rand(h,w)\na_np = a.numpy()\nb_np = b.numpy()\nif is_cuda:\n a_cu = a.cuda()\n b_cu = b.cuda()\n\nallocation = time.time()\nprint(\"Allocation \", allocation - start)\n\nc = a.mm(b)\nth_blas = time.time()\nprint(\"Torch Blas \", th_blas - allocation)\n\nc = a_np.dot(b_np)\nnp_blas = time.time()\nprint(\"Numpy Blas \", np_blas - th_blas)\n\nif is_cuda:\n c = a_cu.mm(b_cu)\n cu_blas = time.time()\n print(\"Torch cuBlas \", cu_blas - np_blas)\n\nprint(\"Final\", time.time() - start)\n<ECODE> edit: For comparison here are the results of the same script on Lua Torch on the last machine from above: <SCODE>Allocation: \t4.426\nTorch Blas: \t2.777\nTorch cuBlas: \t0.097\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "florin" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Did that, it didn’t help. Is the variable relevant for runtime only? Or I should also try to recompile torch with it exported? MKL seems properly configured: <SCODE>>> mkl.get_max_threads()\n>> 20\n<ECODE> Can someone else run the script above and report the numbers?", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "I’m seeing similar behavior. I ran your script from both PyTorch built from source and the Conda installation: built from source: <SCODE>Allocation 6.476427316665649\nTorch Blas 4.414772272109985\nNumpy Blas 2.665677547454834\nTorch cuBlas 0.14812421798706055\nFinal 13.705262184143066\n<ECODE> conda: <SCODE>Allocation 5.521166086196899\nTorch Blas 39.35049605369568\nNumpy Blas 39.40145969390869\nFinal 84.42150139808655\n<ECODE> It looks like something is wrong with Conda.", "isAccepted": false, "likes": 1, "poster": "colesbury" }, { "contents": "Thanks for taking the time to check this. Also for the tip on benchmarking cuda. I’ll try to build it again altghough I did this several times with same result.", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "Ok, MKL is found. We’ll have to look into it. 
Thanks for the report.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Let me know if there’s anything else I can look into, especially if you can’t reproduce this situation.", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "What fixed things for me was adding “iomp5” to FindMKL.cmake: <SCODE>diff --git a/torch/lib/TH/cmake/FindMKL.cmake b/torch/lib/TH/cmake/FindMKL.cmake\nindex e68ae6a..7c9325a 100644\n--- a/torch/lib/TH/cmake/FindMKL.cmake\n+++ b/torch/lib/TH/cmake/FindMKL.cmake\n@@ -50,7 +50,7 @@ ENDIF (\"${SIZE_OF_VOIDP}\" EQUAL 8)\n IF(CMAKE_COMPILER_IS_GNUCC)\n SET(mklthreads \"mkl_gnu_thread\" \"mkl_intel_thread\")\n SET(mklifaces \"gf\" \"intel\")\n- SET(mklrtls)\n+ SET(mklrtls \"iomp5\")\n ELSE(CMAKE_COMPILER_IS_GNUCC)\n SET(mklthreads \"mkl_intel_thread\")\n SET(mklifaces \"intel\")\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "florin" } ]
false
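For anyone comparing BLAS numbers today, current PyTorch exposes enough introspection to confirm whether MKL is linked and how many threads CPU BLAS uses; a quick sanity check, assuming a reasonably recent release:

<SCODE>
import torch

print(torch.backends.mkl.is_available())  # True if this build links against MKL
print(torch.get_num_threads())            # intra-op thread count used for CPU BLAS
print(torch.__config__.show())            # full build configuration, including BLAS/LAPACK
<ECODE>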
RuntimeError: cuda runtime error (59)
null
[ { "contents": "I would greatly appreciate any pointers.", "isAccepted": false, "likes": null, "poster": "kundan2510" }, { "contents": "I had an issue with the embedding layer. I had an index which was greater than the vocab-size. Fixing that resolved the issue. I still don’t understand how the error shown is related to that.", "isAccepted": false, "likes": 3, "poster": "kundan2510" }, { "contents": "Because of the asynchronous nature of cuda, the assert might not point to a full correct stack trace pointing to where the assert was triggered from.", "isAccepted": false, "likes": 7, "poster": "smth" }, { "contents": "STORY TIME! I recently re-installed my NVIDIA drivers (someone in the lab broke the server trying to install them, and for the life of me I have no idea what went wrong) and managed to compile the latest PyTorch build from source (which previously had thrown errors related to CUDA, suggesting that PyTorch unsurprisingly doesn’t compile if you’re using old drivers). Yay! It turned out the issue was in the line above, where I call index_select on a different tensor using a variable that can vary widely–the issue was that I clamped my indices incorrectly: out = inp.index_select(2, torch.clamp(index, 0, inp.size()[-1])).", "isAccepted": false, "likes": 2, "poster": "ajbrock" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "Or is the tensor.index_select function just something that’s hard-baked into CUDA that PyTorch doesn’t actually wrap? I might be misunderstanding how it’s put together.", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Sorry, I was just trying to do that as an example of indexing outside the allowable array size. Here’s a better example: returns RuntimeError: out of range at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensor.c:379 Is it not possible to insert a value check in the index_select function to ensure that the elements of the second argument are all less than the size of the indexed tensor along that axis?", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "It’s impossible, because we’d have to execute max and min over the whole tensor, synchronize the CPU with the GPU, and execute the index_select kernel only if the values are correct. It would be too slow.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Got it, that makes sense.", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "This help me, thanks.", "isAccepted": false, "likes": null, "poster": "11129" }, { "contents": "same here, thanks for the hint… I was converting some code from torch to pytorch. Lua is 1 based index but Python is 0 based index. 
My dataset was created for lua hence it is off by one.", "isAccepted": false, "likes": null, "poster": "euwern" }, { "contents": "I had this error when have created the model with the wrong number of classes:", "isAccepted": false, "likes": null, "poster": "gleb" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "miladiouss" }, { "contents": "I faced with the problem that is very similar: ‘reduce failed to synchronize: device-side assert triggered’,and the problem occurred in the circumstance that I used the BCEloss.finally I found the cause that my output is a mat consists of the elements that is negative numbers,but the implement of BCEloss is about log(),which has no meaning in the negative area, thus I add a sigmoid to constrain the output between 0-1and my problem is solved.", "isAccepted": false, "likes": 1, "poster": "Fengzi" }, { "contents": "Thanks", "isAccepted": false, "likes": null, "poster": "jdenim" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "m0nster" }, { "contents": "I’m not sure if I’m missing the obvious, but how about subtracting 1 from the labels? Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "m0nster" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tom" } ]
false
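The recurring resolution in this thread is an out-of-range index (an embedding input, an index_select index, or a label at or above the number of classes) surfacing later as a device-side assert because CUDA is asynchronous. Two practical ways to get a readable error are to rerun with the environment variable CUDA_LAUNCH_BLOCKING=1, or to reproduce on CPU, where the bounds check raises immediately; a minimal sketch of the latter, in current PyTorch syntax:

<SCODE>
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)
idx = torch.tensor([3, 9, 10])   # 10 is out of range for a vocabulary of size 10
out = emb(idx)                   # raises a clear IndexError on CPU instead of cuda error (59)
<ECODE>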
Backward size mismatch error when adding the bias term
null
[ { "contents": "Hi, I am trying to implement a batch matrix multiplication like the first equation in this image. The weight and bias are defined as a parameter in the model. I am making a copy of the bias term to the entire batch. <SCODE>def batch_matmul_bias(seq, weight, bias, nonlinearity=''):\ns = None\nbias_dim = bias.size()\nfor i in range(seq.size(0)):\n _s = torch.mm(seq[i], weight) \n _s_bias = _s + bias.expand(bias_dim[0], _s.size()[0])\n print _s_bias.size()\n if(nonlinearity=='tanh'):\n _s_bias = torch.tanh(_s_bias)\n _s_bias = _s_bias.unsqueeze(0)\n if(s is None):\n s = _s_bias\n else:\n s = torch.cat((s,_s_bias),0)\nreturn s.squeeze()\n<ECODE> The forward pass works, but when doing the backward pass, I am getting a size mismatch error. <SCODE>RuntimeError: sizes do not match at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.7_1485448159614/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:216\n<ECODE> Can you help me fix it? Thank you.", "isAccepted": false, "likes": null, "poster": "Sandeep42" }, { "contents": "<SCODE>import torch.nn.functional as F\n\ndef batch_matmul_bias(seq, weight, bias, nonlinearity=''):\n s = F.linear(seq, weight, bias)\n if nonlinearity=='tanh':\n s = F.tanh(s)\n return s\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "I just looked at the API carefully, and it looks like Linear supports batch samples and no bias as well, this will save a lot of time. Thank you!", "isAccepted": false, "likes": null, "poster": "Sandeep42" }, { "contents": "Hi,", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "I managed to fix it. For future reference, I was sloppy and did not properly reshape the bias term. Doing a transpose of the bias term is the one I forgot. <SCODE>_s_bias = _s + bias.expand(bias_dim[0], _s.size()[0]).transpose(0,1)\n<ECODE> Thank you for the wonderful effort that you’ve put in here, debugging is a lot easier. BTW: Can you explain this?", "isAccepted": false, "likes": null, "poster": "Sandeep42" }, { "contents": "Is there some straightforward way to reshape while using add_module() ? ATM I have to resort to <SCODE>class RESHAP(nn.Module):\n\tdef __init__(self, nz):\n\t\tsuper(RESHAP, self).__init__()\n\t\tself.nz = nz\n\n\tdef forward(self, input):\n\t\treturn input.view(-1, self.nz, 1, 1)\n\n\tdef __repr__(self):\n\t\treturn self.__class__.__name__ + ' ()'\n<ECODE> main.add_module(‘initial.{0}.reshape’.format(nz/4), RESHAP(nz/4))", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I am looking forward to that. The fact that you can just interactively work with Tensors, and debug them is already a great thing. Thanks!", "isAccepted": false, "likes": null, "poster": "Sandeep42" } ]
false
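For reference, the F.linear replacement suggested above works directly on a (seq_len, batch, features) tensor because the weight is applied over the last dimension and the bias broadcasts, which is what made the manual expand/transpose unnecessary; a small sketch with made-up sizes:

<SCODE>
import torch
import torch.nn.functional as F

seq = torch.randn(12, 8, 50)    # (seq_len, batch, in_features)
weight = torch.randn(20, 50)    # (out_features, in_features)
bias = torch.randn(20)

s = torch.tanh(F.linear(seq, weight, bias))   # (12, 8, 20), bias broadcast over seq and batch
<ECODE>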
Gradient for input when evaluating
null
[ { "contents": "Hi, I want to use a pretrained model to supervise another model, which need to set the batch normalization into eval mode. However, with lstm in my pretrained model, it arises follow message: <SCODE>Traceback (most recent call last):\n File \"main.py\", line 153, in val\n cost.backward()\n File \"/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/variable.py\", line 145, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/function.py\", line 208, in _do_backward\n result = super(NestedIOFunction, self)._do_backward(gradients, retain_variables)\n File \"/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/function.py\", line 216, in backward\n result = self.backward_extended(*nested_gradients)\n File \"/home/jrmei/.local/lib/python2.7/site-packages/torch/nn/_functions/rnn.py\", line 199, in backward_extended\n self._reserve_clone = self.reserve.clone()\nAttributeError: 'CudnnRNN' object has no attribute 'reserve'\n<ECODE> Do you have any suggestions? Thanks very much.", "isAccepted": false, "likes": null, "poster": "meijieru" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I see the bug. I’ll fix it today.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nrnn = nn.Sequential(nn.LSTM(10, 10, 2, bidirectional=True))\nrnn = rnn.cuda()\nrnn.eval()\n\ni = torch.Tensor(2, 1, 10).cuda()\ni = Variable(i, requires_grad=True)\n\no, _ = rnn(i)\no = o.mean()\no.backward(torch.ones(1).cuda())\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "meijieru" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "meijieru" }, { "contents": "It will be today. I will be building binaries late today.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks very much. (~~~~~~~~~~~~~~~~~~)", "isAccepted": false, "likes": null, "poster": "meijieru" } ]
false
Bayesian computation in PyTorch
null
[ { "contents": "Hi all, Just discover PyTorch yesterday, the dynamic graph idea is simply amazing! I am wondering if anybody is (or plans to) developing a Bayesian Computation package in PyTorch? Something like PyMC3 (theano) or Edward (tensorflow). I think the dynamic nature of PyTorch would be perfect for dirichlet process or mixture model, and Sequential Monte Carlo etc.", "isAccepted": false, "likes": 3, "poster": "junpenglao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "implement a selected number of variants of inference algorithms such as variational and MCMC inference (both traditional and scalable versions) separately from anything neural-network related make such framework easily applicable to the existing nn module implement basic functionalities for common statistical distributions\nI will post an update when I have something to show.", "isAccepted": false, "likes": 3, "poster": "stepelu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "junpenglao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "stepelu" }, { "contents": "I see, thanks for the info - better implementing them from scratch then.", "isAccepted": false, "likes": null, "poster": "junpenglao" }, { "contents": "Just bumping this old thread to see if anyone’s working on variational inference using PyTorch?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "Wow, Noah Goodman sold out? Never thought that would happen? Guess every man has his price! Now that you mention Uber - yes I remember they’ve been working on Bayesian Optimization for their massive routing problems a long time. I see they signed up Zoubin Ghahramani as head scientist too. I think principled Bayesian computation, overcomes many of the deficiencies of deep learning, and vice versa. Check out Shakir’s NIPs 2016 slides", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "please post here/let me know, when Uber or anyone else make public some sort of Black-Box variational inference engine public for PyTorch. Tensorflow is driving me nuts - once you’ve used PyTorch it’s painful to go back to TF! For example, the main KL inference routines, are well written and there’s not much TF dependency, see", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "yinhao" }, { "contents": "thank you very much - I haven’t seen this library before, it looks very up to date and useful. If you are working with Gaussian Processes, another very useful library is GPflow, So it seems that there are now three variational inference libraries built upon Tensorflow by three different research groups, (Blei Lab, Tsinghua Machine Learning Group, and various contributors to GPflow)? Kind regard, Ajay", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "I started working on different topics shortly after sending my last message so I didn’t make much progress yet. Now, however, I am back on a project which involves generative models and inference so I expect I’ll have more time to be working on this. Before proceeding there are a few preliminary points I’d like to discuss: What is the plan for stochastic graphs? For unbiased gradients, pathwise-type estimators come for free. 
Classes for statistical distributions? Separate gradient estimators? Assumptions on data shapes? Basically I’m looking for the PyTorch’s core team comments / design suggestions to the points I mentioned above, and on the ones I failed to consider!", "isAccepted": false, "likes": 2, "poster": "stepelu" }, { "contents": "I am interested in contributing. I’m currently looking for a good package to test VI algorithms for big models and pytorch’s dynamic graph would make it a lot easier than working in theano or tensorflow.", "isAccepted": false, "likes": 1, "poster": "joh4n" }, { "contents": "I’ll implement the Gamma, Beta and Dirichlet distributions too next week, sampling aside that would need a C/CUDA implementation (and it’s not trivial to do CUDA-efficiently and in a numerically stable way). I am also working on MCMC: Metropolis-RW, Langevin-RW, HMC, NUTS. Contributions are welcome, for instance: not yet implemented statistical distributions (let me know if you plan to work on any of the above) add missing kl-divergence implementations add mean and variance methods", "isAccepted": false, "likes": 2, "poster": "stepelu" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "emilemathieu" }, { "contents": "Any advances? Highly interested too.", "isAccepted": false, "likes": 1, "poster": "Andres_Masegosa" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "ngoodman" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" } ]
false
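As a present-day footnote to this thread: many of the pieces discussed here (distribution classes with reparameterized sampling, analytic KL divergences) later landed in torch.distributions, and the Uber work mentioned above became the Pyro probabilistic programming library. A minimal sketch of the built-in API:

<SCODE>
import torch
from torch.distributions import Normal, kl_divergence

q = Normal(torch.zeros(3), torch.ones(3))
p = Normal(torch.ones(3), 2 * torch.ones(3))

z = q.rsample()               # reparameterized ("pathwise") sample, differentiable
print(q.log_prob(z))          # per-dimension log density
print(kl_divergence(q, p))    # analytic KL, per dimension
<ECODE>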
Is it possible to save a model in Lua “.t7” format to load from Lua Torch?
null
[ { "contents": "Subj, is that possible in a some way?", "isAccepted": false, "likes": null, "poster": "WildChlamydia" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Sorry, I don’t understand how it will helps to serialize to LuaTorch format", "isAccepted": false, "likes": null, "poster": "WildChlamydia" }, { "contents": "We don’t support that at the moment. PyTorch models are much more flexible than those in Lua, so it’s easier to go the other way around. You could try recoding your model and dumping the weights to HDF5 format.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "WildChlamydia" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
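Following the HDF5 suggestion above, a minimal sketch of dumping a model's weights so they can be read from Lua Torch with its hdf5 package; it assumes h5py is installed, and the Lua-side network still has to be rebuilt by hand before loading the arrays:

<SCODE>
import h5py
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # stand-in model

with h5py.File('weights.h5', 'w') as f:
    for name, tensor in model.state_dict().items():
        f.create_dataset(name, data=tensor.cpu().numpy())
<ECODE>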
Kullback-Leibler Divergence loss function giving negative values
null
[ { "contents": "Hi! Still playing with PyTorch and this time I was trying to make a neural network work with Kullback-Leibler divergence. As long as I have one-hot targets, I think that the results of it should be identical to the results of a neural network trained with the cross-entropy loss. For completeness, I am giving the entire code for the neural net (which is the one used for the tutorial): <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2,2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16*5*5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 16*5*5)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n x = F.softmax(x)\n return x\n\nnet = Net()\nnet = net.cuda()\n\ntry:\n del net \n net = Net()\n net = net.cuda() \nexcept NameError:\n net = Net()\n net = net.cuda()\n<ECODE> The only change here, is that in the end, I apply softmax (KL divergence needs the data to be probabilities, and softmax achieves exactly that). Then, I do the training: <SCODE>criterion = nn.KLDivLoss() # use Kullback-Leibler divergence loss\noptimizer = optim.Adam(net.parameters(), lr=3e-4)\nnumber_of_classes = 10\n\nfor epoch in range(5): # loop over the dataset multiple times\n \n running_loss = 0.0\n for i, data in enumerate(trainloader, 0):\n # get the inputs\n inputs, labels = data \n labels_one_hot = convert_labels_to_one_hot(labels, number_of_classes) \n # wrap them in Variable\n inputs, labels = Variable(inputs).cuda(), Variable(labels_one_hot).cuda()\n optimizer.zero_grad()\n \n # forward + backward + optimize\n outputs = net(inputs)\n loss = criterion(outputs, labels)\n loss.backward() \n optimizer.step()\n \n # print statistics\n running_loss += loss.data[0]\n if i % 200 == 199: # print every 200 mini-batches\n print('[%d, %5d] loss: %.3f' % (epoch+1, i+1, running_loss / 200))\n running_loss = 0.0\nprint('Finished Training')\n<ECODE> The only change in this part is that I convert labels to one hot labels. I do that with the following function: <SCODE>def convert_labels_to_one_hot(labels, number_of_classes):\n number_of_observations = labels.size()[0]\n labels_one_hot = torch.zeros(number_of_observations, number_of_classes)\n for i in xrange(number_of_observations):\n label_value = labels[i]\n labels_one_hot[i, label_value] = 1.0\n return labels_one_hot \n<ECODE> Anyway, there is no backprop to this, so this shouldn’t cause problems. In addition, each row of this matrix contains a single 1, with all the other elements being 0, so it is a valid probability. Now, the weird thing is that the loss function is negative. That just shouldn’t happen, considering that KL divergence should always be a nonnegative number. For 5 epochs, the results of the loss function are: <SCODE>[1, 200] loss: -0.019\n[2, 200] loss: -0.033\n[3, 200] loss: -0.036\n[4, 200] loss: -0.038\n[5, 200] loss: -0.040\n<ECODE> Anyone had similar problems in the past? Thanks in advance!", "isAccepted": false, "likes": 2, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "Nope. Labels are an one hot vector, with 1 for the correct label, and 0 on all the other members (this is the very simple case, in order to see if it performs the same way as with cross entropy loss). 
Why do you think that labels should be log-probs? KL(P||Q) requires just P and Q to be valid probability distributions, nothing more. From the documentation of pytorch: <SCODE>KL divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.\nAs with NLLLoss, the input given is expected to contain log-probabilities, however unlike ClassNLLLoss, input is not restricted to a 2D Tensor, because the criterion is applied element-wise.\nThis criterion expects a target Tensor of the same size as the input Tensor.\n<ECODE> Okay, on the other side the outputs of the net should be log-probs, and that is not achieved by softmax, but by log-softmax. Changing the line of code to: x = F.log_softmax(x) seems to make the loss function positive. Now need to do some testing.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Yep, and the results of this cost function are very similar to that of cross entropy. Thanks! On a curios note, why outputs should be log-probabilities? Is that just for numerical reasons or something deeper?", "isAccepted": false, "likes": 4, "poster": "Ismail_Elezi" }, { "contents": "I think (but I am not sure, just trying to understand) the reason is the following: If you look at the code beside (in C): sum += *target_data > 0 ? *target_data * (log(*target_data) - *input_data) : 0;", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "<SCODE> rand_data = torch.randn(1,1000)\n\n a = Variable(rand_data)\n b = Variable(rand_data)\n\n a_lsm = F.log_softmax(a)\n b_sm = F.softmax(b)\n \n criterion = nn.KLDivLoss()\n\n loss = criterion(a_lsm,b_sm)\n\n print(loss)\n<ECODE> But when I run it a few times, it gives me very small numbers as outputs (both positive and negative). Can someone tell me if I am making a mistake here?", "isAccepted": false, "likes": null, "poster": "ani0075" }, { "contents": "<SCODE>r = torch.randn(1, 1000).float()\na = r.clone()\nb = r.clone()\n\nerr1 = a - b\nprint torch.sum(err1)\n>> 0.0\n\nerr2 = torch.log(F.softmax(a)) - F.log_softmax(b)\nprint torch.sum(err2)\n>> 1.0e-05*3.5763<ECODE>", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "what if for each sample , the loss can be either positive or negative, how do I sum the loss over mini-batces and do backpropation?", "isAccepted": false, "likes": null, "poster": "zack.zcy" }, { "contents": "KL cannot be negative. In my case, I had a bug (solved in the first post).", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ani0075" }, { "contents": "Why cannot KL-div become negative ? 
If I am not making a mistake, the formula is: kl = prob_p (log(prob_p/prob_q); Since we are not sure if (prob_p/prob_q) is greater or smaller than 1, the kl-div can be both positive and negative depending on the input of prob_p and prob_q.", "isAccepted": false, "likes": null, "poster": "coincheung" }, { "contents": "The important point to note here is P and Q are probability distributions, so even though the value for a particular point in the sample space (discrete-case) is negative, the summation over all points in the sample space must be non-negative.", "isAccepted": false, "likes": null, "poster": "ani0075" }, { "contents": "RuntimeError: bool value of Tensor with more than one value is ambiguous MSECMBSS=torch.nn.KLDivLoss(DATA1, Data2)", "isAccepted": false, "likes": null, "poster": "saba" }, { "contents": "The input shapes should work, as seen in this code snippet: <SCODE>a = torch.log_softmax(torch.randn(64, 81), dim=1)\nb = torch.softmax(torch.randn(64, 81), dim=1)\ncriterion = nn.KLDivLoss()\n\nloss = criterion(a, b)\n<ECODE> Could you post an executable code snippet to reproduce this issue?", "isAccepted": false, "likes": 3, "poster": "ptrblck" }, { "contents": "Hi Ptrblck, Would you please help me with that?", "isAccepted": false, "likes": null, "poster": "saba" }, { "contents": "<SCODE> loss = nn.KLDivLoss(reduction='batchmean').cuda()\n loss_value = loss(softmax_pred_S.dim,softmax_pred_T)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Punitha_Valli" }, { "contents": "<SCODE>softmax_pred_S = nn.LogSoftmax(preds_S[0])\n<ECODE> <SCODE>loss_value = loss(softmax_pred_S.dim,softmax_pred_T)\n<ECODE> which is also wrong, since a tensor with log probabilities would be expected, not a dimension value.", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "Unsure if this is what’s causing your problem, but it’s worth keeping this technical caveat in mind, since it can arise in some applications.", "isAccepted": false, "likes": null, "poster": "Roj" } ]
false
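Summing up the fix in this thread: nn.KLDivLoss expects its input as log-probabilities (log_softmax) and its target as probabilities (softmax), and only then is the reported loss non-negative up to floating-point error. A sketch, where reduction='batchmean' is the newer argument that matches the mathematical definition of the divergence:

<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(64, 10)                      # raw model outputs
target = F.softmax(torch.randn(64, 10), dim=1)    # a valid probability distribution

criterion = nn.KLDivLoss(reduction='batchmean')
loss = criterion(F.log_softmax(logits, dim=1), target)   # >= 0 up to numerical error
<ECODE>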
How to modify the final FC layer of a torchvision model
null
[ { "contents": "Hi, everyone. I want to use the VGG19 in my own dataset, which has 8 classes.So I want to change the output of the last fc layer to 8. So what should I do to change the last fc layer to fit it. Thank you very much!", "isAccepted": false, "likes": 3, "poster": "zhongtao93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "panovr" }, { "contents": "Something like: <SCODE>model = torchvision.models.vgg19(pretrained=True)\nfor param in model.parameters():\n param.requires_grad = False\n # Replace the last fully-connected layer\n # Parameters of newly constructed modules have requires_grad=True by default\nmodel.fc = nn.Linear(512, 8) # assuming that the fc7 layer has 512 neurons, otherwise change it \nmodel.cuda()<ECODE>", "isAccepted": false, "likes": 5, "poster": "Ismail_Elezi" }, { "contents": "Thank you, I’ll try it!", "isAccepted": false, "likes": null, "poster": "zhongtao93" }, { "contents": "Don’t forget to retrain the last layer though. At the moment the weights of it have just random numbers, so you must retrain it. You can do the retraining the same way as normal, the only change is that now the weights of the other layers won’t change because I set the requires_grad to False. If you have enough data to do a full training, then simply remove the line which does that, but it will surely take much longer to train.", "isAccepted": false, "likes": 2, "poster": "Ismail_Elezi" }, { "contents": "<SCODE>---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n<ipython-input-9-cbf958b7f4be> in <module>()\n 3 import torch.nn as nn\n 4 import torch.nn.functional as F\n----> 5 model = torchvision.models.vgg19(pretrained=True)\n 6 for param in model.parameters():\n 7 param.requires_grad = False\n\nTypeError: vgg19() takes no arguments (1 given)<ECODE>", "isAccepted": false, "likes": null, "poster": "zhongtao93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Crazyai" }, { "contents": "Hmm, I didn’t try the code yesterday and just changed the line from a residual network. Now I tried it, and got the same error as you. It seems that you can get just the model, but not the weights of it. Which actually contradicts the documentation when it is written that you can give the pretrained argument.", "isAccepted": false, "likes": 1, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I uninstalled the current torchvision, and rebulid the newest version, and it worked. 
Thank you very much!", "isAccepted": false, "likes": null, "poster": "zhongtao93" }, { "contents": "My network initialized as below: <SCODE>#VGG19\nimport torchvision\nimport torch.nn as nn\nimport torch.nn.functional as F\nvgg19 = torchvision.models.vgg19(pretrained=True)\nfor param in vgg19.parameters():\n param.requires_grad = False\nrequires_grad=True by default\nvgg19.fc = nn.Linear(1000, 8) neurons, otherwise change it \nvgg19.cuda()\nimport torch.optim as optim\ncriterion = nn.CrossEntropyLoss() \noptimizer = optim.SGD(vgg19.fc.parameters(), lr=0.001, momentum=0.9)#lr 0.001\n<ECODE> <SCODE> inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())\n # zero the parameter gradients\n optimizer.zero_grad()\n # forward + backward + optimize\n outputs = vgg19(inputs)\n #print(type(outputs),type(inputs))\n loss = criterion(outputs, labels)\n #loss = F.nll_loss(outputs, labels)\n loss.backward() \n optimizer.step()\n step += 1\n # print statistics\n running_loss += loss.data[0]\n<ECODE> error RuntimeError: there are no graph nodes that require computing gradients", "isAccepted": false, "likes": 1, "poster": "zhongtao93" }, { "contents": "<SCODE> (classifier): Sequential (\n (0): Dropout (p = 0.5)\n (1): Linear (25088 -> 4096)\n (2): ReLU (inplace)\n (3): Dropout (p = 0.5)\n (4): Linear (4096 -> 4096)\n (5): ReLU (inplace)\n (6): Linear (4096 -> 100)\n )\n<ECODE> To replace the last linear layer, a temporary solution would be <SCODE>vgg19.classifier._modules['6'] = nn.Linear(4096, 8)\n<ECODE>", "isAccepted": false, "likes": 25, "poster": "Cysu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "zhongtao93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Cysu" }, { "contents": "thank you. what should I change if I want to add dropout in ‘fc’? 
the last classifier layer has been changed in terms of the number of classes and I want to add dropout before it.", "isAccepted": false, "likes": null, "poster": "micklexqg" }, { "contents": "There’s already a dropout layer before the final FC layer, the code is <SCODE> self.classifier = nn.Sequential(\n nn.Linear(512 * 7 * 7, 4096),\n nn.ReLU(True),\n nn.Dropout(),\n nn.Linear(4096, 4096),\n nn.ReLU(True),\n nn.Dropout(),\n nn.Linear(4096, num_classes),\n )\n<ECODE> you only need to replace the last 4096, num_classes to your own fc layer.", "isAccepted": false, "likes": null, "poster": "chenglu" }, { "contents": "i am trying to convert the classification network to regression netowork by replacing the last layer with number of outputs as one can you please suggest any solution ay help would be greatly appreciated i am new to pytorch", "isAccepted": false, "likes": null, "poster": "Praveen_raja_sekar" }, { "contents": "You can just do exactly this by setting the last layer as: <SCODE>nn.Linear(4096, 1)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "<SCODE>class FineTuneModel(nn.Module):\n def __init__(self, original_model, num_classes):\n super(FineTuneModel, self).__init__()\n # Everything except the last linear layer\n self.features = nn.Sequential(*list(original_model.children())[:-1])\n self.classifier = nn.Sequential(\n nn.Linear(512, 1)\n )\n self.modelName = 'LightCNN-29'\n # Freeze those weights\n for p in self.features.parameters():\n p.requires_grad = False\n\n\n def forward(self, x):\n f = self.features(x) \n f = f.view(f.size(0), -1)\n y = self.classifier(f)\n return y\n\nmodel = FineTuneModel(original_model, args.num_classes)\nprint(model)\n<ECODE> Output: <SCODE>FineTuneModel(\n (features): Sequential()\n (classifier): Sequential(\n (0): Linear(in_features=512, out_features=1, bias=True)\n )\n)\n<ECODE> original model is pretrained resnet model", "isAccepted": false, "likes": null, "poster": "Praveen_raja_sekar" }, { "contents": "<SCODE>class FineTuneModel(nn.Module):\n def __init__(self, original_model, num_classes):\n super(FineTuneModel, self).__init__()\n # Everything except the last linear layer\n self.features = nn.Sequential(*list(original_model.children())[:-1])\n self.classifier = nn.Sequential(\n nn.Linear(512, 1)\n )\n self.modelName = 'LightCNN-29'\n # Freeze those weights\n for p in self.features.parameters():\n p.requires_grad = False\n\n\n def forward(self, x):\n f = self.features(x) \n f = f.view(f.size(0), -1)\n y = self.classifier(f)\n return y\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Praveen_raja_sekar" } ]
false
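Putting the working pieces of this thread together: the last layer of torchvision's VGG lives in model.classifier, not model.fc, and the optimizer must be given the parameters of the newly created layer, which is what the "no graph nodes that require computing gradients" error above was about. A sketch, assuming a torchvision version that still accepts pretrained=True:

<SCODE>
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.vgg19(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

model.classifier[6] = nn.Linear(4096, 8)   # new layer, requires_grad=True by default

optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
<ECODE>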
Cuda runtime error (2): out of memory
null
[ { "contents": "I wrote some LSTM based code for language modeling: <SCODE> def forward(self, input, hidden):\n emb = self.encoder(input)\n h, c = hidden\n h.data.squeeze_(0)\n c.data.squeeze_(0)\n\n seq_len = input.size(0)\n batch_size = input.size(1)\n output_dim = h.size(1)\n\n output = [] \n for i in range(seq_len): \n h, c = self.rnncell(emb[i], (h, c))\n # self.hiddens: time * batch * nhid\n if i == 0:\n self.hiddens = h.unsqueeze(0)\n else:\n self.hiddens = torch.cat([self.hiddens, h.unsqueeze(0)])\n # h: batch * nhid\n #self.att = h.unsqueeze(0).expand_as(self.hiddens)\n\n self.hiddens = self.hiddens.view(-1, self.nhid)\n b = torch.mm(self.hiddens, self.U).view(-1, batch_size, 1)\n a = torch.mm(h, self.W).unsqueeze(0).expand_as(b)\n att = torch.tanh(a + b).view(-1, batch_size)\n att = self.softmax(att.t()).t()\n self.hiddens = self.hiddens.view(-1, batch_size, self.nhid)\n att = att.unsqueeze(2).expand_as(self.hiddens)\n output.append(torch.sum(att * self.hiddens, 0)) #hidden.data\n\n output = torch.cat(output)\n\n decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))\n decoded = self.logsoftmax(decoded)\n output = decoded.view(output.size(0), output.size(1), decoded.size(1)) \n return output, (h, c)\n<ECODE> And I got error in backward(): Any ideas why it might happen? The memory goes to 5800MB very quickly in the first 10 batches, and then it keeps running with this much memory occupied for another several hundred batches, and then it runs out of memory.", "isAccepted": false, "likes": 2, "poster": "ZeweiChu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "No, I don’t have to keep it. Is it a bad thing to keep unnecessary variables in the model?", "isAccepted": false, "likes": null, "poster": "ZeweiChu" }, { "contents": "if you keep Variables around, the corresponding graph that created these Variables is kept around. Hence the elevated memory usage…", "isAccepted": false, "likes": 4, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "The main part of my code looks like this. <SCODE> def repackage_variable(v, volatile=False):\n return [Variable(torch.from_numpy(h), volatile=volatile).unsqueeze(1) for h in v]\n\n for k in range(len(minbatches)):\n \n minbatch = minbatches[perm[k]]\n x_padded = utils.make_mask(minbatch)\n x_padded = repackage_variable(x_padded, False)\n x_padded = torch.cat(x_padded, 1)\n T = x_padded.size(0)\n B = x_padded.size(1)\n inp = x_padded[:T-1, :].long()\n target = x_padded[1:, :].long().view(-1, 1)\n if use_cuda:\n inp = inp.cuda()\n target = target.cuda() \n\n \n mask = (inp != 0).float().view(-1, 1)\n \n hidden = model.init_hidden(batch)\n model.zero_grad()\n #print(inp.size())\n output, hidden = model(inp, hidden)\n output = output.view(-1, n_vocab)\n \n loss = output.gather(1, target) * mask\n loss = -torch.sum(loss) / torch.sum(mask)\n loss.backward()\n \n\n optimizer.step()\n<ECODE> My question is, at each iteration, since all \"Variable\"s “inp” and “target” are overwritten, will the model state variables like “self.hiddens” also be overwritten? Does the old computation graph still exist in the next iteration? nvidia-smi shows that about 6G of memory is used, but I am only testing on batch size of 50, and the length should be at most 200, why would it take up so much memory? And the memory size increases among iterations from time to time, but it could stay the same for a while. 
Any clues what might be the reason?", "isAccepted": false, "likes": null, "poster": "ZeweiChu" }, { "contents": "Won’t self.hiddens be cleaned after backward?", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "So, the reference should be cleaned up after self.hiddens is overwritten by next forward? Is it correct?", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "Yes. It won’t be kept there indefinitely, but it still can postpone some frees and increase the overall memory usage.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Any progress on this one? I am facing a similar issue. I have implemented an LSTM and the memory remains constant for about 9000 iterations after which it runs out of memory. I am not keeping any references of the intermediate Variables. I am running this on a 12GB Titan X GPU on a shared server.", "isAccepted": false, "likes": null, "poster": "biswajitsc" }, { "contents": "Finally fixed it. There was problem in my code. I was unaware that x = y[a:b] is not a deep copy of y. I was modifying x, and in turn modifying y, and increasing the size of the data in every iteration. Using x = copy.deepcopy(y[a:b]) fixed it for me.", "isAccepted": false, "likes": 2, "poster": "biswajitsc" }, { "contents": "So did you figure out why your memory usage keeps increasing? I had the exact same question as you did. Thanks.", "isAccepted": false, "likes": null, "poster": "RangoHU" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hmishfaq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "John_Zhang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hmishfaq" } ]
false
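Two pitfalls from this thread are easy to show in isolation: keeping a graph-attached tensor around (for example as a module attribute) keeps the whole backward graph alive, and slicing returns a view rather than a copy, so in-place updates silently change the original tensor (the reason deepcopy/clone fixed the last report). A minimal sketch in current PyTorch terms:

<SCODE>
import torch

# 1) Caching graph-attached tensors retains the graph; detach what you keep.
x = torch.randn(3, requires_grad=True)
h = (x * 2).sum()
cached = h.detach()      # keeps only the value, lets the graph be freed

# 2) A slice is a view, not a copy.
y = torch.arange(6.)
view = y[2:4]
view += 1                # silently modifies y as well
safe = y[2:4].clone()    # independent copy
<ECODE>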
Understanding loss function gradients
null
[ { "contents": "I’m trying to understand the interpretation of gradInput tensors for simple criterions using backward hooks on the modules. Here are three modules (two criterions and a model): <SCODE>import torch\nimport torch.nn as nn\nimport torch.optim as onn\nimport torch.autograd as ann\n\n\nclass L1Loss(nn.Module):\n\n def __init__(self):\n super(L1Loss, self).__init__()\n\n def forward(self, input_var, target_var):\n '''\n L1 loss:\n |y - x|\n '''\n return (target_var - input_var).norm()\n\n\nclass CosineLoss(nn.Module):\n\n def __init__(self):\n super(CosineLoss, self).__init__()\n\n def forward(self, input_var, target_var):\n '''\n Cosine loss:\n 1.0 - (y.x / |y|*|x|)\n '''\n return 1.0 - input_var.dot(target_var) / (input_var.norm()*target_var.norm())\n\n\nclass Model(nn.Module):\n\n def __init__(self, mode=None):\n super(Model, self).__init__()\n def hook_func(module, grad_i, grad_o):\n print 'Grad input:', grad_i\n self.input_encoder = nn.Linear(20, 10)\n self.target_encoder = nn.Linear(20, 10)\n if mode == 'cos':\n self.criterion = CosineLoss()\n elif mode == 'l1':\n self.criterion = L1Loss()\n self.criterion.register_backward_hook(hook_func)\n self.optimizer = onn.Adam(self.parameters(), lr=1e-5)\n\n def forward(self, input_var_1, input_var_2):\n return self.input_encoder(input_var_1), self.target_encoder(input_var_2)\n\n def train(self, input_np, target_np):\n input_var = ann.Variable(input_np)\n target_var = ann.Variable(target_np)\n input_encode, target_encode = self.forward(input_var, target_var)\n loss = self.criterion(input_encode, target_encode)\n loss.backward()\n self.optimizer.step()\n return loss.data[0]\n<ECODE> <SCODE>mod = Model(mode='l1')\nfor i in range(5):\n inp = torch.rand(1, 20)\n tar = torch.rand(1, 20)\n loss_val = mod.train(inp, tar)\n print 'Iteration\\t{0}\\tLoss\\t{1}'.format(i, loss_val)\n<ECODE> I see grad input is a single tensor of shape (1, 10): <SCODE>Grad input: (Variable containing:\n-0.2466 -0.0966 0.0659 -0.1954 0.3573 -0.5367 0.5818 0.0758 0.2598 -0.2447\n[torch.FloatTensor of size 1x10]\n,)\n<ECODE> I was expecting two tensors of that shape, one for each input. On the other hand, if I run with the cosine loss: <SCODE>mod = Model(mode='cos')\nfor i in range(5):\n inp = torch.rand(1, 20)\n tar = torch.rand(1, 20)\n loss_val = mod.train(inp, tar)\n print 'Iteration\\t{0}\\tLoss\\t{1}'.format(i, loss_val)\n<ECODE> I find grad input is a single scalar value: <SCODE>Grad input: (Variable containing:\n-1\n[torch.FloatTensor of size 1]\n,)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "cjmcmurtrie" }, { "contents": "We’ll be rolling out some serious autograd refactors in the upcoming week or two, and it should be fixed afterwards.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for the prompt reply Adam. Is there any current workaround for this? My aim particularly concerns the cosine loss. I was experimenting with multiplying the gradients by tensors of feature weights (in other words, some weight-adjusted learning rates for the output features of the encoders).", "isAccepted": false, "likes": null, "poster": "cjmcmurtrie" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks, will give it a try! Looking forward to the refactors. Pytorch is simply awesome - love what you guys have done.", "isAccepted": false, "likes": 2, "poster": "cjmcmurtrie" } ]
false
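For the use case mentioned above (re-weighting the gradient of each output feature), a tensor hook is usually a simpler tool than a backward hook on the criterion module; the hook receives the incoming gradient and may return a modified one. This is only a sketch with a made-up feature_weights vector, not the poster's original setup.
<SCODE>import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(20, 10)
x = torch.randn(1, 20)
target = torch.randn(1, 10)
feature_weights = torch.linspace(0.1, 1.0, 10)  # hypothetical per-feature weights

out = encoder(x)
# the hook sees the gradient flowing into `out` and can rescale it per feature
out.register_hook(lambda grad: grad * feature_weights)

loss = 1.0 - F.cosine_similarity(out, target).mean()
loss.backward()
print(encoder.weight.grad.shape)  # torch.Size([10, 20])
<ECODE>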
Questions about ImageFolder
null
[ { "contents": "Which data format of the picture that ImageFolder can read? (pickle, JPG or PNG)", "isAccepted": false, "likes": 3, "poster": "yichuan9527" }, { "contents": "<SCODE>IMG_EXTENSIONS = [\n '.jpg', '.JPG', '.jpeg', '.JPEG',\n '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP',\n]\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "<SCODE>def find_classes(dir):\nclasses = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]\nclasses.sort()\nclass_to_idx = {classes[i]: i for i in range(len(classes))}\nreturn classes, class_to_idx<ECODE>", "isAccepted": false, "likes": null, "poster": "xiahouzuoxin" }, { "contents": "The labels are the sub-folders from the main directory. Say you have mnist images separated by digit like this: <SCODE>main_dir/\n 0/\n img1_digit0.jpg\n img2_digit0.jpg\n 1/\n img3_digit1.jpg\n ....\n ....\n 9/\n ...\n<ECODE> <SCODE>from torchvision.datasets import ImageFolder\nfrom torchvision.transforms import ToTensor\ndata = ImageFolder(root='main_dir', transform=ToTensor())\n<ECODE> Now the unique set of class labels is found easily, but this isn’t the class label for each individual image. <SCODE>print(data.classes) # ['0','1',..'9']\n<ECODE> And the images can be accessed by an integer index: <SCODE>x,y = data[0] # x is the first image as PIL, y is that images class label\n<ECODE> You can easily iterate through these images: <SCODE>for i in range(len(data)):\n x,y = data[i]\n<ECODE> <SCODE>from torch.utils.data import DataLoader\nloader = DataLoader(data)\nfor x, y in loader:\n print(x) # image\n print(y) # image label\n<ECODE> <SCODE>loader = DataLoader(data, shuffle=True)\n<ECODE> <SCODE>from torch.utils.data import RandomSampler\nsampler = RandomSampler(data)\nloader = DataLoader(data, sampler=sampler)\n<ECODE> <SCODE>import numpy as np\ndef my_numpy_loader(filename):\n return np.load(filename)\n\ndata = ImageFolder(root='path/to/numpy/maindir/', loader=my_numpy_loader, transform=ToTensor())\n<ECODE> Example: <SCODE>import numpy as np\ndef my_numpy_loader(filename):\n return np.load(filename)\nfrom torchvision.transforms import Compose, ToPILImage, ToTensor, ...\nmy_transform = Compose([ToPILImage(), ... typical transforms .., ToTensor()])\ndata = ImageFolder(root='path/to/numpy/maindir/', \n loader=my_numpy_loader, transform=my_transform)\n<ECODE> Here, you convert numpy -> pil image -> do some transforms -> convert back to torch tensor…", "isAccepted": false, "likes": 24, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ncullen93" }, { "contents": "Dose pytorch have any functions to read the images from pickle file?", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I noticed that ImageFolder can process following structure: <SCODE>root/dog/xxx.png\nroot/dog/xxy.png\n\nroot/cat/123.png\nroot/cat/asd932_.png\n<ECODE> My question is, can it process a deeper /nested folder structure? e.g.: <SCODE>root/mammal/dog/xxx.png\nroot/mammal/cat/xxy.png\n\nroot/avian/pigeon/123.png\nroot/avian/crow/nsdf3.png\n<ECODE> If yes, would the classes be “mammal, avian”, or would they be “dog, cat, pigeon, crow”? 
Thanks!", "isAccepted": false, "likes": 2, "poster": "FuriouslyCurious" }, { "contents": "<SCODE>class TrainImageFolder(ImageFolder):\n\n def __getitem__(self, index):\n filename = self.imgs[index]# 'avian/crow/nsdf3.png'\n if filename.split('/')[0] =='avian': label = 0\n ### or \n # if filename.split('/')[1] =='dog': label = 0\n return super(TrainImageFolder, self).__getitem__(index)[0], label\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Stefan_Radonjic" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "I have a small dataset of only 1200 images. My idea was to to load dataset, perform lets say Random Crop on it, then load dataset again but this time perform ColorJitter and then merge those two datasets into one which will result in dataset of 2400 pictures. I wanted to create a dataset of atleast 7200 pictures ( which is 6 transformations each time i load dataset) . Do you by any chance have a better idea?", "isAccepted": false, "likes": null, "poster": "Stefan_Radonjic" }, { "contents": "<SCODE>class MyData(Dataset):\n def __init__(self):\n self.data = torch.randn(100)\n \n def __getitem__(self, index):\n return self.data[index]\n \n def __len__(self):\n return len(self.data)\n \ndata1 = MyData()\ndata_loader1 = DataLoader(data1, batch_size=1)\n\nfor batch_idx, data in enumerate(data_loader1):\n print(batch_idx) # prints to idx 99\n \ndata2 = MyData()\nconcat_data = ConcatDataset((data1, data2))\nconcat_loader = DataLoader(concat_data, batch_size=1)\n\nfor batch_idx, data in enumerate(concat_loader):\n print(batch_idx) # prints to idx 199\n \nprint(len(data_loader1)) # 100\nprint(len(concat_loader)) # 200\n<ECODE> I cannot see line 89. Would you please post the code snippet?", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Sumesh_Uploader" }, { "contents": "<SCODE>class MyDataset(Dataset):\n def __init__(self, dataset1, dataset2):\n self.dataset1 = dataset1\n self.dataset2 = dataset2\n \n def __getitem__(self, index):\n x1 = self.dataset1[index]\n x2 = self.dataset2[index]\n \n return x1, x2\n \n def __len__(self):\n return len(self.dataset1)\n\n\ndataset1 = TensorDataset(torch.randn(100, 1))\ndataset2 = TensorDataset(torch.randn(100, 1))\ndataset = MyDataset(dataset1, dataset2)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "Hi, Nick Is there way I can edit the IMG_EXTENSIONS so that the ImageFolder would be able to read more image with .gif and .oct-stream extension? Best", "isAccepted": false, "likes": null, "poster": "nimning" }, { "contents": "Then I suppose it’s that the class dog is encoded into [1, 0] as for the first category, And the cat goes to [0, 1], Am I right?", "isAccepted": false, "likes": null, "poster": "Alan_Wu" } ]
false
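Regarding the question about reading extra formats such as .gif: newer torchvision versions expose DatasetFolder, which takes an explicit extensions tuple and a loader, so the global IMG_EXTENSIONS list does not need to be edited. A sketch, assuming a recent torchvision and a hypothetical class-per-subfolder layout:
<SCODE>from PIL import Image
from torchvision import transforms
from torchvision.datasets import DatasetFolder

def pil_loader(path):
    # PIL opens .gif as well; convert to RGB for a consistent 3-channel tensor
    with open(path, 'rb') as f:
        return Image.open(f).convert('RGB')

data = DatasetFolder(
    root='path/to/maindir',               # hypothetical root directory
    loader=pil_loader,
    extensions=('.jpg', '.png', '.gif'),  # whatever extensions you need
    transform=transforms.ToTensor(),
)
<ECODE>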
Model parallelism in pytorch for large(r than 1 GPU) models?
null
[ { "contents": "Hi! I have a model that is too large to fit inside a single TITAN X (even with 1 batch size). I want to split it over several GPUs such that the memory cost is shared between GPUs. That is, place different parts of the same model on different GPUs and train it end-to-end. Questions:", "isAccepted": false, "likes": 4, "poster": "ajdroid" }, { "contents": "<SCODE>class MyModel(nn.Module):\n def __init__(self, split_gpus):\n self.large_submodule1 = ...\n self.large_submodule2 = ...\n\n self.split_gpus = split_gpus\n if split_gpus:\n self.large_submodule1.cuda(0)\n self.large_submodule1.cuda(1)\n\n def forward(self, x):\n x = self.large_submodule1(x)\n if split_gpus:\n x = x.cuda(1) # P2P GPU transfer\n return self.large_submodule2(x)\n<ECODE>", "isAccepted": false, "likes": 18, "poster": "apaszke" }, { "contents": "This was so easy! I love your work with PyTorch. Minimum fuss! Cheers!", "isAccepted": false, "likes": 4, "poster": "ajdroid" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "D-X-Y" }, { "contents": "<SCODE> def forward(self, x):\n \n x1= self.resnet_1(x[:,0:3,:,:]).cuda(0)\n x2= self.resnet_2(x[:,3:6,:,:]).cuda(1)\n\n flat = torch.cat([x1, x2],1).cuda(???)\n logit = self.fc(flat).cuda(???) \n return logit\n\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "Hengck" }, { "contents": "did you figure out how to do this?", "isAccepted": false, "likes": null, "poster": "isalirezag" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "dimtsi" }, { "contents": "Dear apaszke, I am trying to implement inter-GPU communiation by using pytorch+mpi+gpu. Following are the tested code, which is designed to make sure that process0 runs on GPU0 and process1 runs on GPU1. However, the code can not be run successfully. Do you know why? def run(rank, size): mpirun noticed that process rank 1 with PID 0 on node osherlab exited on signal 11 (Segmentation fault). Your help will be appreciated. Thank you.", "isAccepted": false, "likes": 1, "poster": "LeiGuan" }, { "contents": "Would this also work for one 1 gpu with two sequential steps somehow? If my model is too large to fit on one gpu can I somehow do the forward/backward pass sequtially where I only have one part in gpu memory and somehow cache the other part for the backward pass later. Somehow like this: <SCODE>x = submodule1(x)\n#somehow unload intermediate results of submodule1 from gpu here and cache for later backward pass \n#(and then load on gpu again when needed in backward pass of submodule1)\nx = submodule2(x)\n<ECODE> I could imagine how this works but then I don’t know how I would pass the gradients that come from submodule2 back to submodule1 and initiate the backward pass on submodule1.", "isAccepted": false, "likes": null, "poster": "marcel1991" }, { "contents": "Thx", "isAccepted": false, "likes": null, "poster": "RaayChen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shinx" } ]
false
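On the open question above about where torch.cat and the final fc should run: the only requirement is that the tensors being concatenated and the layer consuming them live on the same device, so one branch's output can simply be moved across first. A minimal sketch with made-up layer sizes, assuming at least two visible GPUs:
<SCODE>import torch
import torch.nn as nn

class TwoStreamModel(nn.Module):
    def __init__(self):
        super(TwoStreamModel, self).__init__()
        self.branch_a = nn.Linear(3, 8).to('cuda:0')
        self.branch_b = nn.Linear(3, 8).to('cuda:1')
        self.fc = nn.Linear(16, 2).to('cuda:0')  # keep the head on one device

    def forward(self, x):
        x1 = self.branch_a(x[:, 0:3].to('cuda:0'))
        x2 = self.branch_b(x[:, 3:6].to('cuda:1'))
        # move x2 back so both halves (and self.fc) sit on cuda:0 before cat
        flat = torch.cat([x1, x2.to('cuda:0')], dim=1)
        return self.fc(flat)
<ECODE>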
Questions about training DCGAN
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "I am not totally sure, because I am still learning PyTorch, so take my answer with a pinch of salt. input and label types are Variable. They are not Tensors, instead they are objects that encapsulate the Tensors and all their history (the operations that they have done). You neither can (you get a NameError if you try to do so) - nor want - to resize the Variable objects. What you want to do is to resize the Tensors within the variable and you can do that by using precisely the given code (data function).", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Why is the input tensor resized in every iteration of the loop? Is there a computational overhead in doing this?", "isAccepted": false, "likes": null, "poster": "zip" } ]
false
SGD Nesterov for Optim
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "<SCODE>from .optimizer import Optimizer, required\n\n\nclass SGD(Optimizer):\n \"\"\"Implements stochastic gradient descent (optionally with momentum).\n Args:\n params (iterable): iterable of parameters to optimize or dicts defining\n parameter groups\n lr (float): learning rate\n momentum (float, optional): momentum factor (default: 0)\n weight_decay (float, optional): weight decay (L2 penalty) (default: 0)\n dampening (float, optional): dampening for momentum (default: 0)\n nesterov(bool, optional): enables Nesterov momentum (default: False)\n \n Example:\n >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\n >>> optimizer.zero_grad()\n >>> loss_fn(model(input), target).backward()\n >>> optimizer.step()\n \"\"\"\n\n def __init__(self, params, lr=required, momentum=0, dampening=0,\n weight_decay=0, nesterov=False):\n defaults = dict(lr=lr, momentum=momentum, dampening=dampening,\n weight_decay=weight_decay, nesterov=nesterov)\n if nesterov and (momentum <= 0 and dampening != 0):\n raise ValueError(\"Nesterov momentum requires a momentum and zero dampening\")\n super(SGD, self).__init__(params, defaults)\n\n def step(self, closure=None):\n \"\"\"Performs a single optimization step.\n Arguments:\n closure (callable, optional): A closure that reevaluates the model\n and returns the loss.\n \"\"\"\n loss = None\n if closure is not None:\n loss = closure()\n\n for group in self.param_groups:\n weight_decay = group['weight_decay']\n momentum = group['momentum']\n dampening = group['dampening']\n nesterov = group['nesterov']\n\n for p in group['params']:\n d_p = p.grad.data\n if weight_decay != 0:\n d_p.add_(weight_decay, p.data)\n if momentum != 0:\n param_state = self.state[p]\n if 'momentum_buffer' not in param_state:\n param_state['momentum_buffer'] = d_p.clone()\n else:\n buf = param_state['momentum_buffer']\n buf.mul_(momentum).add_(1 - dampening, d_p)\n if nesterov:\n d_p.add_(momentum, buf)\n else:\n d_p = buf\n\n p.data.add_(-group['lr'], d_p)\n\n return loss\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "Yes, it’s going to be merged into master soon.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Royi" }, { "contents": "Accelerated gradient descent is not a momentum method, but it has been shown that it is closely related and the update rule can be rewritten as a momentum-like update rule.", "isAccepted": false, "likes": 1, "poster": "Ismail_Elezi" } ]
false
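Once the flag is available, enabling Nesterov momentum is a one-liner in the constructor (it requires momentum > 0 and dampening == 0); a throwaway usage sketch:
<SCODE>import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, nesterov=True)

out = model(torch.randn(4, 10))
loss = out.pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
<ECODE>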
The weird test dataset performance of pytorch
null
[ { "contents": "Hi, <SCODE>class MyDeepNeural(nn.Module):\ndef __init__(self, p_keep_conv):\n super(MyDeepNeural, self).__init__()\n\n self.conv = nn.Sequential()\n self.conv.add_module('conv1', nn.Conv2d(1, 32, kernel_size=3, padding=1))\n self.conv.add_module('relu1', nn.ReLU())\n self.conv.add_module('pool1', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop1', nn.Dropout(1 - p_keep_conv))\n self.conv.add_module('conv2', nn.Conv2d(32, 64, kernel_size=3, padding=1))\n self.conv.add_module('relu2', nn.ReLU())\n self.conv.add_module('pool2', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop2', nn.Dropout(1 - p_keep_conv))\n self.conv.add_module('conv3', nn.Conv2d(64, 128, kernel_size=3, padding=1))\n self.conv.add_module('relu3', nn.ReLU())\n self.conv.add_module('pool3', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop3', nn.Dropout(1 - p_keep_conv))\n\n self.fc = nn.Sequential()\n self.fc.add_module('fc1',nn.Linear(128*9, 625))\n self.fc.add_module('relu4',nn.ReLU())\n self.fc.add_module('fc2',nn.Linear(625,10))\n self.fc.add_module('softmax',nn.Softmax())<ECODE>", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "Apparently that’s the model dynamics on your dataset. It happens.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you very much. It solved my problem well.", "isAccepted": false, "likes": null, "poster": "waitwaitforget" } ]
false
Connection timeout for pytorch download?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "temerick" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
A gist of affine transforms in pytorch
vision
[ { "contents": "Hi all, Right now it’s not maximally efficient because i cast to and from numpy… Eventually I’ll implement this all in torch and then it can be performed efficiently on the gpu. I’ve spot-checked these transforms so I’m confident they aren’t blatantly wrong, but if you come across a bug then add a comment in the gist. Short example of the easiest use case <SCODE>import torch\naffine_transform = Affine(rotation_range=30, translation_range=(0.2,0.2), zoom_range=(0.8,1.2))\nx = torch.ones(10,3,50,50)\ny = torch.ones(10,3,50,50)\ndata = TensorDataset(x, y, co_transform=affine_transform)\n\nx_sample, y_sample = data[0] # the transforms should be applied to both x and y\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "ncullen93" }, { "contents": "Nice! We’ll think how could that be integrated into torchvision. Note that numpy conversions are nearly free, because the array/tensor you get after the conversion shares the data with the original object.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I made something simpler: <SCODE>from PIL import Image\nfrom scipy import misc\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import ndimage\nfrom skimage.transform import warp, AffineTransform\n\nf = misc.face(gray=True)\nplt.hist(f.flatten())\n\nclass RandomAffineTransform(object):\n def __init__(self,\n scale_range,\n rotation_range,\n shear_range,\n translation_range\n ):\n self.scale_range = scale_range\n self.rotation_range = rotation_range\n self.shear_range = shear_range\n self.translation_range = translation_range\n\n def __call__(self, img):\n img_data = np.array(img)\n scale_x = np.random.uniform(*self.scale_range)\n scale_y = np.random.uniform(*self.scale_range)\n scale = (scale_x, scale_y)\n rotation = np.random.uniform(*self.rotation_range)\n shear = np.random.uniform(*self.shear_range)\n translation = (\n np.random.uniform(*self.translation_range),\n np.random.uniform(*self.translation_range)\n )\n af = AffineTransform(scale=scale, shear=shear, rotation=rotation, translation=translation)\n img_data1 = warp(img_data, af.inverse)\n return Image.fromarray(img_data1 * 255)\n<ECODE> I ended up adding some randomness into it: <SCODE>from PIL import Image\nfrom scipy import misc\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import ndimage\nfrom skimage.transform import warp, AffineTransform\n\n\nclass RandomAffineTransform(object):\n def __init__(self,\n scale_range,\n rotation_range,\n shear_range,\n translation_range\n ):\n self.scale_range = scale_range\n self.rotation_range = rotation_range\n self.shear_range = shear_range\n self.translation_range = translation_range\n\n def __call__(self, img):\n img_data = np.array(img)\n h, w, n_chan = img_data.shape\n scale_x = np.random.uniform(*self.scale_range)\n scale_y = np.random.uniform(*self.scale_range)\n scale = (scale_x, scale_y)\n rotation = np.random.uniform(*self.rotation_range)\n shear = np.random.uniform(*self.shear_range)\n translation = (\n np.random.uniform(*self.translation_range) * w,\n np.random.uniform(*self.translation_range) * h\n )\n af = AffineTransform(scale=scale, shear=shear, rotation=rotation, translation=translation)\n img_data1 = warp(img_data, af.inverse)\n img1 = Image.fromarray(np.uint8(img_data1 * 255))\n return img1\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "kindlychung" }, { "contents": "Cool! That’s very simple! 
The gist above is very old… If you see now the affine transforms in my torchsample package are now completely written in pytorch and thus can be included in nn.Module’s and/or run on gpu. Whatever works for you though!", "isAccepted": false, "likes": null, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "arogozhnikov" }, { "contents": "Is there a built in way to do it in PyTorch?", "isAccepted": false, "likes": 1, "poster": "Royi" }, { "contents": "imageRotate = lambda mI: mI.rotate((2 * imageRotAngle * np.random.rand(1)) - imageRotAngle) trainSetTransform = transforms.Compose([transforms.RandomCrop(28, padding = imageCropPad), transforms.Lambda(imageRotate), transforms.ToTensor()])", "isAccepted": false, "likes": 1, "poster": "Royi" }, { "contents": "<SCODE>transforms.Compose([\ntransforms.RandomSizedCrop(224),\ntransforms.RandomHorizontalFlip(),\ntransform.Affine(rotation_range=10,translation_range =0.1),\ntransforms.ToTensor()]\n<ECODE> Do you know why? And the version of my pytorch is 0.3.", "isAccepted": false, "likes": null, "poster": "will_soon" } ]
false
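For readers arriving at this thread later: recent torchvision releases ship transforms.RandomAffine, which covers random rotation, translation, scaling and shear directly, so the gist above or transforms.Lambda workarounds are no longer needed for the basic case. A sketch (check that your torchvision version provides RandomAffine):
<SCODE>from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
<ECODE>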
How to parallelize across GPUs when finetuning
null
[ { "contents": "In my Finetune models , I wanna parallel my model, in multi-gpus, my code is shown below: <SCODE>class FinetuneModel(nn.Module):\n def __init__(self, pretrained_model, ngpu = opt.gpuids):\n self.ngpu = ngpu\n super(FinetuneModel, self).__init__()\n self.features = pretrained_model\n self.classifier = nn.Sequential(\n nn.Dropout(),\n nn.Linear(512 * 4 * 4, 2048),\n .... \n)) \n \n def forward(self, x):\n gpuids = None\n if self.ngpu:\n gpuids = range(self.ngpu)\n features = self.features(x)#self.features has already implemented data parallel\n return nn.parallel.data_parallel(self.classifier, features, device_ids=gpuids)\n\n<ECODE> as far as I know , when doing <SCODE>features = self.features(x)#self.features.forward has already implemented data parallel\nscore = nn.parallel.data_parallel(self.classifier, features, device_ids = gpuids)\n<ECODE> <SCODE>score = nn.parallel.data_parallel([self.features,self.classifier], features, device_ids = gpuids)\n<ECODE> which only does one broadcast", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Cysu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "As AlexNet and VGG contain lots of parameters in the FC layers. Syncing params on these layers will have large overhead. It’s faster to compute the FC layers only on one GPU.", "isAccepted": false, "likes": 1, "poster": "Cysu" }, { "contents": "I see. great thanks.", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
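A compact way to express the recommendation above (replicate only the convolutional features across GPUs and keep the parameter-heavy classifier on a single device) is to wrap just the feature extractor in nn.DataParallel. A sketch with a hypothetical pretrained feature module and flattened size:
<SCODE>import torch.nn as nn

class FinetuneNet(nn.Module):
    def __init__(self, pretrained_features, num_classes=10):
        super(FinetuneNet, self).__init__()
        self.features = nn.DataParallel(pretrained_features)   # data-parallel convs
        self.classifier = nn.Linear(512 * 4 * 4, num_classes)  # single-GPU head

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.classifier(x)
<ECODE>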
Quickly loading raw binary files
null
[ { "contents": "I need to load raw int16 binary files on-the-fly during training. In Lua Torch I could just do the following: <SCODE>img = torch.ShortStorage(filename)\n<ECODE> It appears that that functionality does not exist in PyTorch. So instead I am loading into a numpy array as follows: <SCODE>img = np.fromfile(filename, 'int16')\n<ECODE> The problem is that np.fromfile is extremely slow. For my large data files (>100MB), np.fromfile loads four orders of magnitude slower than the old torch.ShortStorage method. How can I get the fast load speeds in PyTorch?", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Apparently numpy has (fast) memory mapping: <SCODE>img = np.memmap(filename, dtype='int16', mode='r').__array__()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "RuntimeError: can’t convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8.", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "Huh, we should add int16 to. I’ll open an issue.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
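Putting the two answers together for current PyTorch (which does accept int16 arrays): memory-map the raw file for a fast read, copy it into a writable array, and hand it to torch.from_numpy. A sketch:
<SCODE>import numpy as np
import torch

def load_int16(filename):
    # np.memmap avoids reading the whole file eagerly; np.array() makes a
    # writable copy that torch.from_numpy can wrap as a ShortTensor
    arr = np.array(np.memmap(filename, dtype=np.int16, mode='r'))
    return torch.from_numpy(arr)
<ECODE>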
View Output in PyTorch
null
[ { "contents": "Hi, Is there a way to view output images of the model in PyTorch other than converting to numpy and viewing with matplotlib? Help please.", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "How are you displaying the images from Keras? Maybe they’re in an incorrect range", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Hi, For Keras I am using simple Matplotlib functions. The range is same in both frameworks. The problem was with the dataset and the salt and pepper noise diminishes if I increase the number of epochs. It worked in Keras, but haven’t checked in PyTorch yet.", "isAccepted": false, "likes": null, "poster": "srv902" } ]
false
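Besides matplotlib, torchvision's utilities can tile and save a batch of output images directly; a sketch with a random tensor standing in for real model outputs in [0, 1]:
<SCODE>import torch
import matplotlib.pyplot as plt
from torchvision.utils import save_image, make_grid

output = torch.rand(16, 3, 32, 32)         # placeholder for model outputs
save_image(output, 'samples.png', nrow=4)  # writes a tiled grid to disk

grid = make_grid(output, nrow=4)           # (3, H, W) grid tensor
plt.imshow(grid.permute(1, 2, 0).numpy())
plt.show()
<ECODE>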
Failed on GPU with AWS EC2 g2 instance
null
[ { "contents": "I installed pytorch on AWS ec2 g2 instance ( with cuda 7.5)", "isAccepted": false, "likes": null, "poster": "Xixi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks! I will take a try", "isAccepted": false, "likes": null, "poster": "Xixi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mjdietzx" }, { "contents": "This article explains the issue. This is a known issue.", "isAccepted": false, "likes": null, "poster": "riyawilliams" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jyostna_nareshit" } ]
false
ByteTensor not working with F.conv2d?
null
[ { "contents": "print(model(Variable(obs, volatile=True))) I encounter when executing this line obs is a size(1, 4, 84, 84) ByteTensor corresponding to (input_size, channels, height, width) Here is my model definition: <SCODE>class DQN(nn.Module):\n def __init__(self, in_channels=4, num_actions=18):\n super(DQN, self).__init__()\n self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=8, stride=4)\n self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)\n self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1)\n self.fc4 = nn.Linear(7 * 7 * 64, 512)\n self.fc5 = nn.Linear(512, num_actions)\n\n def forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.relu(self.conv2(x))\n x = F.relu(self.conv3(x))\n x = F.relu(self.fc4(x.view(x.size(0), -1)))\n return self.fc5(x)\n<ECODE> Below are errors: <SCODE>/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n<ipython-input-25-3131e91fac28> in forward(self, x)\n 17 \n 18 def forward(self, x):\n---> 19 x = F.relu(self.conv1(x))\n 20 x = F.relu(self.conv2(x))\n 21 x = F.relu(self.conv3(x))\n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/conv.py in forward(self, input)\n 235 def forward(self, input):\n 236 return F.conv2d(input, self.weight, self.bias, self.stride,\n--> 237 self.padding, self.dilation, self.groups)\n 238 \n 239 \n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/functional.py in conv2d(input, weight, bias, stride, padding, dilation, groups)\n 35 f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,\n 36 _pair(0), groups)\n---> 37 return f(input, weight, bias) if bias is not None else f(input, weight)\n 38 \n 39 \n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/_functions/conv.py in forward(self, input, weight, bias)\n 32 if k == 3:\n 33 input, weight = _view4d(input, weight)\n---> 34 output = self._update_output(input, weight, bias)\n 35 if k == 3:\n 36 output, = _view3d(output)\n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/_functions/conv.py in _update_output(self, input, weight, bias)\n 89 \n 90 self._bufs = [[] for g in range(self.groups)]\n---> 91 return self._thnn('update_output', input, weight, bias)\n 92 \n 93 def _grad_input(self, input, weight, grad_output):\n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/_functions/conv.py in _thnn(self, fn_name, input, weight, *args)\n 148 impl = _thnn_convs[self.thnn_class_name(input)]\n 149 if self.groups == 1:\n--> 150 return impl[fn_name](self, self._bufs[0], input, weight, *args)\n 151 else:\n 152 res = []\n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/_functions/conv.py in call_update_output(self, bufs, input, weight, bias)\n 220 def make_update_output(fn):\n 221 def call_update_output(self, bufs, input, weight, bias):\n--> 222 backend = type2backend[type(input)]\n 223 bufs.extend([input.new(), input.new()])\n 224 output = input.new(*self._output_size(input, 
weight))\n\n/Users/mac/anaconda/envs/py35/lib/python3.5/site-packages/torch/_thnn/__init__.py in __getitem__(self, name)\n 13 \n 14 def __getitem__(self, name):\n---> 15 return self.backends[name].load()\n 16 \n 17 \n\nKeyError: <class 'torch.ByteTensor'>\n<ECODE>", "isAccepted": false, "likes": null, "poster": "transedward" }, { "contents": "I am not very sure but the model weights must be stored as the “FloatTensor” and pytorch raises an error when the types mismatch. Running same thing with x as “FloatTensor” runs on my machine", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Is there plans to change this? To support fine-tuning quantized models in 8 bit or similar?", "isAccepted": false, "likes": null, "poster": "luke" }, { "contents": "we cannot compute gradients for discrete operations. Hence we dont expect this to change (atleast we dont plan to change this).", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
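Since convolutions only run on floating-point inputs, the usual fix for uint8 observations (e.g. Atari frame stacks) is to cast and rescale before the forward pass; a sketch:
<SCODE>import torch

obs = torch.randint(0, 256, (1, 4, 84, 84), dtype=torch.uint8)  # stand-in frame stack
x = obs.float().div_(255.0)  # cast the copy to float and scale to [0, 1]
# q_values = model(x)
<ECODE>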
Convergence speed difference compared to that of Torch7
null
[ { "contents": "I implemented the same autoencoders with BCE criterion using pytorch and torch7. When I tried to train them using SGD with momentum, the convergence speeds were almost same within about 2000 iterations. But, the convergence of pytorch becomes quite slower than that of torch7. Has anyone compared the convergence performances ?", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I also tried rmsprop with batch normalization, the same things were observed. The below is a modified rmsprop employed in Torch7. <SCODE>square_avg:mul(alpha)\nsquare_avg:addcmul(1.0-alpha, grad, grad)\navg:sqrt(square_avg+eps)\nparams:mul(1-lr*weight_decay):addcdiv(-lr, grad, avg)\n<ECODE> And, the below is an equivalent rmsprop in pytorch <SCODE>square_avg.mul_(alpha).addcmul_(1 - alpha, grad, grad)\navg = square_avg.add(group['eps']).sqrt()\np.data.mul_(1-group['lr']*weight_decay).addcdiv_(-group['lr'], grad, avg)\n<ECODE> Are they equivalent ? Also, my pytorch code didn’t work at all before 0.19 version due to some run time errors. Is that related to this convergence difference ?", "isAccepted": false, "likes": 1, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for your reply. Is it possible that the convergence becomes slower even if some gradients becomes NaNs or infs ? My model is just autoencoder consisting of convolution, maxpooling, maxunpooling, relu, and sigmoid. Anyway, to compare the gradients, I found that the gradients can be saved in a flatted form by <SCODE>torch.cat([g.grad.view(-1) for g in model.parameters()],0)<ECODE>", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Loading VOC 2012 dataset with Dataloaders
vision
[ { "contents": "VOC 2012 dataset consists of images and their corresponding segmentation maps. I want to apply similar transforms to both the image and its segmentation map while loading. Any suggestions about how to proceed for this task?", "isAccepted": false, "likes": 1, "poster": "Gaurav_Pandey" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "and here are some relevant affine transforms to actually use – you’ll see the transforms must take in two arguments for the input and target images:", "isAccepted": false, "likes": 3, "poster": "ncullen93" }, { "contents": "Thanks guys. That was very helpful.", "isAccepted": false, "likes": null, "poster": "Gaurav_Pandey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "isalirezag" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "HT_Wang" } ]
false
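With newer torchvision the functional API makes the "same random transform for image and mask" pattern straightforward: draw the random parameters once and apply them to both. A sketch operating on PIL images (for label maps you would also want nearest-neighbour resampling):
<SCODE>import random
import torchvision.transforms.functional as TF

def joint_transform(image, mask):
    # draw parameters once, apply to both inputs
    if random.random() > 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    angle = random.uniform(-10, 10)
    image = TF.rotate(image, angle)
    mask = TF.rotate(mask, angle)
    return image, mask
<ECODE>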
Questions about Dataloader and Dataset
null
[ { "contents": "I run my dataloader like this: <SCODE>dataset= ImageFolder('/home/x/data/pre/train/',\n transform=transforms.Compose([transforms.Scale(opt.image_size),\n transforms.RandomCrop(opt.image_size) ,\n transforms.ToTensor(),\n transforms.Normalize([0.5]*3,[0.5]*3)\n ]))\n \ndataloader=t.utils.data.DataLoader(dataset,opt.batch_size,True,num_workers=opt.workers)\n<ECODE> <SCODE> try:\n img = self.loader(os.path.join(self.root, path))\n except Exception as e:\n index = index - 1 if index > 0 else index + 1 \n return self.__getitem__(index)\n<ECODE>", "isAccepted": false, "likes": 11, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "At first I try: <SCODE>def my_loader(path):\n try: \n return Image.open(path).convert('RGB')\n except Exception as e:\n print e\ndef my_collate(batch):\n \"Puts each data field into a tensor with outer dimension batch size\"\n batch = filter (lambda x:x is not None, batch)\n return default_collate(batch)\n\ndataset= ImageFolder('/home/x/train/',\n transform=transforms.Compose([transforms.ToTensor()]),\n loader = my_loader)\ndataloader=t.utils.data.DataLoader(dataset,4,True,collate_fn=my_collate)\n<ECODE> so then I try this: <SCODE>def my_collate(batch):\n batch = filter (lambda x:x is not None, batch)\n return default_collate(batch)\nclass MyImageFolder(ImageFolder):\n __init__ = ImageFolder.__init__\n def __getitem__(self, index):\n try: \n return super(MyImageFolder, self).__getitem__(index)\n except Exception as e:\n print e\n\ndataset= MyImageFolder('/home/x/train/', transform = transforms.Compose([transforms.ToTensor()..]) )\ndataloader=t.utils.data.DataLoader(dataset, 4, True, collate_fn=my_collate)\n<ECODE>", "isAccepted": false, "likes": 21, "poster": "chenyuntc" }, { "contents": "To solve this particular problem with corrupted images you can just add two lines before your code: <SCODE>from PIL import ImageFile\nImageFile.LOAD_TRUNCATED_IMAGES = True\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "PermanentPon" }, { "contents": "Hi, Chen. Does the batch size decrease when use this way to filter invalid data.", "isAccepted": false, "likes": 1, "poster": "Sun_ShiJie" }, { "contents": "Using nonechucks, your code would look something like this: <SCODE>dataset= ImageFolder('/home/x/data/pre/train/',\n transform=transforms.Compose([transforms.Scale(opt.image_size),\n transforms.RandomCrop(opt.image_size) ,\n transforms.ToTensor(),\n transforms.Normalize([0.5]*3,[0.5]*3)\n ]))\n\nimport nonechucks as nc\ndataset = nc.SafeDataset(dataset)\ndataloader = nc.SafeDataLoader(dataset,opt.batch_size,True,num_workers=opt.workers)\n\n# You can now use `dataloader` as though it was a regular DataLoader without\n# having to worry about the bad samples!\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "msamogh" }, { "contents": "How you get the internal function ‘default_collate’? 
It is within the ‘dataloder.py’.", "isAccepted": false, "likes": 1, "poster": "nimning" }, { "contents": "<SCODE>import torch\nimport torch.utils.data\nprint(torch.utils.data.dataloader.default_collate)\n<ECODE> Best regards Thomas", "isAccepted": false, "likes": 3, "poster": "tom" }, { "contents": "Note: for python 3, replace batch = filter(lambda x : x is not None, batch) with batch = list(filter(lambda x : x is not None, batch))", "isAccepted": false, "likes": 8, "poster": "zanaa" }, { "contents": "I am getting an error: What am I doing wrong.", "isAccepted": false, "likes": null, "poster": "Shivam_Chandhok" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "weiwei" }, { "contents": "Is there a way to get this to work when the batch size is only 1?", "isAccepted": false, "likes": 3, "poster": "Martin36" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Keyv_Krmn" }, { "contents": "from torch.utils.data.dataloader import default_collate", "isAccepted": false, "likes": 1, "poster": "xu555" }, { "contents": "Did you find any fix?", "isAccepted": false, "likes": null, "poster": "Syzygianinfern0" } ]
false
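To also cover the batch-size-1 question above: if every sample in a batch gets filtered out, the collate function can return None and the training loop can skip that iteration. A sketch of one possible convention:
<SCODE>from torch.utils.data.dataloader import default_collate

def safe_collate(batch):
    batch = [sample for sample in batch if sample is not None]
    if len(batch) == 0:
        return None  # nothing valid left (possible when batch_size == 1)
    return default_collate(batch)

# training loop:
# for batch in loader:
#     if batch is None:
#         continue
<ECODE>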
Run code on CPU
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "I can’t see that line in the link you posted.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Speed deteriorates when two replicates run with the same random seed
null
[ { "contents": "Hi, I met an interesting problem. When I ran two replicates of my model on two GPUs with CUDA_VISIBLE_DEVICES macro and initialized them with same random seed, the speed of the two training process deteriorates after several iterations. I guess it may be related the implementation of the ROI pooling layer. When initialized with the same seed, some GPU operations will lead conflicts. Is there anyone knowing why?", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "So if you run a single process it’s ok, but if you start two it will run slower? Maybe the GPUs are competing for the PCIe bandwidth", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you very much. Maybe it is due to the competition of PCle bandwidth. But at the beginning tens of thousands of iterations, there is no such problem. Do you know what I can do to avoid the problem?", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "The problem always comes with the low usage of GPU usage.", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Abhishek_Pal" } ]
false
Guidelines for assigning num_workers to DataLoader
null
[ { "contents": "I realize that to some extent this comes down to experimentation, but are there any general guidelines on how to choose the num_workers for a DataLoader object? Should num_workers be equal to the batch size? Or the number of CPU cores in my machine? Or to the number of GPUs in my data-parallelized model? Is there a tradeoff with using more workers due to overhead? Also, is there ever a reason to leave num_workers as 0 instead of setting it at least to 1?", "isAccepted": false, "likes": 90, "poster": "abweiss" }, { "contents": "Having more workers will increase the memory usage and that’s the most serious overhead. I’d just experiment and launch approximately as many as are needed to saturate the training. It depends on the batch size, but I wouldn’t set it to the same number - each worker loads a single batch and returns it only once it’s ready. num_workers equal 0 means that it’s the main process that will do the data loading when needed, num_workers equal 1 is the same as any n, but you’ll only have a single worker, so it might be slow", "isAccepted": false, "likes": 43, "poster": "apaszke" }, { "contents": "And I set num_workers = 0,the (RAM, but not GPU) memory remains stable with the increase of epoch. Can you give me some suggestions or instructions about the problem? Thank you so much.", "isAccepted": false, "likes": 4, "poster": "mjchen611" }, { "contents": "I would love to get your advice about the recommended way to deal with my data - I feed my CNN with large batches (256/512/1024…) of small patches of size 50x50. I intend to use the ImageFolder DataLoader for that, but I’m afraid that it would be very uneffective to load from disk a lot of small images in high frequency.", "isAccepted": false, "likes": null, "poster": "YossiB" }, { "contents": "", "isAccepted": false, "likes": 52, "poster": "harsv" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "taiky" }, { "contents": "if the data set is small like cifar10, why doesn’t the whole data set stay in the GPU the whole time? Why would # workers do anything?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "", "isAccepted": false, "likes": 10, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Eric_K" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "If your dataset is really small and you don’t need batching, you can just push the data onto the GPU and simply apply your training procedure.", "isAccepted": false, "likes": 4, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "Correct me if you have a different opinion.", "isAccepted": false, "likes": null, "poster": "singleroc" }, { "contents": "", "isAccepted": false, "likes": 17, "poster": "ptrblck" }, { "contents": "Cheers", "isAccepted": false, "likes": 2, "poster": "mnazaal" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "isalirezag" }, { "contents": "Is there any one has met this situation that setting num_workers = 4 could make the train stop? Recently, I tested a RFBnet project, and find when I set num_workers= 4 will stop training at epoch = 2. However, num_workers=0 will be fine.", "isAccepted": false, "likes": 4, "poster": "niez01" }, { "contents": "It seems that during the training process the amount of free RAM continues to reduce. 
I am using a custom dataset that generates images from strokes (Quick Draw Doodles data), and probably the problem is that the dataset doesn’t work well in multitasking setting. Could somebody give an advice on how to implement a multithread ready dataset?", "isAccepted": false, "likes": null, "poster": "devforfu" }, { "contents": "What’s num_GPU? How to get it on google colab?", "isAccepted": false, "likes": null, "poster": "kuzand" }, { "contents": "Hi, I encountered the similar problem for DataLoader. Sorry to ask the similar question, after reading all your discussion, I am still confused about the relationship between the number_GPU, num_CPU and the num_works. Here is my situation: I have a convolutional Autoencoder (so it’s not a very big network), but I have a very big dataset: For one epoch, I have ` Always, it is very slow to load the data, the training time for each batch is very fast. To load each image, I found that with num_workers=8, it will take longer than num_workers=0. Every time, I will take a long time to load 74 batches and run the training, then reload another 74 batches (the case of num_workers=8). Do you have any suggestions to set the num_workers if the dataset is huge??? Thanks in advance", "isAccepted": false, "likes": 1, "poster": "Junhao_Wen" } ]
false
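Since the consensus above is "measure it on your own machine", a small timing sketch makes the comparison concrete; my_dataset is a placeholder for whatever Dataset you actually use:
<SCODE>import time
from torch.utils.data import DataLoader

def time_loader(dataset, num_workers, batch_size=64, max_batches=50):
    loader = DataLoader(dataset, batch_size=batch_size,
                        shuffle=True, num_workers=num_workers)
    start = time.time()
    for i, _ in enumerate(loader):
        if i == max_batches:
            break
    return time.time() - start

# for w in (0, 2, 4, 8):
#     print(w, time_loader(my_dataset, w))
<ECODE>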
Debugging autograd functions in pytorch
vision
[ { "contents": "Hi! I wrote below function that I intend to use as the loss function. <SCODE>class MyCriterion(torch.autograd.Function):\n def __init__(self):\n self.alpha = .0005\n \n def forward(self, input, target, epoch, isLabeled):\n loss = F.cross_entropy(input, target)\n\n self.save_for_backward(input, target, epoch, isLabeled, loss)\n print(self.saved_tensors) #returns ()\n if (isLabeled.data > 0).all():\n\n return Variable(loss.data * self.alpha * epoch.data)\n return loss\n\n def backward(self, grad_output):\n \n input, target, epoch, isLabeled, loss, = self.saved_tensors\n grad_input = loss.backward() \n return grad_input\nmy_criterion = MyCriterion()\nx = Variable(torch.randn(11, 10).type(torch.FloatTensor))\ny = Variable(torch.range(1,6, .5).type(torch.LongTensor))\n\na = torch.from_numpy(np.array([0]))\nb = torch.from_numpy(np.array([1]))\nc = torch.from_numpy(np.array([10.0]))\n\nprint(x)\n# print(torch.from_numpy(np.array([10])))\nfirst_loss = my_criterion.forward(x, y, Variable(c.float()), Variable(a))\nprint(my_criterion.backward(first_loss))\n\nsecond_loss = my_criterion.forward(x, y, Variable(c.float()), Variable(b))\nprint(my_criterion.backward(second_loss))\n<ECODE> When I do this, I have below error - <SCODE>---> 18 input, target, epoch, isLabeled, loss, = self.saved_tensors\n 19 grad_input = loss.backward()\n 20 return grad_input\n\nValueError: not enough values to unpack (expected 5, got 0)\n<ECODE> Is there something I am missing? How can I access saved tensors? Is there any other documentation than autograd that has more examples of autograd functions? Thanks a lot!", "isAccepted": false, "likes": null, "poster": "pratheeksha" }, { "contents": "First thing is that you can’t return a Variable from forward - it expects tensors and will automatically wrap them in Variables and connect them up to the existing graph. You shouldn’t unpack and re-pack Variables in the middle of computation, because that will break continuity of history. You need to do something like that: <SCODE>def my_criterion(input, target, epoch, isLabeled):\n if (isLabeled.data > 0).all():\n return loss * alpha * epoch\n return MyCriterion()(input, target, epoch, isLabeled)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "jekbradbury" }, { "contents": "This is very useful, I will try writing it as a module, thanks!", "isAccepted": false, "likes": null, "poster": "pratheeksha" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "pratheeksha" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
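A sketch of the suggestion above, i.e. writing the criterion as an nn.Module built from existing autograd operations so that no hand-written backward is required (the alpha value and the is_labeled flag mirror the original post):
<SCODE>import torch.nn as nn
import torch.nn.functional as F

class MyCriterion(nn.Module):
    def __init__(self, alpha=5e-4):
        super(MyCriterion, self).__init__()
        self.alpha = alpha

    def forward(self, input, target, epoch, is_labeled):
        loss = F.cross_entropy(input, target)
        if is_labeled:
            return loss * self.alpha * epoch
        return loss
<ECODE>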
Introducing the TorchSample package for comprehensive data transforms and sampling
vision
[ { "contents": "Hi all, It also has great support for situations where both the input and target tensors are images (e.g. segmentation datasets). It also supports arbitrary data types. It supersedes the currently available sampling code in the main torchvision/torch codebase. It’s my hope that this will kick-start the community-driven development of the sampling code in the main torch and torchvision packages, and serve as reliable and flexible sampling code in the meantime. NOTE: This package is in no way endorsed by, affiliated with, or otherwise associated with the official Pytorch ecosystem or team.", "isAccepted": false, "likes": 10, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ncullen93" }, { "contents": "Could you please give me some details on how to install this package? I have conda installed and have created an virtualenv for it. Thanks!", "isAccepted": false, "likes": null, "poster": "Shi_Shu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Rohit_Keshari" }, { "contents": "If you are using pip, just clone, go to the directory with setup.py and run “pip install -e .” - this will read the setup and install the package (this way you can also easily uninstall it if needed)", "isAccepted": false, "likes": null, "poster": "flock" } ]
false
How to anneal LR of optimizer?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "Cysu" }, { "contents": "Or you could just recreate the optimizer", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "If you recreate the optimizer, would it lose its weights, it seems to be? What do you mean by recreate?", "isAccepted": false, "likes": null, "poster": "ritchieng" } ]
false
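The param_groups approach keeps the optimizer state (e.g. momentum buffers) intact while only changing the step size; a sketch (newer releases also provide torch.optim.lr_scheduler for the same purpose):
<SCODE>def set_lr(optimizer, lr):
    # only the step size changes; momentum buffers etc. are preserved
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

# e.g. decay by 10x every 30 epochs
# set_lr(optimizer, base_lr * (0.1 ** (epoch // 30)))
<ECODE>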
Potential memory leak
null
[ { "contents": "We have been recently tracking some memory leak on our machine. I created a very simple script which just does this: <SCODE>a = []\nfor i in range(100):\n t = torch.ones(10,1000,1000)\n if cuda:\n t = t.cuda()\n a.append(t)\n<ECODE> and I use a flag to control whether .cuda() is used or not. It seems that the CUDA part is causing some kind of memory leak. In the following data, the used started at 4.8G but they were dead memory that did not show up as used by any process in TOP. After running the cuda version of the loop a few times, 300M more memory were dead. I wonder if anyone has any idea as to what is happening? I am using RedHat 4.4.2, Python 2.7.13 with the newest PyTorch, CUDA 7.5.17. <SCODE>test$ free -h\n total used free shared buffers cached\nMem: 23G 6.0G 17G 9.1M 360M 919M\n-/+ buffers/cache: 4.8G 18G\nSwap: 82G 0B 82G\ntest$ python test.py cpu\ntest$ python test.py cpu\ntest$ python test.py cpu\ntest$ python test.py cpu\ntest$ python test.py cpu\ntest$ free -h\n total used free shared buffers cached\nMem: 23G 6.0G 17G 9.1M 360M 919M\n-/+ buffers/cache: 4.8G 18G\nSwap: 82G 0B 82G\ntest$ python test.py cuda\ntest$ python test.py cuda\ntest$ python test.py cuda\ntest$ python test.py cuda\ntest$ python test.py cuda\ntest$ free -h\n total used free shared buffers cached\nMem: 23G 6.4G 17G 9.1M 360M 919M\n-/+ buffers/cache: 5.2G 18G\nSwap: 82G 0B 82G\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Lifeng_Jin" }, { "contents": "Please understand that the kernel does not always have to free memory unless it thinks it needs to, it might be caching some pages for various reasons. This is not a memory leak, unless you literally cannot reclaim memory (i.e. if you try to allocate 18G + 0.4GB, and allocations fail).", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I do understand that. This used memory is never claimed by the kernel anytime, even if it goes into swap mode because it runs out of memory when a memory intensive program is run.", "isAccepted": false, "likes": null, "poster": "Lifeng_Jin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "OK. Thank you for your help. I will upgrade CUDA and test this again.", "isAccepted": false, "likes": null, "poster": "Lifeng_Jin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trdavidson" }, { "contents": "It did. We did a very trivial test creating a lot of tensors either on the GPU or on the CPU and found that it was definitely the GPU. The update solved it.", "isAccepted": false, "likes": null, "poster": "Lifeng_Jin" } ]
false
Computing the gradients for batch renormalization
null
[ { "contents": "I tried to implement Batch Renormalization(arXiv 1702.03275) in PyTorch. The program stop when compute the gradients. Trackback information is attached below: <SCODE>Traceback (most recent call last):\n File \"cifar.py\", line 187, in <module>\n loss.backward()\n File \"/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py\", line 158, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\nRuntimeError: could not compute gradients for some functions (View, ConvNd)\n<ECODE> My implementation of batch renormalization is shown below: <SCODE>class BatchRenorm2d(nn.Module):\n def __init__(self, channels, eps = 1e-5, rmax=3, dmax=5, lr=0.001):\n super(BatchRenorm2d, self).__init__()\n self.is_train = True\n self.is_unlock = False\n self.eps = eps \n self.channels = channels\n self.rmax = rmax\n self.dmax = dmax\n self.lr = lr\n self.sigma = torch.from_numpy(np.zeros((1, channels, 1, 1), dtype=np.float32)).cuda()\n self.mean = torch.from_numpy(np.zeros((1,channels), dtype=np.float32)).cuda()\n\n def forward(self, x): \n if self.is_train:\n batch_size = x.size()[0]\n feature_shape_size = x.size()[2] * x.size()[3]\n sig_sqr_sum = Variable(torch.zeros(batch_size, self.channels)).cuda()\n mu_b = x.mean(0).mean(2).mean(3).view(1, self.channels)\n xview = x.view(batch_size, self.channels, feature_shape_size)\n\n for j in range(self.channels):\n mu_b_0_j = mu_b[0, j].repeat(feature_shape_size)\n for i in range(batch_size):\n sig_sqr_sum[i,j] = ((xview[i,j] - mu_b_0_j) ** 2).mean()\n sigma_b = sig_sqr_sum.mean(0)\n sigma_b += self.eps\n sigma_b = torch.sqrt(sigma_b)\n if self.is_unlock:\n r = sigma_b.data / self.sigma\n r.clamp_(1.0/rmax, rmax)\n d = (mu_b.data - self.mean) / (self.sigma + torch.sqrt(eps) )\n d.clamp_(-self.dmax, self.dmax)\n else:\n r = torch.zeros(1, self.channels) + 1.0 \n d = torch.zeros(1, self.channels)\n x_hat = Variable(torch.zeros(x.size()).cuda())\n for j in range(self.channels):\n mu_b_0_j = mu_b[0, j].repeat(feature_shape_size).view(x.size()[2], x.size()[3])\n sigma_b_0_j = sigma_b[0, j].repeat(feature_shape_size).view(x.size()[2], x.size()[3])\n for i in range(batch_size):\n x_hat_i_j = x[i,j,:,:].clone()\n x_hat_i_j -= mu_b_0_j\n x_hat_i_j /= sigma_b_0_j\n x_hat_i_j *= r[0, j]\n x_hat_i_j += d[0, j]\n x_hat[i,j,:,:] = x_hat_i_j\n self.mean += self.lr * (mu_b.data - self.mean)\n self.sigma += self.lr * (sigma_b.data - self.sigma)\n else:\n mu_b = Variable(self.mean)\n sigma_b = Variable(self.sigma)\n for j in range(self.channels):\n mu_b_0_j = mu_b[0, j].repeat(feature_shape_size).view(x.size()[2], x.size()[3])\n sigma_b_0_j = sigma_b[0, j].repeat(feature_shape_size).view(x.size()[2], x.size()[3])\n for i in range(batch_size):\n x_hat_i_j = x[i,j,:,:].clone()\n x_hat_i_j -= mu_b_0_j\n x_hat_i_j /= sigma_b_0_j\n x_hat_i_j *= r[0, j]\n x_hat_i_j += d[0, j]\n x_hat[i,j,:,:] = x_hat_i_j\n return x_hat\n<ECODE> What should I do to solve this problem? Thanks.", "isAccepted": false, "likes": null, "poster": "linmx0130" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Hi, what is the clamp_ function doing? Is it differentiable? Also, can you show me the whole implementation of BatchReNorm layer?", "isAccepted": false, "likes": null, "poster": "Zafarullah_Mahmood" }, { "contents": "Looking at the BatchRenorm paper, the author states that gradients are not propagated through the clip functions. He uses a stop_gradient function to denote this in Algorithm 1. 
I’m not sure how to do this in PyTorch though. I would think that it would be pretty easy to do.", "isAccepted": false, "likes": null, "poster": "Erotemic" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" } ]
false
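On the stop_gradient question: detach() is the PyTorch equivalent, so the r and d corrections can be computed from detached batch statistics and autograd will treat them as constants. A sketch with placeholder names (running_mean and running_std are assumed to be plain buffers):
<SCODE>def renorm_corrections(batch_mean, batch_std, running_mean, running_std,
                       rmax=3.0, dmax=5.0):
    # detach() plays the role of stop_gradient: no gradients flow through r or d
    r = (batch_std.detach() / running_std).clamp(1.0 / rmax, rmax)
    d = ((batch_mean.detach() - running_mean) / running_std).clamp(-dmax, dmax)
    return r, d
<ECODE>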
Normalization of CIFAR and MNIST datasets
vision
[ { "contents": "<SCODE># The output of torchvision datasets are PIL Image images of range [0, 1].\n# We transform them to Tensors of normalized range [-1, 1]\ntransform=transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\ntrainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=4, \n shuffle=True, num_workers=2)\n<ECODE> But the MNIST example (pytorch/examples/mnist) uses values very different from 0.5 to normalize the data: <SCODE>testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=4, \n shuffle=False, num_workers=2)\n\ndatasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\nbatch_size=args.batch_size, shuffle=True, **kwargs)\n<ECODE> Why are we using two different approaches to normalize MINIST and CIFAR? What is the correct approach to use? I guess we are NOT normalizing the dataset in the first case, but only each image. If this is the case, I think we should clarify this in the PyTorch Tutorial. Thanks, David", "isAccepted": false, "likes": null, "poster": "dlmacedo" }, { "contents": "MNIST is not natural images, it’s data distribution is quite different.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "BarryBA" } ]
false
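If you prefer dataset-specific statistics (as in the MNIST example) over the 0.5 placeholders, the per-channel mean and std can be computed in one pass over the training set; a sketch for CIFAR-10:
<SCODE>import torch
from torchvision import datasets, transforms

dataset = datasets.CIFAR10(root='./data', train=True, download=True,
                           transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(dataset, batch_size=1024, num_workers=2)

n, mean, sq_mean = 0, 0.0, 0.0
for images, _ in loader:
    b = images.size(0)
    flat = images.view(b, images.size(1), -1)
    mean = mean + flat.mean(2).sum(0)
    sq_mean = sq_mean + flat.pow(2).mean(2).sum(0)
    n += b
mean = mean / n
std = (sq_mean / n - mean.pow(2)).sqrt()
print(mean, std)  # per-channel values for transforms.Normalize
<ECODE>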
Elegant implementation of Spatial Pyramid Pooling layer?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "BarclayII" }, { "contents": "<SCODE>def spatial_pyramid_pooling(input, output_size):\n assert input.dim() == 4 and input.size(2) == input.size(3)\n F.max_pool2d(input, kernel_size=input.size(2) // output_size)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Hi like in torch for using spatialpyramidpooling i can directly call (inn.SpatialPyramidPooling({8,8},{4,4},{2,2},{1,1})) can i do it similarily in some way in pytorch ? What is the easiest way to do so. Thanks", "isAccepted": false, "likes": null, "poster": "Rao_Shivansh" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "addisonklinke" }, { "contents": "I am implementing spatial pooling in my network but I still confuse about how to implement this? I thin there is some implementation problem in my class because my class only contain one Conv layer <SCODE>class DWConv(nn.Module):\n\n def spatial_pyramid_pool(self, previous_conv, previous_conv_size, out_pool_size=[4, 2, 1]):\n '''\n previous_conv: a tensor vector of previous convolution layer\n previous_conv_size: an int vector [height, width] of the matrix features size of previous convolution layer\n out_pool_size: a int vector of expected output size of max pooling layer\n\n returns: a tensor vector with shape [1 x n] is the concentration of multi-level pooling\n '''\n num_sample = previous_conv.shape[0]\n for i in range(len(out_pool_size)):\n h_wid = int(math.ceil(previous_conv_size[0] / out_pool_size[i]))\n w_wid = int(math.ceil(previous_conv_size[1] / out_pool_size[i]))\n h_pad = (h_wid * out_pool_size[i] - previous_conv_size[0] + 1) / 2\n w_pad = (w_wid * out_pool_size[i] - previous_conv_size[1] + 1) / 2\n maxpool = torch.nn.MaxPool2d((h_wid, w_wid), stride=(h_wid, w_wid), padding=(int(h_pad), int(w_pad)))\n x = maxpool(previous_conv)\n if (i == 0):\n spp = x.view(num_sample, -1)\n else:\n spp = torch.cat((spp, x.view(num_sample, -1)), 1)\n return spp\n \n def __init__(self, dim=768):\n super(DWConv, self).__init__()\n self.dwconv = nn.Conv2d(dim, dim, 3, 1, 1, bias=True, groups=dim)\n\n def forward(self, x, H, W):\n B, N, C = x.shape\n x = x.transpose(1, 2).view(B, C, H, W)\n x = self.dwconv(x)\n x = x.flatten(2).transpose(1, 2)\n x = self.spatial_pyramid_pool(previous_conv_size=x.shape)\n return x\n<ECODE> Traceback <SCODE> x = self.spatial_pyramid_pool(previous_conv_size=x.shape)\nTypeError: spatial_pyramid_pool() missing 1 required positional argument: 'previous_conv'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Khawar_Islam" } ]
false
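The one-line version above handles only a single pyramid level and is also missing its return statement; a fuller sketch in the spirit of the inn.SpatialPyramidPooling call quoted above, using adaptive pooling so no kernel-size arithmetic is needed:
<SCODE>import torch
import torch.nn.functional as F

def spatial_pyramid_pool(x, output_sizes=(4, 2, 1)):
    # x: (N, C, H, W); each level is pooled to size x size and flattened.
    n = x.size(0)
    levels = []
    for size in output_sizes:
        pooled = F.adaptive_max_pool2d(x, output_size=size)   # (N, C, size, size)
        levels.append(pooled.view(n, -1))
    return torch.cat(levels, dim=1)   # (N, C * sum(s * s for s in output_sizes))
<ECODE>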
Are Tensors column major or row major?
null
[ { "contents": "I’m porting some code from Caffe which column major and it would be helpful to know if torch is similarly column major or is row major.", "isAccepted": false, "likes": 1, "poster": "mattmacy" }, { "contents": "Torch Tensors are row-major.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "This “deformation_layer” isn’t part of upstream Caffe, so maybe it’s only their code to make the use of BLAS cleaner. top[0]->Reshape( out_shape);", "isAccepted": false, "likes": null, "poster": "mattmacy" } ]
false
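A quick way to see the row-major layout directly when porting column-major Caffe/BLAS code:
<SCODE>import torch

x = torch.arange(6).view(2, 3)
print(x.stride())              # (3, 1): stepping one row skips 3 elements, one column skips 1
print(x.t().stride())          # (1, 3): a transpose only changes strides, no data is copied
print(x.t().is_contiguous())   # False until .contiguous() is called
<ECODE>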
How to add the submodule’s parameters into model.parameters()?
null
[ { "contents": "<SCODE> def __init__(self, p_keep_conv, p_keep_hidden, n_leaf, n_label, n_tree, n_depth):\n super(DeepNeuralDecisionForest, self).__init__()\n\n self.conv = nn.Sequential()\n self.conv.add_module('conv1', nn.Conv2d(1, 32, kernel_size=3, padding=1))\n self.conv.add_module('relu1', nn.ReLU())\n self.conv.add_module('pool1', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop1', nn.Dropout(1 - p_keep_conv))\n self.conv.add_module('conv2', nn.Conv2d(32, 64, kernel_size=3, padding=1))\n self.conv.add_module('relu2', nn.ReLU())\n self.conv.add_module('pool2', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop2', nn.Dropout(1 - p_keep_conv))\n self.conv.add_module('conv3', nn.Conv2d(64, 128, kernel_size=3, padding=1))\n self.conv.add_module('relu3', nn.ReLU())\n self.conv.add_module('pool3', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop3', nn.Dropout(1 - p_keep_conv))\n\n self._nleaf = n_leaf\n self._nlabel = n_label\n self._ntree = n_tree\n self._ndepth = n_depth\n self._batchsize = 100\n\n self.treelayers = []\n self.pi_e = []\n for i in xrange(self._ntree):\n treelayer = nn.Sequential()\n treelayer.add_module('sub_linear1', nn.Linear(1152, 625))\n treelayer.add_module('sub_relu', nn.ReLU())\n treelayer.add_module('sub_drop1', nn.Dropout(1 - p_keep_hidden))\n treelayer.add_module('sub_linear2', nn.Linear(625, self._nleaf))\n treelayer.add_module('sub_sigmoid', nn.Sigmoid())\n pi = Parameter(self.init_pi())\n self.treelayers.append(treelayer)\n self.pi_e.append(nn.Softmax()(pi))\n\ndef init_pi(self):\n return torch.ones(self._nleaf, self._nlabel)/float(self._nlabel)\n\ndef init_weights(self, shape):\n return torch.randn(shape) * 0.01\n\ndef init_prob_weights(self, shape, minval=-5, maxval=5):\n return torch.Tensor(shape[0], shape[1]).uniform_(minval, maxval)\n\ndef compute_mu(self, flat_decision_p_e):\n n_batch = self._batchsize\n batch_0_indices = torch.range(0, n_batch * self._nleaf - 1, self._nleaf).unsqueeze(1).repeat(1, self._nleaf).long()\n\n in_repeat = self._nleaf / 2\n out_repeat = n_batch\n\n batch_complement_indices = torch.LongTensor(\n np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf))\n\n # First define the routing probabilistics d for root nodes\n mu_e = []\n indices_var = Variable((batch_0_indices + batch_complement_indices).view(-1))\n indices_var = indices_var.cuda()\n #indices_var = indices_var.typeas(flat_decision_p_e[0])\n # iterate over each tree\n for i, flat_decision_p in enumerate(flat_decision_p_e):\n mu = torch.gather(flat_decision_p, 0, indices_var).view(n_batch, self._nleaf)\n mu_e.append(mu)\n\n # from the scond layer to the last layer, we make the decison nodes\n for d in xrange(1, self._ndepth + 1):\n indices = torch.range(2 ** d, 2 ** (d + 1) - 1) - 1\n tile_indices = indices.unsqueeze(1).repeat(1, 2 ** (self._ndepth - d + 1)).view(1, -1)\n batch_indices = batch_0_indices + tile_indices.repeat(n_batch, 1).long()\n\n in_repeat = in_repeat / 2\n out_repeat = out_repeat * 2\n\n # Again define the indices that picks d and 1-d for the nodes\n batch_complement_indices = torch.LongTensor(\n np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf))\n\n mu_e_update = []\n indices_var = Variable((batch_indices + batch_complement_indices).view(-1))\n indices_var = indices_var.cuda()\n for mu, flat_decision_p in zip(mu_e, flat_decision_p_e):\n mu = torch.mul(mu, torch.gather(flat_decision_p, 0, indices_var).view(\n n_batch, 
self._nleaf))\n mu_e_update.append(mu)\n mu_e = mu_e_update\n return mu_e\n\ndef compute_py_x(self, mu_e):\n py_x_e = []\n n_batch = self._batchsize\n\n for mu, leaf_p in zip(mu_e, self.pi_e):\n py_x_tree = mu.unsqueeze(2).repeat(1, 1, self._nlabel).mul(leaf_p.unsqueeze(0).repeat(n_batch, 1, 1)).mean(1)\n py_x_e.append(py_x_tree)\n\n py_x_e = torch.cat(py_x_e, 1)\n py_x = py_x_e.mean(1).squeeze()\n return py_x\n\ndef forward(self, x):\n feat = self.conv.forward(x)\n feat = feat.view(-1, 1152)\n self._batchsize = x.size(0)\n #py_x = self.fc.forward(feat)\n flat_decision_p_e = []\n for i in xrange(len(self.treelayers)):\n decision_p = self.treelayers[i].forward(feat)\n decision_p_comp = 1 - decision_p\n decision_p_pack = torch.cat((decision_p, decision_p_comp), 1)\n flat_decision_p = decision_p_pack.view(-1)\n flat_decision_p_e.append(flat_decision_p)\n \n mu_e = self.compute_mu(flat_decision_p_e)\n \n py_x = self.compute_py_x(mu_e)`\n return py_x<ECODE>", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Thanks for your reminding.", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "I met a similar but not same problem. I defined a new module as follow: <SCODE>class RecursiveNN(nn.Module):\n def __init__(self, word_embedding, hidden_dim):\n super(RecursiveNN, self).__init__()\n self.word_dim = word_embedding.embeddings.size(1)\n self.hidden_dim = hidden_dim\n self.embedding = nn.Embedding(word_embedding.embeddings.size(0),\n self.word_dim)\n self.embedding.weight = nn.Parameter(word_embedding.embeddings)\n self.word2hidden = nn.Linear(self.word_dim, self.hidden_dim)\n self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim)\n\n def forward(self, node):\n if not node.val is None:\n node.calculate_result = self.word2hidden(self.embedding(Variable(torch.LongTensor([node.word_id]))))\n return node.calculate_result\n else:\n assert len(node.children) == 2\n node.calculate_result = self.hidden2hidden(torch.cat((node.children[0].calculate_result,\n node.children[1].calculate_result), 1))\n return node.calculate_result\n<ECODE> And, this module is used by another module whose definition is shown as below: <SCODE>class RootAlign(nn.Module):\n def __init__(self, word_embedding, config):\n super(RootAlign, self).__init__()\n self.rnn = RecursiveNN(word_embedding, config['hidden_dim'])\n self.linear = nn.Linear(config['hidden_dim'] * 2, config['relation_num'])\n\n def forward(self, p_tree, h_tree):\n p_tree.postorder_traverse(self.rnn)\n h_tree.postorder_traverse(self.rnn)\n\n out = F.softmax(self.linear(F.sigmoid(torch.cat((p_tree.calculate_result, h_tree.calculate_result), 1))))\n return out\n<ECODE> I would be very grateful if you could help me", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>for _data in snli.train:\n p_tree = _data['p_tree']\n h_tree = _data['h_tree']\n target = Variable(torch.LongTensor([_data['label']]))\n optimizer.zero_grad()\n output = root_align(p_tree, h_tree)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.step()\n train_loss += loss\n<ECODE> Thank you again for your help.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Anyway, thanks for your help.", "isAccepted": false, "likes": null, 
"poster": "ShawnGuo" } ]
false
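The underlying problem in this thread is that modules and Parameters stored in plain Python lists are invisible to model.parameters(); nn.ModuleList and nn.ParameterList register them properly. A minimal sketch of the difference:
<SCODE>import torch
import torch.nn as nn

class BadModel(nn.Module):
    def __init__(self):
        super(BadModel, self).__init__()
        self.layers = [nn.Linear(10, 10) for _ in range(3)]   # plain list: not registered

class GoodModel(nn.Module):
    def __init__(self):
        super(GoodModel, self).__init__()
        self.layers = nn.ModuleList([nn.Linear(10, 10) for _ in range(3)])
        self.pi = nn.ParameterList([nn.Parameter(torch.ones(4, 4)) for _ in range(3)])

print(len(list(BadModel().parameters())))    # 0 -- the optimizer would see nothing
print(len(list(GoodModel().parameters())))   # 9 -- three weight/bias pairs plus three pi tensors
<ECODE>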
How to create patches / windows of image dataset?
null
[ { "contents": "I would now like to know where would be the best place to implement this type of preprocessing?", "isAccepted": false, "likes": 1, "poster": "bodokaiser" }, { "contents": "Yeah that’s a tricky one. There’s no way to tell the DataLoader to dynamically skip some inputs and replace them with next ones, because it would violate the guarantee that it will always return the data in the same order as the given by the sampler.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "As you said I think I will need to add another stage which converts the data from HDF5 to patches and then feed that data into pytorch. Thanks in advance!", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "trypag" }, { "contents": "You don’t need to save the patches ahead of time, it should be enough to store the file names and locations to retrieve them later. But if you don’t have a lot of data it might be simpler to preprocess the dataset.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "sure, I emailed you!!", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Tepp" }, { "contents": "See you", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "Anyways, a very simple approach to sampling 2D patches from 3D MRI is to extract a random slice (see my package torchsample for that transform) then just take a random crop. You can use the “MultiSampler” sampler in torchsample to go through each image more than once in a single epoch. No need to extract EVERY possible patch haha just do it randomly at train time. It’s still a fun exercise to reimplement them tho… which paper specifically are you interested in? I think I have implementations of most of them and can share or help you develop the right sampler.", "isAccepted": false, "likes": null, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "Yeah you kinda have more data but it’s misleading bc you don’t have more region-specific subject variability. Data is definitely an issue… I train on a lot of subjects (a few thousand) from multiple large datasets… Also, the KEY to MRI segmentation is data augmentation and applying random affine transforms specifically since images differ that way across scanners - hence why I developed torchsample. With good augmentation we have done quite well with ~200 subjects.", "isAccepted": false, "likes": 2, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "rogetrullo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Meghal_Dani" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "akbarnejad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Najeh_Nafti" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fepegar" } ]
false
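For the patch question above, one option that avoids writing every patch to disk is Tensor.unfold, which builds a strided view over all sliding windows; a sketch assuming a single (C, H, W) image tensor:
<SCODE>import torch

img = torch.randn(3, 64, 64)          # (C, H, W)
patch, stride = 16, 8
windows = img.unfold(1, patch, stride).unfold(2, patch, stride)   # (C, nH, nW, patch, patch)
patches = windows.permute(1, 2, 0, 3, 4).contiguous().view(-1, 3, patch, patch)
print(patches.size())                  # (nH * nW, 3, 16, 16)
<ECODE>
The same call can live inside a Dataset's __getitem__ so patches are produced on the fly rather than precomputed.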
Encounter the RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
null
[ { "contents": "I am going to define my layer. How ever, I encounter the RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation while running backward(). I found that if i commented the second for loop ‘for j in range(self.number_person):’ or make ‘u_i[:,j,:] = (1 - self.lumbda)*u_i[:,j,:]’, the backward() was fine. I wonder where is inplace operation and why it does not work? ‘p_rnn_feature’ and ‘u_sum’ has been compute before. BTW, this code is run on pytorch 0.19.7ad948f <SCODE>def myNet():\n def __init__():\n #do som init\n def forward():\n #compute p_rnn_feature,u_sum\n p_rnn_feature = Variable(torch.ones(p_rnn_feature.size())).cuda()\n u_sum = Variable(torch.ones(u_sum.size())).cuda()\n for i in range(self.embedding_length):\n u_s = u_s.clone()\n u_i = u_i.clone()\n\n for j in range(self.number_person):\n \n alpha_i = Variable(torch.zeros(batch_size, self.number_person, 1)).cuda()\n comp_mask = Variable(j*torch.ones(valid_person_num.size())).cuda()\n comp_mask = torch.lt(comp_mask, valid_person_num) # (batch_size, 1)\n comp_mask_ui = comp_mask.repeat(1, self.hyper_d)\n tmp_x = torch.cat((p_rnn_feature[:,j,:], u_sum[:,j,:], u_s), 1) # size: (batch_size, 2*rnn_cell_size+hyper_d)\n\n\n u_i[:,j,:] = (1 - self.lumbda)*u_i[:,j,:] + self.lumbda*F.relu(self.u_i_linear(tmp_x))\n u_i[:,j,:] = u_i[:,j,:]*comp_mask_ui.float()\n\n alpha_i[:,j,:] = F.tanh(self.alpha_i_linear(torch.cat((u_i[:,j,:], u_s),1)))\n alpha_i[:,j,:] = alpha_i[:,j,:]*comp_mask.float()\n\n alpha_sum = torch.sum(alpha_i,1)\n alpha_sum = alpha_sum.repeat(1,self.number_person,1)\n\n gate = alpha_i / Variable(torch.max(alpha_sum.data, torch.ones(alpha_sum.size()).cuda())).cuda()\n gate = gate.repeat(1,1,self.hyper_d)\n\n gated_ui_sum = gate*u_i\n gated_ui_sum = torch.sum(gated_ui_sum,1)\n gated_ui_sum = torch.squeeze(gated_ui_sum, dim=1)\n \n tmp_s = torch.cat((u_s, p_feature_sum, gated_ui_sum), 1) # size: (batch_size, hyper_d+rnn_cell_size+hyper_d)\n u_s = (1 - self.lumbda) * u_s + self.lumbda * F.relu(self.u_s_linear(tmp_s))\n\n pred_tmp = torch.cat((torch.squeeze(torch.sum(u_i, 1), dim=1), u_s), 1)\n pred = self.pred_dropout(self.pred_linear(pred_tmp))\n pred = self.pred_linear_second(pred)<ECODE>", "isAccepted": true, "likes": 12, "poster": "YongyiTang92" }, { "contents": "<SCODE>u_i[:,j,:] = (1 - self.lumbda)*u_i[:,j,:].clone() + self.lumbda*F.relu(self.u_i_linear(tmp_x))\nu_i[:,j,:] = u_i[:,j,:].clone()*comp_mask_ui.float()\n\nalpha_i[:,j,:] = F.tanh(self.alpha_i_linear(torch.cat((u_i[:,j,:], u_s),1)))\nalpha_i[:,j,:] = alpha_i[:,j,:].clone()*comp_mask.float()\n\n<ECODE>", "isAccepted": true, "likes": 25, "poster": "apaszke" }, { "contents": "Thanks, it’s working. But what do you mean by ‘Assignments to Variables are in-place operations’? So something like x=x+1 is in-place operation? Or just because I am using indexing in a matrix?", "isAccepted": true, "likes": null, "poster": "YongyiTang92" }, { "contents": "", "isAccepted": true, "likes": 97, "poster": "apaszke" }, { "contents": "Thanks! That’s a great explanation!", "isAccepted": true, "likes": null, "poster": "YongyiTang92" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "AaronYALai" }, { "contents": "", "isAccepted": true, "likes": 4, "poster": "wasiahmad" }, { "contents": "I think you can just try to clone the variable before you use it.", "isAccepted": true, "likes": null, "poster": "YongyiTang92" }, { "contents": "I’m getting same error for a different scenario. 
I am running a LSTM code and my code works completely fine when I use criterion as MSELoss and it gives me this error when I change my criterion to CrossEntropyLoss (of course I am feeding in desired type of inputs to my criterion). I get this error when I call loss.backward(). Strangely, the code runs perfectly fine when I call loss.backward() at every time step in the time loop instead of calling after the entire sequence has been completed. Is it possible to have some pointing to a variable at least which is causing this trouble or any other way to reason out the possible error? Thank you in advance for your help.", "isAccepted": true, "likes": 1, "poster": "bhattad2" }, { "contents": "Hi, Best regards Thomas", "isAccepted": true, "likes": 2, "poster": "tom" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Asif_Hossain" }, { "contents": "<SCODE>theta[:, 0] = 1 - theta[:, 0].clone()\n<ECODE>", "isAccepted": true, "likes": 2, "poster": "chaoyan1073" }, { "contents": "Same problem. Have you found the solution?", "isAccepted": true, "likes": null, "poster": "kevin-ssy" }, { "contents": "<SCODE> phi = x_out[:, :, 0] # [B, C]\n phi.clone()\n Batch = phi.shape[0]\n \n for i in range(Batch): \t\n \tphi[i] = phi_constant * phi[i].clone() + phi_offset \n<ECODE>", "isAccepted": true, "likes": null, "poster": "RyanCV" }, { "contents": "actual, using a list to append the values in the for loop solved this problem.", "isAccepted": true, "likes": null, "poster": "RyanCV" }, { "contents": "", "isAccepted": true, "likes": 3, "poster": "cerisara" }, { "contents": "", "isAccepted": true, "likes": 5, "poster": "gwg" }, { "contents": "Hi apaszke, I encountered this error in a cuda extension. <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Function\nimport sys, os\nsys.path.insert(0, os.path.abspath(\"build/lib.linux-x86_64-3.6/\"))\nimport reprelu_cuda, reprelu_cpp\n\nclass RepReLUFunction(Function):\n @staticmethod\n def forward(ctx, input, pos_weight, neg_weight):\n\n pos_weight = pos_weight.contiguous()\n neg_weight = neg_weight.contiguous()\n\n output = reprelu_cuda.forward(input, pos_weight, neg_weight)[0]\n \n ctx.save_for_backward(input, pos_weight, neg_weight)\n\n return output\n\n @staticmethod\n def backward(ctx, grad_output):\n\n outputs = reprelu_cuda.backward(grad_output.contiguous(), *ctx.saved_variables)\n\n d_input, d_pos_weight, d_neg_weight = outputs\n\n return d_input, d_pos_weight, d_neg_weight\n\nclass RepReLU(nn.Module):\n\n def __init__(self, planes, neg_slope=0.25):\n \n super(RepReLU, self).__init__()\n\n\n \n def forward(self, x):\n\n w_pos = torch.nn.several_computations(x)\n w_neg = torch.nn.other_computations(x)\n\n return RepReLUFunction.apply(x, w_pos, w_neg)\n<ECODE> When running following test code, the error occurs: <SCODE>data = torch.zeros(1, 2, 2, 2).cuda()\nreprelu = RepReLU(2).cuda()\n\nloss = reprelu(data).sum()\nloss.backward()\n<ECODE> <SCODE>outputs = reprelu_cuda.backward(grad_output.contiguous(), *ctx.saved_variables)\n<ECODE>", "isAccepted": true, "likes": null, "poster": "zeakey" }, { "contents": "<SCODE>attention = nn.functional.softmax(torch.matmul(key.transpose(1, 2), query) / math.sqrt(num_of_channels), dim=1)\nattention[:, 0, 0] = torch.ones(attention.shape[0], requires_grad = True)\n<ECODE> Error: <SCODE>RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation\n<ECODE> However, if I run this code like this: <SCODE>attention = 
nn.functional.softmax(torch.matmul(key.transpose(1, 2), query), dim=1) / math.sqrt(num_of_channels)\nattention[:, 0, 0] = torch.ones(attention.shape[0], requires_grad = True)\n<ECODE> it works perfectly fine. What is the reason for such behavior? The resulting tensor has the attribute requires_grad set to True in both cases, only the backward function differs. Actually, what is fun is that I can just use a dummy division by 1.0 to avoid getting an error, but it does not seem elegant at all: <SCODE> attention = nn.functional.softmax(torch.matmul(key.transpose(1, 2), query) / math.sqrt(num_of_channels), dim=1)/1.0\nattention[:, 0, 0] = torch.ones(attention.shape[0], requires_grad = True)\n<ECODE>", "isAccepted": true, "likes": 1, "poster": "dkwasny" }, { "contents": "Super useful. It works for me. Thank you so much.", "isAccepted": true, "likes": null, "poster": "Shih-han_Wang" } ]
true
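A minimal way to reproduce and fix the error, written against the current tensor-based autograd rather than the old Variable API: sigmoid saves its output for backward, so writing into that output in place invalidates the graph, and cloning before the assignment avoids it.
<SCODE>import torch

x = torch.randn(4, 3, requires_grad=True)
y = torch.sigmoid(x)     # backward needs y itself
# y[:, 0] = 0            # in-place write here -> RuntimeError on backward()
y = y.clone()            # work on a copy instead
y[:, 0] = 0
y.sum().backward()       # fine now
print(x.grad[:, 0])      # zeros: the overwritten column contributes no gradient
<ECODE>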
Problems on implementation of deep neural decision forest
null
[ { "contents": "<SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport numpy as np\nfrom torch.nn.parameter import Parameter\nimport torch.optim as optim\n\nfrom data_util import load_mnist\n\n\nclass DeepNeuralDecisionForest(nn.Module):\n def __init__(self, p_keep_conv, p_keep_hidden, n_leaf, n_label, n_tree, n_depth):\n super(DeepNeuralDecisionForest, self).__init__()\n\n self.conv = nn.Sequential()\n self.conv.add_module('conv1', nn.Conv2d(1, 10, kernel_size=5))\n self.conv.add_module('relu1', nn.ReLU())\n self.conv.add_module('pool1', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop1', nn.Dropout(1 - p_keep_conv))\n self.conv.add_module('conv2', nn.Conv2d(10, 20, kernel_size=5))\n self.conv.add_module('relu2', nn.ReLU())\n self.conv.add_module('pool2', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop2', nn.Dropout(1 - p_keep_conv))\n\n self._nleaf = n_leaf\n self._nlabel = n_label\n self._ntree = n_tree\n self._ndepth = n_depth\n self._batchsize = 128\n\n self.treelayers = nn.ModuleList()\n self.pi_e = nn.ParameterList()\n for i in xrange(self._ntree):\n treelayer = nn.Sequential()\n treelayer.add_module('sub_linear1', nn.Linear(320, 50))\n treelayer.add_module('sub_relu', nn.ReLU())\n treelayer.add_module('sub_drop1', nn.Dropout(1 - p_keep_hidden))\n treelayer.add_module('sub_linear2', nn.Linear(50, self._nleaf))\n treelayer.add_module('sub_sigmoid', nn.Sigmoid())\n \n self.treelayers.append(treelayer)\n self.pi_e.append(Parameter(self.init_prob_weights([self._nleaf, self._nlabel], -2, 2)))\n\n def init_pi(self):\n return torch.ones(self._nleaf, self._nlabel)/float(self._nlabel)\n\n def init_weights(self, shape):\n return torch.randn(shape) * 0.01\n\n def init_prob_weights(self, shape, minval=-5, maxval=5):\n return torch.Tensor(shape[0], shape[1]).uniform_(minval, maxval)\n\n def compute_mu(self, flat_decision_p_e):\n n_batch = self._batchsize\n batch_0_indices = torch.range(0, n_batch * self._nleaf - 1, self._nleaf).unsqueeze(1).repeat(1, self._nleaf).long()\n\n in_repeat = self._nleaf / 2\n out_repeat = n_batch\n\n batch_complement_indices = torch.LongTensor(\n np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf))\n\n # First define the routing probabilistics d for root nodes\n mu_e = []\n indices_var = Variable((batch_0_indices + batch_complement_indices).view(-1)) \n indices_var = indices_var.cuda()\n # iterate over each tree\n for i, flat_decision_p in enumerate(flat_decision_p_e):\n mu = torch.gather(flat_decision_p, 0, indices_var).view(n_batch, self._nleaf)\n mu_e.append(mu)\n\n # from the scond layer to the last layer, we make the decison nodes\n for d in xrange(1, self._ndepth + 1):\n indices = torch.range(2 ** d, 2 ** (d + 1) - 1) - 1\n tile_indices = indices.unsqueeze(1).repeat(1, 2 ** (self._ndepth - d + 1)).view(1, -1)\n batch_indices = batch_0_indices + tile_indices.repeat(n_batch, 1).long()\n\n in_repeat = in_repeat / 2\n out_repeat = out_repeat * 2\n # Again define the indices that picks d and 1-d for the nodes\n batch_complement_indices = torch.LongTensor(\n np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf))\n\n mu_e_update = []\n indices_var = Variable((batch_indices + batch_complement_indices).view(-1))\n indices_var = indices_var.cuda()\n for mu, flat_decision_p in zip(mu_e, flat_decision_p_e):\n mu = torch.mul(mu, torch.gather(flat_decision_p, 0, indices_var).view(\n n_batch, self._nleaf))\n 
mu_e_update.append(mu)\n mu_e = mu_e_update\n return mu_e\n\n def compute_py_x(self, mu_e):\n py_x_e = []\n n_batch = self._batchsize\n\n for i in xrange(len(mu_e)):\n py_x_tree = mu_e[i].unsqueeze(2).repeat(1, 1, self._nlabel).mul(self.pi_e[i].unsqueeze(0).repeat(n_batch, 1, 1)).mean(1)\n py_x_e.append(py_x_tree.squeeze().unsqueeze(0))\n\n py_x_e = torch.cat(py_x_e, 0)\n py_x = py_x_e.mean(0).squeeze()\n \n return py_x\n\n def forward(self, x):\n feat = self.conv.forward(x)\n feat = feat.view(-1, 320)\n self._batchsize = x.size(0)\n\n flat_decision_p_e = []\n \n for i in xrange(len(self.treelayers)):\n decision_p = self.treelayers[i].forward(feat)\n decision_p_comp = 1 - decision_p\n decision_p_pack = torch.cat((decision_p, decision_p_comp))\n flat_decision_p = decision_p_pack.view(-1)\n flat_decision_p_e.append(flat_decision_p)\n self.pi_e[i] = Parameter(nn.Softmax()(self.pi_e[i]).data)\n \n mu_e = self.compute_mu(flat_decision_p_e)\n \n py_x = self.compute_py_x(mu_e)\n #py_x = nn.Softmax()(py_x)\n return py_x\n# training process\ndef train(model, loss, optimizer, X_val, Y_val):\n X_val = Variable(X_val)\n Y_val = Variable(Y_val)\n\n optimizer.zero_grad()\n\n py_x = model.forward(X_val)\n output = loss.forward(py_x, Y_val)\n output.backward()\n\n optimizer.step()\n\n return output.data[0]\n\n# testing process\ndef predict(model, X_val):\n model.eval()\n X_val = Variable(X_val)\n py_x = model.forward(X_val)\n \n return py_x.data.cpu().numpy().argmax(axis=1)\n\ndef main():\n ################ Definition #########################\n DEPTH = 3 # Depth of a tree\n N_LEAF = 2 ** (DEPTH + 1) # Number of leaf node\n N_LABEL = 10 # Number of classes\n N_TREE = 1 # Number of trees (ensemble)\n N_BATCH = 128 # Number of data points per mini-batch\n # network hyperparameters\n p_conv_keep = 0.8\n p_full_keep = 0.5\n\n cuda = 1\n \n model = DeepNeuralDecisionForest(p_keep_conv = p_conv_keep, p_keep_hidden = p_full_keep, n_leaf= N_LEAF, n_label= N_LABEL, n_tree= N_TREE, n_depth= DEPTH)\n\n ################ Load dataset #######################\n print('# data loading')\n trX, teX, trY, teY = load_mnist(onehot=False)\n trX = trX.reshape(-1, 1, 28, 28)\n teX = teX.reshape(-1, 1, 28, 28)\n\n trX = torch.from_numpy(trX).float()\n teX = torch.from_numpy(teX).float()\n trY = torch.from_numpy(trY).long()\n\n n_examples = len(trX)\n\n if cuda:\n model.cuda()\n trX = trX.cuda()\n trY = trY.cuda()\n teX = teX.cuda()\n\n optimizer = optim.RMSprop(model.parameters(), lr=1e-2, weight_decay=0.001)\n batch_size = N_BATCH\n print('# begin training')\n loss = nn.NLLLoss()\n \n for i in range(100):\n model.train()\n cost = 0.\n num_batches = n_examples / batch_size\n for k in range(num_batches):\n start, end = k * batch_size, (k + 1) * batch_size\n cost += train(model, loss, optimizer, trX[start:end], trY[start:end])\n #model.pi_e = repackage_hidden(model.pi_e)\n # Define cost and optimization method\n predY = predict(model, teX)\n print(\"Epoch %d, cost = %f, test acc = %.2f%%\"\n % (i + 1, cost / num_batches, 100. * np.mean(predY == teY )))\n\n\nif __name__=='__main__':\n print \"Training Deep CNN model on MNIST dataset\"\n main()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "In the tensorflow implementation you reference, they use: train_step = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost) Did you try with these parameters ? 
In your code, it would be: optimizer = optim.RMSprop(model.parameters(), lr=0.001, weight_decay=0.9) Maybe you just need to increase the L2 regularization (weight decay)", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "I think this is not the main problem, thank you for your help anyway.", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "Here is a working version : 99 % accuracy after 10 epochs <SCODE>from __future__ import print_function\nimport argparse\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom torch.autograd import Variable\nfrom torch.nn.parameter import Parameter\n\n# Training settings\nparser = argparse.ArgumentParser(description='PyTorch MNIST Example')\nparser.add_argument('--batch-size', type=int, default=64, metavar='N',\n help='input batch size for training (default: 64)')\nparser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\n help='input batch size for testing (default: 1000)')\nparser.add_argument('--epochs', type=int, default=10, metavar='N',\n help='number of epochs to train (default: 2)')\nparser.add_argument('--lr', type=float, default=0.001, metavar='LR',\n help='learning rate (default: 0.001)')\nparser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n help='SGD momentum (default: 0.9)')\nparser.add_argument('--no-cuda', action='store_true', default=False,\n help='enables CUDA training')\nparser.add_argument('--seed', type=int, default=1, metavar='S',\n help='random seed (default: 1)')\nparser.add_argument('--log-interval', type=int, default=10, metavar='N',\n help='how many batches to wait before logging training status')\nargs = parser.parse_args()\nargs.cuda = not args.no_cuda and torch.cuda.is_available()\n\ntorch.manual_seed(args.seed)\nif args.cuda:\n torch.cuda.manual_seed(args.seed)\n\n\nkwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}\ntrain_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=args.batch_size, shuffle=True, **kwargs)\ntest_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=False, transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=args.batch_size, shuffle=True, **kwargs)\n\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = F.relu(self.fc2(x))\n return F.log_softmax(x)\n \nclass DeepNeuralDecisionForest(nn.Module):\n def __init__(self, p_keep_conv, p_keep_hidden, n_leaf, n_label, n_tree, n_depth):\n super(DeepNeuralDecisionForest, self).__init__()\n\n self.conv = nn.Sequential()\n self.conv.add_module('conv1', nn.Conv2d(1, 10, kernel_size=5))\n self.conv.add_module('relu1', nn.ReLU())\n self.conv.add_module('pool1', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop1', nn.Dropout(1-p_keep_conv))\n 
self.conv.add_module('conv2', nn.Conv2d(10, 20, kernel_size=5))\n self.conv.add_module('relu2', nn.ReLU())\n self.conv.add_module('pool2', nn.MaxPool2d(kernel_size=2))\n self.conv.add_module('drop2', nn.Dropout(1-p_keep_conv))\n\n self._nleaf = n_leaf\n self._nlabel = n_label\n self._ntree = n_tree\n self._ndepth = n_depth\n self._batchsize = args.batch_size\n\n self.treelayers = nn.ModuleList()\n self.pi_e = nn.ParameterList()\n for i in range(self._ntree):\n treelayer = nn.Sequential()\n treelayer.add_module('sub_linear1', nn.Linear(320, 50))\n treelayer.add_module('sub_relu', nn.ReLU())\n treelayer.add_module('sub_drop1', nn.Dropout(1-p_keep_hidden))\n treelayer.add_module('sub_linear2', nn.Linear(50, self._nleaf))\n treelayer.add_module('sub_sigmoid', nn.Sigmoid())\n \n self.treelayers.append(treelayer)\n self.pi_e.append(Parameter(self.init_prob_weights([self._nleaf, self._nlabel], -2, 2)))\n\n def init_pi(self):\n return torch.ones(self._nleaf, self._nlabel)/float(self._nlabel)\n\n def init_weights(self, shape):\n return torch.randn(shape).uniform(-0.01,0.01)\n\n def init_prob_weights(self, shape, minval=-5, maxval=5):\n return torch.Tensor(shape[0], shape[1]).uniform_(minval, maxval)\n\n def compute_mu(self, flat_decision_p_e):\n n_batch = self._batchsize\n batch_0_indices = torch.range(0, n_batch * self._nleaf - 1, self._nleaf).unsqueeze(1).repeat(1, self._nleaf).long()\n\n in_repeat = self._nleaf // 2\n out_repeat = n_batch\n\n batch_complement_indices = torch.LongTensor(\n np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf))\n\n # First define the routing probabilistics d for root nodes\n mu_e = []\n indices_var = Variable((batch_0_indices + batch_complement_indices).view(-1)) \n indices_var = indices_var.cuda()\n # iterate over each tree\n for i, flat_decision_p in enumerate(flat_decision_p_e):\n mu = torch.gather(flat_decision_p, 0, indices_var).view(n_batch, self._nleaf)\n mu_e.append(mu)\n\n # from the scond layer to the last layer, we make the decison nodes\n for d in range(1, self._ndepth + 1):\n indices = torch.range(2 ** d, 2 ** (d + 1) - 1) - 1\n tile_indices = indices.unsqueeze(1).repeat(1, 2 ** (self._ndepth - d + 1)).view(1, -1)\n batch_indices = batch_0_indices + tile_indices.repeat(n_batch, 1).long()\n\n in_repeat = in_repeat // 2\n out_repeat = out_repeat * 2\n # Again define the indices that picks d and 1-d for the nodes\n batch_complement_indices = torch.LongTensor(\n np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf))\n\n mu_e_update = []\n indices_var = Variable((batch_indices + batch_complement_indices).view(-1))\n indices_var = indices_var.cuda()\n for mu, flat_decision_p in zip(mu_e, flat_decision_p_e):\n mu = torch.mul(mu, torch.gather(flat_decision_p, 0, indices_var).view(\n n_batch, self._nleaf))\n mu_e_update.append(mu)\n mu_e = mu_e_update\n return mu_e\n\n def compute_py_x(self, mu_e, leaf_p_e):\n py_x_e = []\n n_batch = self._batchsize\n\n for i in range(len(mu_e)):\n py_x_tree = mu_e[i].unsqueeze(2).repeat(1, 1, self._nlabel).mul(leaf_p_e[i].unsqueeze(0).repeat(n_batch, 1, 1)).mean(1)\n py_x_e.append(py_x_tree.squeeze().unsqueeze(0))\n\n py_x_e = torch.cat(py_x_e, 0)\n py_x = py_x_e.mean(0).squeeze()\n \n return py_x\n\n def forward(self, x):\n feat = self.conv.forward(x)\n feat = feat.view(-1, 320)\n self._batchsize = x.size(0)\n\n flat_decision_p_e = []\n leaf_p_e = []\n \n for i in range(len(self.treelayers)):\n decision_p = 
self.treelayers[i].forward(feat)\n decision_p_comp = 1 - decision_p\n decision_p_pack = torch.cat((decision_p, decision_p_comp))\n flat_decision_p = decision_p_pack.view(-1)\n flat_decision_p_e.append(flat_decision_p)\n leaf_p = F.softmax(self.pi_e[i])\n leaf_p_e.append(leaf_p)\n \n mu_e = self.compute_mu(flat_decision_p_e)\n \n py_x = self.compute_py_x(mu_e, leaf_p_e)\n return torch.log(py_x)\n\n\n################ Definition ######################### \nDEPTH = 3 # Depth of a tree\nN_LEAF = 2 ** (DEPTH + 1) # Number of leaf node\nN_LABEL = 10 # Number of classes\nN_TREE = 10 # Number of trees (ensemble)\nN_BATCH = args.batch_size#args.batch-size # Number of data points per mini-batch\n# network hyperparameters\np_conv_keep = 0.8\np_full_keep = 0.5\n\n \nmodel = DeepNeuralDecisionForest(p_keep_conv = p_conv_keep, p_keep_hidden = p_full_keep, n_leaf= N_LEAF, n_label= N_LABEL, n_tree= N_TREE, n_depth= DEPTH)\n\n#model = Net()\nif args.cuda:\n model.cuda()\n\noptimizer = optim.RMSprop(model.parameters(), lr=args.lr) \n\ndef train(epoch):\n model.train()\n for batch_idx, (data, target) in enumerate(train_loader):\n if args.cuda:\n data, target = data.cuda(), target.cuda()\n data, target = Variable(data), Variable(target)\n optimizer.zero_grad()\n output = model(data)\n loss = F.nll_loss((output), target)\n loss.backward()\n optimizer.step()\n if batch_idx % args.log_interval == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. * batch_idx / len(train_loader), loss.data[0]))\n\ndef test(epoch):\n model.eval()\n test_loss = 0\n correct = 0\n for data, target in test_loader:\n if args.cuda:\n data, target = data.cuda(), target.cuda()\n data, target = Variable(data, volatile=True), Variable(target)\n output = model(data)\n test_loss += F.nll_loss(output, target).data[0]\n pred = output.data.max(1)[1] # get the index of the max log-probability\n correct += pred.eq(target.data).cpu().sum()\n\n test_loss = test_loss\n test_loss /= len(test_loader) # loss function already averages over batch size\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n 100. * correct / len(test_loader.dataset)))\n\n\nfor epoch in range(1, args.epochs + 1):\n train(epoch)\n test(epoch)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "stephane_guillitte" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you very much.", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Hi, could you tell me how to run this code? May be used the command line “python train.py”?", "isAccepted": false, "likes": null, "poster": "Jianyuan_Sun" } ]
false
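The essential difference between the failing and the working version above is how the leaf distributions pi are handled: the working code keeps them as raw nn.Parameters and only applies softmax inside forward, instead of overwriting the Parameter with its softmaxed value every step (which detaches it from the graph). A stripped-down sketch of that pattern:
<SCODE>import torch
import torch.nn as nn
import torch.nn.functional as F

class LeafDistribution(nn.Module):
    def __init__(self, n_leaf, n_label):
        super(LeafDistribution, self).__init__()
        # keep the raw scores as the Parameter; normalize them on the fly
        self.pi = nn.Parameter(torch.Tensor(n_leaf, n_label).uniform_(-2, 2))

    def forward(self, mu):
        leaf_p = F.softmax(self.pi, dim=1)   # (n_leaf, n_label)
        return torch.mm(mu, leaf_p)          # routing probs (batch, n_leaf) -> (batch, n_label)
<ECODE>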
Saving torch models
null
[ { "contents": "<SCODE>class VAE(nn.Module):\n def __init__(self):\n super(VAE, self).__init__()\n\n self.fc1 = nn.Linear(784, 400)\n\nmodel = VAE()\n\nwith open('model_save', 'wb') as f: \ntorch.save(model, f)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Russel_Russel" }, { "contents": "I have no idea when are you getting the first error and what does it mean. I’ve never seen that.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Russel_Russel" }, { "contents": "<SCODE>net = Net(...).cuda()\ntorch.save(net.state_dict(), './net.pth')\nnet.load_state_dict(torch.load('./net.pth'))<ECODE>", "isAccepted": false, "likes": 5, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "x0s" }, { "contents": "I am implementing these exact same lines of code, but I get an error. Code : (On a different computer, with GPU) Code: (on my laptop) Error : module ‘torch._C’ has no attribute ‘_cuda_getDevice’ Note : I trained my model with cuda = True on a virtual environment on a network (with GPUs), but I want to evaluate my model on my laptop. I believe we need to ‘switch off’ the cuda before saving. Could you help?", "isAccepted": false, "likes": null, "poster": "jphreid" }, { "contents": "Ok, got it! … map_location=lambda storage, loc: storage) : load the model on CPU.", "isAccepted": false, "likes": null, "poster": "jphreid" } ]
false
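Putting the answers above together, a sketch of the state_dict round trip, including loading GPU-trained weights on a CPU-only machine (VAE refers to the class from the question):
<SCODE>import torch

model = VAE()
torch.save(model.state_dict(), 'net.pth')        # save only the weights

# later, possibly on a machine without CUDA: remap GPU storages to the CPU
state = torch.load('net.pth', map_location=lambda storage, loc: storage)
restored = VAE()
restored.load_state_dict(state)
restored.eval()
<ECODE>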
Different fine-tuning results between Torch and PyTorch
null
[ { "contents": "I’m fine-tuning a same custom dataset using Torch and PyTorch based on ResNet 34. \nWith Torch, I use fb.resnet.torch 11 with learning rate 0.001. After 50 epoch:\n \ntop1: 89.267 top5: 97.933\n \nWith PyTorch, I use code below for fine-tuing. After 90 epoch:\n\nlearning rate = 0.01, top1: 78.500, top5: 94.083\nlearning rate = 0.001, top1: 74.792, top5: 92.583\n\n learning rate = 0.01, top1: 78.500, top5: 94.083 learning rate = 0.001, top1: 74.792, top5: 92.583 You can see that PyTorch fine-tuning result is still not so good as Torch. Fine-tuning ResNet 18 has similar results. Any suggestion or guidance? PyTorch code used for fine-tuning: <SCODE>class FineTuneModel(nn.Module):\n def __init__(self, original_model, arch, num_classes):\n super(FineTuneModel, self).__init__()\n # Everything except the last linear layer\n self.features = nn.Sequential(*list(original_model.children())[:-1])\n self.classifier = nn.Sequential(\n nn.Linear(512, num_classes)\n )\n\n # Freeze those weights\n for p in self.features.parameters():\n p.requires_grad = False\n\n\n def forward(self, x):\n f = self.features(x)\n f = f.view(f.size(0), -1)\n y = self.classifier(f)\n return y<ECODE>", "isAccepted": false, "likes": 1, "poster": "panovr" }, { "contents": "Your fine tune model looks OK to me. I don’t have enough context to see what could be different. Can you post your fine-tuning code?", "isAccepted": false, "likes": null, "poster": "colesbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "panovr" }, { "contents": "one possible reason is that, torch adopts more image transforms for data augmentation than pytorch.", "isAccepted": false, "likes": 1, "poster": "shicai" }, { "contents": "I think that the difference comes from the fact that you fix all the weights in the pytorch model (except from the last classifier), while in lua torch you are fine-tuning the whole network.", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "different data augmentations or image transforms “hard” fine-tuning and “soft” fine-tuning From my dataset experiment result, I think maybe “soft” fine-tuning is better than “hard” fine-tuning.", "isAccepted": false, "likes": null, "poster": "panovr" } ]
false
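A middle ground between the "hard" fine-tuning above (frozen features) and the "soft" fine-tuning done by fb.resnet.torch (everything trainable) is to train the whole network but give the pretrained layers a smaller learning rate through optimizer parameter groups. A sketch built on the FineTuneModel class above (original_model and num_classes are placeholders):
<SCODE>import torch.optim as optim

model = FineTuneModel(original_model, 'resnet34', num_classes)
for p in model.features.parameters():
    p.requires_grad = True            # undo the freezing done in __init__

optimizer = optim.SGD([
    {'params': model.features.parameters(), 'lr': 1e-3},    # pretrained backbone
    {'params': model.classifier.parameters(), 'lr': 1e-2},  # new classifier
], momentum=0.9, weight_decay=1e-4)
<ECODE>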
Accessing a variable inside the model
null
[ { "contents": "What’s a good way to access a convolutional filter (the learned weights) in the model? edit: The filter in question nn.Conv2d(1, 1, 5, stride=1, padding=2, bias=False)", "isAccepted": false, "likes": 1, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "That works. <SCODE>Parameter containing:\n(0 ,0 ,.,.) = \n -0.1781 -0.3750 0.0752 -0.1060 0.1356\n 0.1607 -0.2711 0.1783 0.2942 0.0471\n 0.1992 0.0228 -0.1627 -0.4729 -0.0560\n 0.1801 -0.0715 0.0305 -0.0124 -0.1072\n 0.2290 0.3730 0.1166 -0.1296 0.0746\n[torch.cuda.FloatTensor of size 1x1x5x5 (GPU 0)]<ECODE>", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "How can you call the conv operator with non learnable tensors? Suppose I have that tensor I posted earlier that I really like, I want it to be constant and not have to worry about it being unfrozen during a p.requires_grad = True operation such as is often performed in GANs. Or I have it as a variable declared outside the class of the model where I have direct control over it from the main execution flow.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "It’s not obvious how can you force conv2d to use it?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "Wrap it in a Variable at every forward and give it to Conv2d", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "So I need to manually set self.conv.weight = Variable(filter) inside forward? The way you said “give it to” implies that it can be passed as an argument? I don’t see how.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Don’t save the variable anywhere! It should be created at every forward and use as a local Variable. I don’t really know what you want to do but I guess this might help: <SCODE>filter = Variable(self.filter) # self.filter is a registered buffer (tensor)\nresult = F.conv2d(input, filter)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
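A self-contained sketch of the register_buffer plus F.conv2d pattern suggested above: the fixed 5x5 filter travels with state_dict and .cuda() calls, but never appears in parameters(), so it cannot be unfrozen by accident.
<SCODE>import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedBlur(nn.Module):
    def __init__(self):
        super(FixedBlur, self).__init__()
        kernel = torch.ones(1, 1, 5, 5) / 25.0      # (out_channels, in_channels, kH, kW)
        self.register_buffer('kernel', kernel)       # a buffer, not a Parameter

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=2)

m = FixedBlur()
print(list(m.parameters()))                # [] -- nothing for an optimizer to touch
out = m(torch.randn(1, 1, 28, 28))
<ECODE>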
Autoencoders in Pytorch
null
[ { "contents": "Would Pytorch support something like this? How does one go about implementing a simple Autoencoder? <SCODE>class Encoder(nn.Module):\n def __init__(self):\n super(Encoder, self).__init__()\n self.fc1 = nn.Linear(784, 32)\n\n def forward(self, x):\n return F.sigmoid(self.fc1(x))\n\nclass Decoder(nn.Module):\n def __init__(self):\n super(Decoder, self).__init__()\n self.fc1 = nn.Linear(32, 784)\n \n def forward(self, x):\n return F.sigmoid(self.fc1(x))\n\nclass AutoEncoder(nn.Module):\n def __init__(self):\n super(AutoEncoder, self).__init__()\n self.fc1 = Encoder()\n self.fc2 = Decoder()\n\n def forward(self, x):\n return self.fc2(self.fc1(x))\n\nmodel = AutoEncoder()\noptimizer = optim.Adam(model.parameters(), lr=0.5)\nfor epoch in range(1, 201):\n train(epoch)\n test(epoch, validation)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "Just looking for the simplest possible implementation of an AutoEncoder here", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "Yes, that should work.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "If you really want to do the simplest, I would suggest: <SCODE>class Autoencoder(nn.Module):\n def __init__(self, ):\n super(Autoencoder, self).__init__()\n self.fc1 = nn.Linear(784, 32)\n self.fc2 = nn.Linear(32, 784)\n self.sigmoid = nn.Sigmoid()\n\n def forward(self, x):\n x = self.sigmoid(self.fc1(x))\n x = self.sigmoid(self.fc2(x))\n return x<ECODE>", "isAccepted": false, "likes": 3, "poster": "alexis-jacq" }, { "contents": "<SCODE>matrices expected, got 4D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensorMath.c:857\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "In that case your approach seems simpler. You can even do: <SCODE>encoder = nn.Sequential(nn.Linear(782,32), nn.Sigmoid())\ndecoder = nn.Sequential(nn.Linear(32,732), nn.Sigmoid())\nautoencoder = nn.Sequential(encoder, decoder)<ECODE>", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Wong_Fungtion" }, { "contents": "So you want a kind of balanced autoencoder, where Encoder = Transpose(Decoder)? In that case, I would do something like this: <SCODE>class BalancedAE(nn.Module):\n def __init__(self, ):\n super(BalancedAE, self).__init__()\n self.encoder = nn.Parameter(torch.rand(size_input, size_output))\n\n def forward(self, x):\n x = torch.sigmoid(torch.mm(self.encoder, x))\n x = torch.sigmoid(torch.mm(x, torch.transpose(self.encoder, 0, 1))\n return x\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "alexis-jacq" } ]
false
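The "matrices expected, got 4D, 2D tensors" error above typically comes from feeding image batches of shape (N, 1, 28, 28) straight into nn.Linear; flattening first fixes it. A sketch of one training step for the simple autoencoder, assuming a standard MNIST train_loader and an optimizer are already set up:
<SCODE>import torch.nn.functional as F

for images, _ in train_loader:
    x = images.view(images.size(0), -1)   # (N, 1, 28, 28) -> (N, 784)
    out = autoencoder(x)
    loss = F.mse_loss(out, x)             # reconstruct the input itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
<ECODE>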
Problem about WGAN
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "this has been fixed in the latest version of the code.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "your meaning the fixed problem is WGAN code or pytorch?", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yichuan9527" } ]
false
PyTorch apply transform
null
[ { "contents": "If we have a dataloader <SCODE>train_loader = torch.utils.data.DataLoader(trainset_imoprt, batch_size=200, shuffle=True)\nsemi_loader = torch.utils.data.DataLoader(trainunl_imoprt, batch_size=200, shuffle=True)\nvalid_loader = torch.utils.data.DataLoader(validset_import, batch_size=200, shuffle=True)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "DataLoaders don’t support transforms, you need to make your dataset apply them.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
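Since the DataLoader itself takes no transform argument, one way to get the effect is a small wrapper Dataset (TransformedDataset here is a hypothetical name) around an existing dataset such as the ones above:
<SCODE>from torch.utils.data import Dataset, DataLoader

class TransformedDataset(Dataset):
    def __init__(self, base, transform=None):
        self.base = base
        self.transform = transform

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        x, y = self.base[idx]
        if self.transform is not None:
            x = self.transform(x)
        return x, y

train_loader = DataLoader(TransformedDataset(trainset_imoprt, transform=my_transform),
                          batch_size=200, shuffle=True)
<ECODE>
Here my_transform stands for whatever callable should be applied per sample.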
Strange bug with numpy conversion and back
null
[ { "contents": "Kind of weird bug. if i do: <SCODE>train_loader = torch.utils.data.DataLoader(trainset_imoprt, batch_size=200, shuffle=True)\nsemi_loader = torch.utils.data.DataLoader(trainunl_imoprt, batch_size=200, shuffle=True)\nvalid_loader = torch.utils.data.DataLoader(validset_import, batch_size=200, shuffle=True)\n\nfeatures = train_loader.dataset.train_data.numpy()\nlabels = train_loader.dataset.train_labels.numpy()\n\nimg = features\nimg = img.astype('float32')\nlab = labels\n\nimg, lab = torch.from_numpy(img), torch.from_numpy(lab)\n\ntrain = torch.utils.data.TensorDataset(img.unsqueeze(1), lab)\ntrain_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "Maybe it’s because of the different batch size or shuffle? Also, I don’t understand why you have to unsqueeze a dimension that wasn’t used by the first train_loader.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
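One thing worth checking in the snippet above: dataset.train_data is the raw uint8 storage, so any ToTensor()/Normalize() transforms attached to the original dataset are bypassed when the tensors are rebuilt by hand. If the first loader used transforms, the equivalent scaling has to be reapplied, for example:
<SCODE>img = features.astype('float32') / 255.0   # what ToTensor() would have done
img = (img - 0.1307) / 0.3081               # and Normalize(), if the original loader used it
<ECODE>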
[Solved] Confusing HTTP error when installing pytorch
null
[ { "contents": "<SCODE>Fetching package metadata ...........\nSolving package specifications: .\n\nPackage plan for installation in environment /home/zhanglu/anaconda3:\n\nThe following NEW packages will be INSTALLED:\n\npytorch: 0.1.9-py36_2 soumith\ntorchvision: 0.1.7-py36_1 soumith\n\n\nThe following packages will be UPDATED:\n\nconda: 4.3.8-py36_0 --> 4.3.13-py36_0\n\n\nProceed ([y]/n)? y\n\npytorch-0.1.9- 100% |################################| Time: 0:02:49 1.44 MB/s\npytorch-0.1.9- 100% |################################| Time: 0:05:52 692.97 kB/s\npytorch-0.1.9- 100% |################################| Time: 0:01:15 3.23 MB/s\n\nCondaError: CondaHTTPError: HTTP None None for url \nElapsed: None\n\nAn HTTP error occurred when trying to retrieve this URL.\nConnectionError(ReadTimeoutError(\"HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com',\n port=443): Read timed out.\",),)\n\nCondaError: CondaHTTPError: HTTP None None for url \nElapsed: None\n\nAn HTTP error occurred when trying to retrieve this URL.\nConnectionError(ReadTimeoutError(\"HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com',\n port=443): Read timed out.\",),)\n\nCondaError: CondaHTTPError: HTTP None None for url \nElapsed: None\n\nAn HTTP error occurred when trying to retrieve this URL.\nConnectionError(ReadTimeoutError(\"HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com',\n port=443): Read timed out.\",),)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "squirrel" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "squirrel" }, { "contents": "I am getting this error when I use conda to install magma-cuda80. Reboot does not help me. Is there any alternative way to install magma-cuda80?", "isAccepted": false, "likes": null, "poster": "wangg12" }, { "contents": "<SCODE>conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/\nconda config --set show_channel_urls yes\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wangg12" }, { "contents": "what do you to slove the problem about magma-cuda80 package?", "isAccepted": false, "likes": null, "poster": "qichaotang" }, { "contents": "Though i reboot,can not solve the problem.", "isAccepted": false, "likes": null, "poster": "1113" }, { "contents": "Please update to the latest version of Navigator. $ conda update anaconda-navigator", "isAccepted": false, "likes": null, "poster": "SarveshTS" } ]
false
How can I create training and testing datasets to use for dcgan?
vision
[ { "contents": "HI,I want use the the dcgan from pytorch examples to generate faces, and i want to use training and testing images that I collected by myself. So how can I create dataset like cifar-10-batches-py? Thanks very much!", "isAccepted": false, "likes": null, "poster": "xiaoxing_zeng" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks very much!", "isAccepted": false, "likes": null, "poster": "xiaoxing_zeng" } ]
false
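There is no need to repack the images into cifar-10-batches-py style files; the dcgan example can read a plain folder of images through torchvision.datasets.ImageFolder (it expects at least one subdirectory, e.g. faces/all/, and the GAN simply ignores the class labels). A sketch assuming a recent torchvision (older releases call Resize "Scale"):
<SCODE>import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

dataset = dset.ImageFolder(root='./faces',
                           transform=transforms.Compose([
                               transforms.Resize(64),
                               transforms.CenterCrop(64),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                         shuffle=True, num_workers=2)
<ECODE>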
Odd segmentation fault if torch is imported
null
[ { "contents": "<SCODE>from skimage.data import astronaut\nfrom matplotlib import pyplot as plt\nimport torch\n\nf = plt.figure()\nplt.imshow(astronaut())\nf.savefig('0.pdf')\n<ECODE> I have no idea what’s going on.", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Reverse the order of imports for now", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" } ]
false
Sparse Tensors in PyTorch
null
[ { "contents": "What is the current state of sparse tensors in PyTorch?", "isAccepted": false, "likes": 2, "poster": "Ilya_Kostrikov" }, { "contents": "right now, the description / summary of this PR that was merged 11 hours ago gives a good idea of the current state of things:", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "But we’re not documenting them on purpose, because they might undergo some more changes in the future. The first step was to implement sprase updates for Embedding. Can I ask what’s your use case?", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I need just basic sparse matrix multiplication in order to implement a Graph ConvNet model. For example: and The Laplacian matrix is extremely sparse is this case.", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I need sparse x dense -> dense. So I can use PyTorch in this case. Thanks a lot!", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "But it’s not in autograd yet? Is there a fast way to add it?", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "a fast and local way is for you to write an autograd function for yourself.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "That’s my question too (now on 21st Sept). Can anyone comment on the current state of sparse tensors in PyTorch? Thank you", "isAccepted": false, "likes": 4, "poster": "hconroy" }, { "contents": "I would like to update variable with sparse gradients. and it’s normal. I know that wasn’t support by tensorflow. so how about pytorch/. Thank you!", "isAccepted": false, "likes": 1, "poster": "11131" }, { "contents": "What is the status of Sparse Support in PyTorch? I’d like to contribute to the module. What are the current design strategies in place? There seems to be no detail or comprehensive discussion on this aspect.", "isAccepted": false, "likes": null, "poster": "pradeepgopalakrishna" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "rajeshpiryani" } ]
false
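For the graph-convnet case above (a fixed sparse Laplacian multiplied with dense features), a hedged sketch of a custom autograd Function that only propagates gradients to the dense side; recent releases also ship torch.sparse.mm, which covers this pattern directly:
<SCODE>import torch

class SparseMM(torch.autograd.Function):
    # out = sparse @ dense, with the sparse matrix treated as a constant
    @staticmethod
    def forward(ctx, sparse, dense):
        ctx.save_for_backward(sparse)
        return torch.mm(sparse, dense)

    @staticmethod
    def backward(ctx, grad_output):
        sparse, = ctx.saved_tensors
        grad_dense = torch.mm(sparse.t(), grad_output)
        return None, grad_dense            # no gradient for the sparse Laplacian

# usage: y = SparseMM.apply(laplacian, x)  # laplacian: sparse COO (N, N), x: dense (N, F)
<ECODE>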
Which version of python do you recommend to use for PyTorch?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
How can I compute seq2seq loss using a mask?
null
[ { "contents": "In TensorFlow, i can do this as below. <SCODE># targets is an int64 tensor of shape (batch_size, padded_length) which contains word indices. \n# masks is a tensor of shape (batch_size, padded_length) which contains 0 or 1 (0 if pad otherwise 1).\n\noutputs = decoder(...) # unnormalized scores of shape (batch_size, padded_length, vocab_size) \noutputs = tf.reshape(outputs, (-1, vocab_size))\ntargets = tf.reshape(targets, (-1))\nlosses = tf.nn.sparse_softmax_cross_entropy_loss(outputs, targets) # loss of shape (batch_size*padded_length)\n\nmasks = tf.reshape(masks, (-1))\nloss = losses * masks \n<ECODE> <SCODE>criterion = nn.CrossEntropyLoss()\noutputs = decoder(features, inputs) # (batch_size, padded_length, vocab_size)\nloss = criterion(outputs.view(-1, vocab_size), targets.view(-1)) # this gives a scalar not tensor\n<ECODE> How can i solve this problem?", "isAccepted": false, "likes": 3, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "<SCODE>cap = CocoCaptions(root = './data/train2014resized',\n annFile = './data/annotations/captions_train2014.json',\n vocab = vocab,\n transform=transform,\n target_transform=transforms.ToTensor())\n\ndata_loader = torch.utils.data.DataLoader(\n cap, batch_size=16, shuffle=True, num_workers=2)\n\nfor i, (images, input_seqs, target_seqs, masks) in enumerate(data_loader):\n # images: a tensor of shape (batch_size, 3, 256, 256).\n # input_seqs, target_seqs, masks: tensors of shape (batch_size, padded_length).<ECODE>", "isAccepted": false, "likes": 1, "poster": "yunjey" }, { "contents": "<SCODE>import torch.nn.utils.rnn as rnn_utils \nlstm = nn.LSTM(in_size, hidden_size, bidirectional, num_layers)\npacked = rnn_utils.pack_padded_sequence(padded, lengths)\npacked_out, packed_hidden = lstm(packed)\nunpacked, unpacked_len = rnn_utils.pad_packed_sequence(packed_out)<ECODE>", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Could you let me know how to deal with the following example case? Assume that I have an input, where maximum sequence length is 5, minibatch size is 4, and there are 3 possible labels. idxs has effective lengths of sequences in the input. <SCODE>input = Variable(torch.randn(5,4,3))\nidxs = [5,3,3,2]\ntarget = Variable(torch.LongTensor(5,4))\n# assume the target labels are assigned corresponding to the input.\n<ECODE> Thanks!", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pranav" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Then the cross_entropy loss could calculated with <SCODE>def nll_loss(outputs, targets):\n return torch.gather(outputs, 1, targets.view(-1,1))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "Can you help me understand this torch.gather I have used the same tensors with similar shape but this line torch.gather(outputs, 1, targets.view(-1,1)) gives me error.", "isAccepted": false, "likes": null, "poster": "vijendra_rana" }, { "contents": "Thanks for sharing the code. 
I test the nll_loss function and get: RuntimeError: Input tensor must have same dimensions as output tensor at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:424 It seems that the input and index arguments of gather function should have same dimensions.", "isAccepted": false, "likes": null, "poster": "volpato30" }, { "contents": "currently I use this way to compute masked loss for seq2seq model: <SCODE>'''\ntarget_sentences: [batch_size, max_length] int64\nbatch_target_len_array: [batch_size] int64, each element is the valid length of the corresponding target sentence. \n'''\nfor step in range(max_length):\n decoder_output, decoder_hidden = decoder(\n decoder_input, decoder_hidden, encoder_outputs)\n if np.sum(batch_target_len_array > step) == 0:\n break\n mask_vector = torch.from_numpy((batch_target_len_array > step).astype(np.int32)).byte()\n index_vector = Variable(torch.masked_select(torch.arange(0, batch_size), mask_vector).long()).cuda()\n valid_output = torch.index_select(decoder_output, 0, index_vector)\n valid_target = torch.index_select(target_sentences[:, step], 0, index_vector)\n\n step_loss = criterion(valid_output, valid_target)\n loss += step_loss\n decoder_input = target_sentences[:, step] # Teacher forcing<ECODE>", "isAccepted": false, "likes": 4, "poster": "volpato30" }, { "contents": "<SCODE>loss_function = nn.CrossEntropyLoss(ignore_index=0)\n<ECODE> since i have zero padded targets.", "isAccepted": false, "likes": 10, "poster": "cbaziotis" }, { "contents": "I have tried, doesn’t work in my case. Could you post your code with a bit more details?", "isAccepted": false, "likes": null, "poster": "Madotto_Andrea" }, { "contents": "It is clear. I’ll try! The newest version 0.3 updated the cost function with a new Parameter “reduce”, and it’s easy to mask the loss", "isAccepted": false, "likes": null, "poster": "Littleye" }, { "contents": "Can you explain how to mask the loss now?", "isAccepted": false, "likes": null, "poster": "himat" }, { "contents": "With the parameter “reduce”, we can get the loss per batch element, but how can we use the mask on it? For example, if I have a minibatch whose valid sequence lengths is [3,1,2], and with “reduce” we can get three loss values [L1, L2, L3]. But we need to mask the last two values in calculating L2 and last one value in calculating L3. How could it be achieved? ------------------------------edited---------------------------- Oh I see, we only need to use ignore_index=0 parameter in loss(), but don’t need the “reduce” parameter to achieve this. Is it right?", "isAccepted": false, "likes": null, "poster": "jpzhou" } ]
false
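Pulling the thread's suggestions together, here is a minimal sketch of the two masking strategies discussed above. The shapes, `vocab_size`, and the choice of 0 as the padding index are illustrative, and `reduction='none'` is the newer spelling of the 0.3-era `reduce=False` parameter mentioned later in the thread:

<SCODE>
import torch
import torch.nn as nn

# Assumed setup: unnormalized decoder scores and padded integer targets.
batch_size, padded_length, vocab_size = 4, 7, 100
outputs = torch.randn(batch_size, padded_length, vocab_size)
targets = torch.randint(1, vocab_size, (batch_size, padded_length))  # real tokens use 1..vocab_size-1
targets[:, 5:] = 0  # pretend the last two positions are padding

# Option 1: let the loss skip padded positions directly.
criterion = nn.CrossEntropyLoss(ignore_index=0)
loss1 = criterion(outputs.view(-1, vocab_size), targets.view(-1))

# Option 2: compute a per-token loss and apply the mask by hand.
per_token = nn.CrossEntropyLoss(reduction='none')(
    outputs.view(-1, vocab_size), targets.view(-1))
mask = (targets.view(-1) != 0).float()
loss2 = (per_token * mask).sum() / mask.sum()  # average over non-padded tokens only
</ECODE>

Both options give the same scalar here because padding is excluded from the average in each case.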
CUDA error with Pascal P100?
null
[ { "contents": "I am running the same code on two different machines, one with Titan X and CUDA 7.5 and the other with Pascal 100 and CUDA 8.0 and I am seeing this error with the Pascal setup: <SCODE> File \"./python2.7/site-packages/torch/autograd/variable.py\", line 145, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"./python2.7/site-packages/torch/nn/_functions/linear.py\", line 22, in backward\n grad_input = torch.mm(grad_output, weight)\nRuntimeError: cublas runtime error : the GPU program failed to execute at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/THCBlas.cu:246\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Lifeng_Jin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I was trying to install from source on Mac and this error happens when compiling: <SCODE>CMake Warning (dev):\n Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run \"cmake\n --help-policy CMP0042\" for policy details. Use the cmake_policy command to\n set the policy and suppress this warning.\n\n MACOSX_RPATH is not specified for the following targets:\n\n THPP\n\nThis warning is for project developers. Use -Wno-dev to suppress it.\n\n-- Generating done\n-- Build files have been written to: /whatever/pytorch/torch/lib/build/THPP\n[ 11%] Linking CXX shared library libTHPP.dylib\nUndefined symbols for architecture x86_64:\n \"_THCSByteTensor_cadd\", referenced from:\n thpp::THCSTensor<unsigned char>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSByteTensor_cmul\", referenced from:\n thpp::THCSTensor<unsigned char>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSByteTensor_free\", referenced from:\n thpp::THCSTensor<unsigned char>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<unsigned char>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<unsigned char>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<unsigned char>::free() in THCSTensor.cpp.o\n \"_THCSByteTensor_new\", referenced from:\n thpp::THCSTensor<unsigned char>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<unsigned char>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<unsigned char>::newTensor() const in THCSTensor.cpp.o\n \"_THCSByteTensor_newClone\", referenced from:\n thpp::THCSTensor<unsigned char>::clone() const in THCSTensor.cpp.o\n \"_THCSByteTensor_newContiguous\", referenced from:\n thpp::THCSTensor<unsigned char>::contiguous() const in THCSTensor.cpp.o\n \"_THCSByteTensor_retain\", referenced from:\n thpp::THCSTensor<unsigned char>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<unsigned char>::retain() in THCSTensor.cpp.o\n \"_THCSCharTensor_cadd\", referenced from:\n thpp::THCSTensor<char>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSCharTensor_cmul\", referenced from:\n thpp::THCSTensor<char>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSCharTensor_free\", referenced from:\n thpp::THCSTensor<char>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<char>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<char>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<char>::free() in THCSTensor.cpp.o\n \"_THCSCharTensor_new\", referenced from:\n thpp::THCSTensor<char>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<char>::THCSTensor(THCState*) in THCSTensor.cpp.o\n 
thpp::THCSTensor<char>::newTensor() const in THCSTensor.cpp.o\n \"_THCSCharTensor_newClone\", referenced from:\n thpp::THCSTensor<char>::clone() const in THCSTensor.cpp.o\n \"_THCSCharTensor_newContiguous\", referenced from:\n thpp::THCSTensor<char>::contiguous() const in THCSTensor.cpp.o\n \"_THCSCharTensor_retain\", referenced from:\n thpp::THCSTensor<char>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<char>::retain() in THCSTensor.cpp.o\n \"_THCSDoubleTensor_cadd\", referenced from:\n thpp::THCSTensor<double>::cadd(thpp::Tensor const&, double, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSDoubleTensor_cmul\", referenced from:\n thpp::THCSTensor<double>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSDoubleTensor_free\", referenced from:\n thpp::THCSTensor<double>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<double>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<double>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<double>::free() in THCSTensor.cpp.o\n \"_THCSDoubleTensor_new\", referenced from:\n thpp::THCSTensor<double>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<double>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<double>::newTensor() const in THCSTensor.cpp.o\n \"_THCSDoubleTensor_newClone\", referenced from:\n thpp::THCSTensor<double>::clone() const in THCSTensor.cpp.o\n \"_THCSDoubleTensor_newContiguous\", referenced from:\n thpp::THCSTensor<double>::contiguous() const in THCSTensor.cpp.o\n \"_THCSDoubleTensor_retain\", referenced from:\n thpp::THCSTensor<double>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<double>::retain() in THCSTensor.cpp.o\n \"_THCSFloatTensor_cadd\", referenced from:\n thpp::THCSTensor<float>::cadd(thpp::Tensor const&, double, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSFloatTensor_cmul\", referenced from:\n thpp::THCSTensor<float>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSFloatTensor_free\", referenced from:\n thpp::THCSTensor<float>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<float>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<float>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<float>::free() in THCSTensor.cpp.o\n \"_THCSFloatTensor_new\", referenced from:\n thpp::THCSTensor<float>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<float>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<float>::newTensor() const in THCSTensor.cpp.o\n \"_THCSFloatTensor_newClone\", referenced from:\n thpp::THCSTensor<float>::clone() const in THCSTensor.cpp.o\n \"_THCSFloatTensor_newContiguous\", referenced from:\n thpp::THCSTensor<float>::contiguous() const in THCSTensor.cpp.o\n \"_THCSFloatTensor_retain\", referenced from:\n thpp::THCSTensor<float>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<float>::retain() in THCSTensor.cpp.o\n \"_THCSHalfTensor_cadd\", referenced from:\n thpp::THCSTensor<__half>::cadd(thpp::Tensor const&, double, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSHalfTensor_cmul\", referenced from:\n thpp::THCSTensor<__half>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSHalfTensor_free\", referenced from:\n thpp::THCSTensor<__half>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<__half>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<__half>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<__half>::free() in THCSTensor.cpp.o\n \"_THCSHalfTensor_new\", referenced from:\n 
thpp::THCSTensor<__half>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<__half>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<__half>::newTensor() const in THCSTensor.cpp.o\n \"_THCSHalfTensor_newClone\", referenced from:\n thpp::THCSTensor<__half>::clone() const in THCSTensor.cpp.o\n \"_THCSHalfTensor_newContiguous\", referenced from:\n thpp::THCSTensor<__half>::contiguous() const in THCSTensor.cpp.o\n \"_THCSHalfTensor_retain\", referenced from:\n thpp::THCSTensor<__half>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<__half>::retain() in THCSTensor.cpp.o\n \"_THCSIntTensor_cadd\", referenced from:\n thpp::THCSTensor<int>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSIntTensor_cmul\", referenced from:\n thpp::THCSTensor<int>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSIntTensor_free\", referenced from:\n thpp::THCSTensor<int>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<int>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<int>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<int>::free() in THCSTensor.cpp.o\n \"_THCSIntTensor_new\", referenced from:\n thpp::THCSTensor<int>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<int>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<int>::newTensor() const in THCSTensor.cpp.o\n \"_THCSIntTensor_newClone\", referenced from:\n thpp::THCSTensor<int>::clone() const in THCSTensor.cpp.o\n \"_THCSIntTensor_newContiguous\", referenced from:\n thpp::THCSTensor<int>::contiguous() const in THCSTensor.cpp.o\n \"_THCSIntTensor_retain\", referenced from:\n thpp::THCSTensor<int>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<int>::retain() in THCSTensor.cpp.o\n \"_THCSLongTensor_cadd\", referenced from:\n thpp::THCSTensor<long>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSLongTensor_cmul\", referenced from:\n thpp::THCSTensor<long>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSLongTensor_free\", referenced from:\n thpp::THCSTensor<long>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<long>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<long>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<long>::free() in THCSTensor.cpp.o\n \"_THCSLongTensor_new\", referenced from:\n thpp::THCSTensor<long>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<long>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<long>::newTensor() const in THCSTensor.cpp.o\n \"_THCSLongTensor_newClone\", referenced from:\n thpp::THCSTensor<long>::clone() const in THCSTensor.cpp.o\n \"_THCSLongTensor_newContiguous\", referenced from:\n thpp::THCSTensor<long>::contiguous() const in THCSTensor.cpp.o\n \"_THCSLongTensor_retain\", referenced from:\n thpp::THCSTensor<long>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<long>::retain() in THCSTensor.cpp.o\n \"_THCSShortTensor_cadd\", referenced from:\n thpp::THCSTensor<short>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSShortTensor_cmul\", referenced from:\n thpp::THCSTensor<short>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o\n \"_THCSShortTensor_free\", referenced from:\n thpp::THCSTensor<short>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<short>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<short>::~THCSTensor() in THCSTensor.cpp.o\n thpp::THCSTensor<short>::free() in 
THCSTensor.cpp.o\n \"_THCSShortTensor_new\", referenced from:\n thpp::THCSTensor<short>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<short>::THCSTensor(THCState*) in THCSTensor.cpp.o\n thpp::THCSTensor<short>::newTensor() const in THCSTensor.cpp.o\n \"_THCSShortTensor_newClone\", referenced from:\n thpp::THCSTensor<short>::clone() const in THCSTensor.cpp.o\n \"_THCSShortTensor_newContiguous\", referenced from:\n thpp::THCSTensor<short>::contiguous() const in THCSTensor.cpp.o\n \"_THCSShortTensor_retain\", referenced from:\n thpp::THCSTensor<short>::clone_shallow() in THCSTensor.cpp.o\n thpp::THCSTensor<short>::retain() in THCSTensor.cpp.o\nld: symbol(s) not found for architecture x86_64\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\nmake[2]: *** [libTHPP.1.dylib] Error 1\nmake[1]: *** [CMakeFiles/THPP.dir/all] Error 2\nmake: *** [all] Error 2\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Lifeng_Jin" } ]
false
Is it possible to check each output of a layer in an nn.Sequential or nn.Module?
null
[ { "contents": "I want to check the output value of each layer I build. but I didn’t find any element in layers. Is there any way to do this? For example, I have a model: <SCODE>model = nn.Sequential(nn.Linear(128, 64), nn.Linear(64, 32), nn.Linear(32, 1))\nav = Variable(torch.rand(7, 128), requires_grad = True)\nmodel.forward(av)\n<ECODE> I want to see the output of each linear layer.", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" } ]
false
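The usual answer to this question is forward hooks; a minimal sketch is below. The model mirrors the one in the question, and the dictionary/name handling is illustrative:

<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.Linear(64, 32), nn.Linear(32, 1))
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # keep a copy of this layer's output
    return hook

for name, module in model.named_children():
    module.register_forward_hook(make_hook(name))

x = torch.rand(7, 128)
y = model(x)            # hooks fire during this forward pass
for name, out in activations.items():
    print(name, out.shape)
</ECODE>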
Missing torch.nn.utils.clip_grad_norm in Anaconda-based PyTorch installation!
null
[ { "contents": "So, I would like to know how can I repair this error? I think somehow the installation procedure, above mentioned command, is a little incomplete! Thanks", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "Below is an example for installing PyTorch from source. <SCODE>$ git clone https://github.com/pytorch/pytorch\n$ export CMAKE_PREFIX_PATH=/home/yunjey/anaconda3 # your anaconda path\n$ conda install numpy mkl setuptools cmake gcc cffi\n$ conda install -c soumith magma-cuda80\n$ export MACOSX_DEPLOYMENT_TARGET=10.9 # if OSX\n$ pip install -r requirements.txt\n$ python setup.py install<ECODE>", "isAccepted": false, "likes": 1, "poster": "yunjey" }, { "contents": "Thanks, problem was solved.", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "We’ll be publishing new binaries soon and they will include that function.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "We’re working on fixing some issues with compilers over optimizing the binaries with AVX2 vector instructions, that are not available in lots of older CPUs. We’re going to publish them once we’re done with that.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
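For reference, once the installation does include the function, a typical usage is to clip between `backward()` and `step()`. A hedged sketch (newer releases rename it `clip_grad_norm_`, and `max_norm=1.0` plus the toy model are illustrative):

<SCODE>
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, target = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(x), target)

optimizer.zero_grad()
loss.backward()
clip_grad_norm_(model.parameters(), max_norm=1.0)  # rescales gradients in place
optimizer.step()
</ECODE>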
[resolved] Is there an MNIST dataset in torchvision.datasets?
null
[ { "contents": "<SCODE>from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"./mnist/data/\")\nx_train = mnist.train.images # numpy array \ny_train = mnist.train.labels\nx_test = mnist.test.images\ny_test = mnist.test.labels\n<ECODE> Is there any simple way to handle this in PyTorch?", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "Yes it already there - see here and here, The code looks something like this, <SCODE>train_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=args.batch_size, shuffle=True, **kwargs)\ntest_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=False, transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=args.batch_size, shuffle=True, **kwargs)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "How do you subset the MNIST training data? It’s 60,000 images, how can you reduce it to say 2000? Here’s the code <SCODE>>>> from torchvision import datasets, transforms\n\n\n>>> \n>>> \n>>> train_all_mnist = datasets.MNIST('../data', train=True, download=True,\n... transform=transforms.Compose([\n... transforms.ToTensor(),\n... transforms.Normalize((0.1307,), (0.3081,))\n... ]))\nFiles already downloaded\n>>> train_all_mnist\n<torchvision.datasets.mnist.MNIST object at 0x7f89a150cfd0>\n<ECODE> It’s a bit ugly - anyone know a neater way to do this?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "what is your purpose of subsetting the training dataset?", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "I’m interested in Omniglot, which is like an inverse, MNIST, lots of classes, each with a small number of examples. Take a look, here By the way - thank you for your tutorials - they are very clear and helpful to learn from. Best regards, Ajay", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Spent a hour hacking together my own loader - but this looks better! Seems to be the easiest data set for experimenting with one-shot learning?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Whats the current best methodology for Omniglot? Who or what’s doing the best at the moment?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "I though you might like this, implementation of a GAN on Omniglot,", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Have you found a better way to do this?", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "Nope sorry - been totally snowed under the past couple of months - not had any time to work on it. If you’re referring to the alternative cost functions for GANs I don’t think they make much difference? 
Cheers, Aj", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "It includes Omniglot data downloading and iterating scripts along with all the models proposed in the paper (the nets are written and trained with Theano).", "isAccepted": false, "likes": 1, "poster": "pranav" } ]
false
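On the sub-question about shrinking the 60,000-image training set down to, say, 2000 examples: a hedged sketch using `torch.utils.data.Subset` (available in newer releases; a `SubsetRandomSampler` gives the same effect on older ones). The paths and batch size are illustrative:

<SCODE>
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
full_train = datasets.MNIST('../data', train=True, download=True,
                            transform=transform)

# Keep only the first 2000 examples.
small_train = torch.utils.data.Subset(full_train, range(2000))
loader = torch.utils.data.DataLoader(small_train, batch_size=64, shuffle=True)
print(len(small_train))  # 2000
</ECODE>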
Problem with PyTorch installation from source
null
[ { "contents": "The installation commands which I’ve used were:", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "It’s not a problem with anaconda but with the CUDA driver. It seems that some of the libraries are missing. Can you reinstall the driver?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I also have a compilation error when trying to install from source: <SCODE>Install the project...\n-- Install configuration: \"Release\"\n-- Installing: /home/alexis/src/pytorch/torch/lib/tmp_install/lib/libTHNN.so.1\n-- Installing: /home/alexis/src/pytorch/torch/lib/tmp_install/lib/libTHNN.so\n-- Up-to-date: /home/alexis/src/pytorch/torch/lib/tmp_install/include/THNN/THNN.h\n-- Up-to-date: /home/alexis/src/pytorch/torch/lib/tmp_install/include/THNN/generic/THNN.h\n-- Removing -DNDEBUG from compile flags\n-- TH_LIBRARIES: /home/alexis/src/pytorch/torch/lib/tmp_install/lib/libTH.so.1\n-- Compiling with MAGMA support\n-- MAGMA INCLUDE DIRECTORIES: /home/alexis/anaconda2/include\n-- MAGMA LIBRARIES: /home/alexis/anaconda2/lib/libmagma.a\n-- MAGMA V2 check: 1\n-- Autodetected CUDA architecture(s): 5.0\n-- got cuda version 8.0\n-- Found CUDA with FP16 support, compiling with torch.CudaHalfTensor\n-- CUDA_NVCC_FLAGS: -DTH_INDEX_BASE=0 -I/home/alexis/src/pytorch/torch/lib/tmp_install/include -I/home/alexis/src/pytorch/torch/lib/tmp_install/include/TH -I/home/alexis/src/pytorch/torch/lib/tmp_install/include/THC -I/home/alexis/src/pytorch/torch/lib/tmp_install/include/THPP;-gencode;arch=compute_50,code=sm_50;-DCUDA_HAS_FP16=1\n-- THC_SO_VERSION: 1\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/alexis/src/pytorch/torch/lib/build/THC\n[ 1%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorMath.cu.o\n[ 2%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorMathMagma.cu.o\n[ 3%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorRandom.cu.o\n[ 5%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorScatterGather.cu.o\nIn file included from /home/alexis/src/pytorch/torch/lib/THC/THCTensorMathMagma.cu:4:0:\n/home/alexis/src/pytorch/torch/lib/THC/THCTensorMathMagma.cuh:5:19: fatal error: magma.h: No such file or directory\n #include <magma.h>\n ^\ncompilation terminated.\nCMake Error at THC_generated_THCTensorMathMagma.cu.o.cmake:207 (message):\n Error generating\n /home/alexis/src/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMathMagma.cu.o\n\n\nCMakeFiles/THC.dir/build.make:3869: recipe for target 'CMakeFiles/THC.dir/THC_generated_THCTensorMathMagma.cu.o' failed\nmake[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorMathMagma.cu.o] Error 1\nmake[2]: *** Waiting for unfinished jobs....\n<ECODE> before I run: <SCODE>$ git clone https://github.com/pytorch/pytorch\n$ export CMAKE_PREFIX_PATH=/home/alexis/anaconda2\n$ conda install numpy mkl setuptools cmake gcc cffi\n$ conda install -c soumith magma-cuda80\n$ pip install -r 
requirements.txt\n$ python setup.py install\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ajdroid" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "I’ve experienced the very same problem. Grabbing the latest cuda-8.0 and reinstalling it solved the issue.", "isAccepted": false, "likes": null, "poster": "elanmart" }, { "contents": "I faced the same problem when installing Pytorch in a new computer today. I solve it by edit the ‘.bashrc’, please make sure the environment variables of CUDA is set accurately as:", "isAccepted": false, "likes": 3, "poster": "ywu36" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunpengli" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "WeiFoo" }, { "contents": "error running this command:", "isAccepted": false, "likes": null, "poster": "K23" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "SimonW" }, { "contents": "try installing bzip2-devel", "isAccepted": false, "likes": null, "poster": "SimonW" }, { "contents": "is there a way to install bzip2-devel without having root access? i’ve trying different things but no luck so far.", "isAccepted": false, "likes": null, "poster": "K23" } ]
false
Giving multiple parameters to an optimizer
null
[ { "contents": "How can i give multiple parameters to the optimizer? <SCODE>fc1 = nn.Linear(784, 500)\nfc2 = nn.Linear(500, 10)\noptimizer = torch.optim.SGD([fc1.parameters(), fc2.parameters()], lr=0.01) # This causes an error.\n<ECODE> In this case, for simplicity, i don’t want to use a class with nn.Module.", "isAccepted": false, "likes": 3, "poster": "yunjey" }, { "contents": "you have to concatenate python lists: <SCODE>params = list(fc1.parameters()) + list(fc2.parameters())\n\ntorch.optim.SGD(params, lr=0.01)\n<ECODE>", "isAccepted": false, "likes": 69, "poster": "smth" }, { "contents": "Thanks, it works well.", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "Zichun_Zhang" }, { "contents": "Dear Soumith, While executing your approach, it says: TypeError: add() received an invalid combination of arguments - got (list), but expected one of: (Tensor other, Number alpha) (Number other, Number alpha) Can you help me?) Is there something wrong?", "isAccepted": false, "likes": null, "poster": "MrPositron" }, { "contents": "Probably you set a bracket to the wrong place. You have to convert the parameters to a list separately and add the lists afterwards.", "isAccepted": false, "likes": null, "poster": "justusschock" }, { "contents": "[SOLVED] <SCODE>params = self.net.state_dict()\npas = list(params['net.0.weight']) + list(params['net.0.bias']) + list(params['net.3.weight'] + list(params['net.3.bias']) + list(params['net.6.weight']) + list(params['net.6.bias']))\nself.optimizer1 = optim.Adam(pas, lr = 0.01)\n<ECODE> Here is my code. I think everything is ok", "isAccepted": false, "likes": null, "poster": "MrPositron" }, { "contents": "<SCODE>import itertools\n\nparams = [fc1.parameters(), fc2.parameters()]\n\ntorch.optim.SGD(itertools.chain(*params), lr=0.01)\n<ECODE>", "isAccepted": false, "likes": 12, "poster": "huangbiubiu" }, { "contents": "How is this different from just putting all of the tensors in a list directly as OP did?", "isAccepted": false, "likes": null, "poster": "riley" }, { "contents": "If your models are in a list or tuple somewhere already, you can also use a nested list comprehension: <SCODE>models = [nn.Linear(784, 500),\n nn.Linear(500, 10)\n ]\noptimizer = torch.optim.SGD((par for model in models for par in model.parameters()),\n lr=0.01)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "NeverConvex" }, { "contents": "OP is passing a list of lists.", "isAccepted": false, "likes": null, "poster": "Shubajit_Saha" }, { "contents": "Thank you! This helped me a lot!", "isAccepted": false, "likes": null, "poster": "ncuxomun" } ]
false
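Besides concatenating the parameter lists, the optimizer also accepts a list of dictionaries (parameter groups), which is handy when the two layers need different settings. A minimal sketch; the learning rates are illustrative:

<SCODE>
import torch
import torch.nn as nn

fc1 = nn.Linear(784, 500)
fc2 = nn.Linear(500, 10)

optimizer = torch.optim.SGD([
    {'params': fc1.parameters()},                 # uses the default lr below
    {'params': fc2.parameters(), 'lr': 0.001},    # per-group override
], lr=0.01)
</ECODE>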
Is there a pretrained CNN (e.g. ResNet) for CIFAR-10 or CIFAR-100?
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "yunjey" }, { "contents": "We dont offer pre-trained resnet with cifar. You might have to train one yourself.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Maybe NIN with CIFAR? :3", "isAccepted": false, "likes": null, "poster": "skrish13" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "skrish13" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "Oh okay! I’ll see if I can train it. Thanks again!", "isAccepted": false, "likes": null, "poster": "skrish13" }, { "contents": "Didn’t we have a repo of links to PyTorch implementation of various papers? (Or I’m confusing that with Chainer’s similar repo :S)", "isAccepted": false, "likes": 1, "poster": "skrish13" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "skrish13" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "prlz77" }, { "contents": "Sorry for the dumb question, but how do you load a .pytorch file. Is it the same extension as .pth? This doesn’t seem to work:", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "prlz77" }, { "contents": "What would be TheModelClass for your model?", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "@jekbradbury Did you manage to train/load the model successfully? I managed to train it, but I am having a sizes do not match error, when I am trying to load it. I am trying to load it with the following code: but getting:", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "karlTUM" }, { "contents": "Yes, I managed to load ResNets that I trained on CIFAR datasets. The code for that is: The parameters for the model and for the net you are loading should agree. For what is worth, the accuracy I got was: with these hyperparameters: Would be interesting to see what happens if I use some more advanced optimizer like Adam. Anyway, in case you don’t have time to train them, I can upload the models today, during the afternoon.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "prlz77" }, { "contents": "Hi, Then on which dataset all the models are pretrained?", "isAccepted": false, "likes": null, "poster": "raaj043" }, { "contents": "they’re all pre-trained on Imagenet-12", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thank you very much. Its helpfulu", "isAccepted": false, "likes": null, "poster": "raaj043" } ]
false
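On the "sizes do not match" error when loading a saved model: the point made above is that the architecture used at load time must be constructed with the same hyperparameters as the one used at save time. A hedged, generic sketch (the torchvision ResNet-18 and class counts stand in for whatever CIFAR model was actually trained):

<SCODE>
import torch
import torchvision.models as models

model = models.resnet18(num_classes=10)          # e.g. a net trained on CIFAR-10
torch.save(model.state_dict(), 'resnet18_cifar10.pth')

same_arch = models.resnet18(num_classes=10)      # identical constructor arguments
same_arch.load_state_dict(torch.load('resnet18_cifar10.pth'))

wrong_arch = models.resnet18(num_classes=100)    # different head size
# wrong_arch.load_state_dict(torch.load('resnet18_cifar10.pth'))  # would raise a size mismatch
</ECODE>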
Multiple parameter optimization on multiple GPUs
null
[ { "contents": "Hello, I am using pre-trained VGGNet-16 model where the layers skipping the FC part are wrapped in torch.nn.DataParallel. The optimizer I used is: <SCODE>optimizer = optim.SGD([{'params': model.pretrained_model[0][24].parameters()},\n {'params': model.pretrained_model[0][26].parameters()},\n {'params': model.pretrained_model[0][28].parameters()},\n {'params': model.regressor[0][1].parameters()},\n {'params': model.regressor[0][4].parameters()}], lr=0.001, momentum=0.9)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "srv902" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Following is the model summary: <SCODE>Sequential (\n (0): Sequential (\n (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU (inplace)\n (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): ReLU (inplace)\n (4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (6): ReLU (inplace)\n (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (8): ReLU (inplace)\n (9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (11): ReLU (inplace)\n (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (13): ReLU (inplace)\n (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (15): ReLU (inplace)\n (16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (18): ReLU (inplace)\n (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (20): ReLU (inplace)\n (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (22): ReLU (inplace)\n (23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (25): ReLU (inplace)\n (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (27): ReLU (inplace)\n (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (29): ReLU (inplace)\n (30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n )\n (1): Sequential (\n (0): Dropout (p = 0.5)\n (1): Linear (25088 -> 4096)\n (2): ReLU (inplace)\n (3): Dropout (p = 0.5)\n (4): Linear (4096 -> 4096)\n (5): ReLU (inplace)\n (6): Linear (4096 -> 1000)\n )\n)\n<ECODE> and the class for model creation: <SCODE> class MyModel(nn.Module):\n def __init__(self, pretrained_model):\n super(MyModel, self).__init__()\n self.pretrained_model = nn.Sequential(*list(pretrained_model.children())[:-1])\n self.pretrained_model = torch.nn.DataParallel(self.pretrained_model)\n self.regressor = nn.Sequential(net1)\n \n def forward(self, x):\n x = self.pretrained_model(x)\n x = x.view(-1,35840)\n x = self.regressor(x)\n x = x.view(-1,57,77)\n return x\n<ECODE>", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "model.pretrained_model[0][24].parameters() model.pretrained_model.module[0][24].parameters() it didn’t give any indexing error. Thank you.", "isAccepted": false, "likes": 1, "poster": "srv902" } ]
false
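A minimal sketch of the indexing fix above: `nn.DataParallel` stores the original network as its `.module` attribute, so layer indexing has to go through it. The toy convolutional stack below stands in for the wrapped VGG features, and the optimizer settings are illustrative:

<SCODE>
import torch
import torch.nn as nn

features = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
wrapped = nn.DataParallel(features)

# wrapped[0]               # fails: DataParallel itself is not indexable
conv0 = wrapped.module[0]  # first Conv2d of the wrapped Sequential

optimizer = torch.optim.SGD(
    [{'params': wrapped.module[0].parameters()},
     {'params': wrapped.module[2].parameters()}],
    lr=0.001, momentum=0.9)
</ECODE>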
Binary label: Tensor type?
null
[ { "contents": "Hello, It seems to be a minor issue. I’m training a toy siamese network. My label is either -1 or 1, so I was using a LongTensor to store the label. It seems to me that torch complains because target is supposed to be a FloatTensor? (torch.cuda.LongTensor tensor, int value)\ndidn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float) (torch.cuda.LongTensor tensor, torch.cuda.LongTensor other)\ndidn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float) (torch.cuda.LongTensor tensor, int value)\ndidn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float) (torch.cuda.LongTensor tensor, torch.cuda.LongTensor other)\ndidn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float) Thanks!", "isAccepted": false, "likes": null, "poster": "lg-zhang" }, { "contents": "What loss are you using? I think casting the target to a FloatTensor should fix it.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I assumed that a binary label should be an integer until it complains. Thanks!", "isAccepted": false, "likes": 1, "poster": "lg-zhang" } ]
false
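A tiny sketch of the cast being discussed, assuming a siamese-style margin loss where the -1/+1 targets take part in floating-point arithmetic (the distance values and `HingeEmbeddingLoss` are illustrative stand-ins):

<SCODE>
import torch
import torch.nn as nn

distance = torch.rand(8)                              # e.g. pairwise distances
labels = torch.tensor([1, -1, 1, 1, -1, -1, 1, -1])   # integer tensor by default

criterion = nn.HingeEmbeddingLoss(margin=1.0)
loss = criterion(distance, labels.float())            # cast the targets to float
</ECODE>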
How to create a new layer with cuDNN?
null
[ { "contents": "HI, I have changed the RNN module into another one RNN1. Everything is working fine if I disable cuDNN. But once cuDNN is on, there is an error. It seems that I have to define it in cuDNN file? How to do that?", "isAccepted": false, "likes": null, "poster": "Sunnydreamrain" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Sunnydreamrain" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Sunnydreamrain" }, { "contents": "Any updates on this?", "isAccepted": false, "likes": null, "poster": "MatthijsVk" } ]
false
Gradient checking
null
[ { "contents": "Is there any simple and common gradient checking method, when extending an autograd function ?", "isAccepted": false, "likes": 2, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "It’s been added only recently and we forgot about that. Can you send a PR please?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "Hello, <SCODE>class SolveTrianguler(Function):\n # sloves A * x = b\n def __init__(self, trans=0, lower=True):\n super(SolveTrianguler, self).__init__()\n # trans=1, transpose the matrix A.T * x = b\n self.trans = trans\n # lower=False, use data contained in the upper triangular, the default is lower\n self.lower = lower\n # self.needs_input_grad = (True, False)\n\n def forward(self, matrix, rhs):\n x = torch.from_numpy(\n solve_triangular(matrix.numpy(), rhs.numpy(),\n trans=self.trans, lower=self.lower))\n\n self.save_for_backward(matrix, x)\n return x\n\n def backward(self, grad_output):\n # grad_matrix = grad_rhs = None\n matrix, x = self.saved_tensors\n # formula from Giles 2008, 2.3.1\n return -matrix.inverse().t().mm(grad_output).mm(torch.t(x)), \\\n matrix.inverse().t().mm(grad_output)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "yinhao" } ]
false
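A hedged sketch of the gradcheck approach referenced above. `gradcheck` compares analytic gradients against finite differences, so double-precision inputs are used; the toy function `my_op` stands in for a custom autograd function's forward computation:

<SCODE>
import torch
from torch.autograd import gradcheck

def my_op(x, w):
    # stand-in for the custom operation being checked
    return (x @ w).tanh()

x = torch.randn(3, 4, dtype=torch.double, requires_grad=True)
w = torch.randn(4, 2, dtype=torch.double, requires_grad=True)

print(gradcheck(my_op, (x, w), eps=1e-6, atol=1e-4))  # True if gradients match
</ECODE>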
How to stop weight decay from affecting PReLU?
null
[ { "contents": "Hi, so I would like to use the PReLU function, however as can be seen here, it warns us that we should not use weight-decay with it. Now in my code, the weight-decay is being used, and it is called like this: <SCODE># Optimizer\noptimizer = optim.Adam(net.parameters(), \n lr = float(args.learningRate), \n weight_decay = float(args.l2_weightDecay)\n ) \n<ECODE> What I do not understand though, is how can I specify explicitly, that I do not want my PReLU layers to be affected by weight decay?", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "xuanzhangyang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "falmasri" } ]
false
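A minimal sketch of the per-parameter-group approach suggested above: collect the PReLU parameters into a group with `weight_decay=0` and everything else into a group with the usual decay. The toy network and hyperparameters are illustrative:

<SCODE>
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 20), nn.PReLU(), nn.Linear(20, 2))

prelu_params, other_params = [], []
for module in net.modules():
    if isinstance(module, nn.PReLU):
        prelu_params.extend(module.parameters())
    elif len(list(module.children())) == 0:      # leaf modules only, skip containers
        other_params.extend(module.parameters())

optimizer = torch.optim.Adam([
    {'params': other_params, 'weight_decay': 1e-4},
    {'params': prelu_params, 'weight_decay': 0.0},   # no decay on PReLU slopes
], lr=1e-3)
</ECODE>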
Selecting entries of a matrix based on a 1D tensor of indices
null
[ { "contents": "Hi, My question is: if I have a 2D tensor and a 1D LongTensor which stores a list of indices, then I want to select an entry from each row of the 2D tensor based on the 1D longTensor. How can I achieve that in PyTorch? For example, a = [[1,2,3],[4,5,6],[7,8,9]], b = [2,1,0], then I would like to get [3, 5, 7] Also, say if I torch.sum([3,5,7]) and then take the derivative of it, is it doable in the sense that the partial derivatives will be successfully calculated? Thanks a lot!", "isAccepted": false, "likes": 2, "poster": "bilibili" }, { "contents": "This should do it: <SCODE>x.gather(1, b.unsqueeze(1))\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "The values in b shouldn’t have to be unique, since they’re indexing into separate rows?", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Yes, of course. You’re right. They should be unique within a row, but it’s not a problem here.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks a lot for your answer! It works now.", "isAccepted": false, "likes": null, "poster": "bilibili" } ]
false
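A small runnable version of the gather answer, using the example values from the question; it also shows that gradients flow back through `gather`, answering the second part:

<SCODE>
import torch

a = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]], requires_grad=True)
b = torch.tensor([2, 1, 0])

picked = a.gather(1, b.unsqueeze(1)).squeeze(1)  # tensor([3., 5., 7.])
picked.sum().backward()                          # partial derivatives reach `a`
print(picked)
print(a.grad)                                    # ones at the selected positions
</ECODE>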
CPU usage issue when using PyTorch compiled from source
null
[ { "contents": "Hi, When I use the pytorch compiled from the src(pytorch-src), the CPU usage is much higher than the pytorch installed directly through conda(pytorch-conda). In addition, when using pytorch-src, gpu usage is lower than pytorch-conda. I just followed the instructions on github to install from src. I’m using python 2.7. Is there anything with my installation?", "isAccepted": false, "likes": 1, "poster": "yikang-li" }, { "contents": "What OS are you on? And by higher CPU usage do you mean that PyTorch is able to use more cores effectively and run faster?", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "I’m on Ubuntu 14.04. It uses more cores, but not faster nor efficient, I’m afraid. When I changed back to the pytorch installed through conda, it seems OK.", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "Maybe your pytorch that was compiled from source is not using cudnn? And maybe the compiled version is linking against OpenBLAS instead of MKL?", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Hi Massa, Thank you for your help. I’m not very familiar with Linux. If you don’t mind, could you tell me how to check them.", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "florin" } ]
false
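A hedged sketch of the two checks suggested above: confirm that the source build actually picked up cuDNN, and cap the CPU thread count if the BLAS backend of the source build is spinning up every core. The thread count of 4 is illustrative:

<SCODE>
import torch

print(torch.version.cuda)                # CUDA version the build was compiled against (None if CPU-only)
print(torch.backends.cudnn.version())    # None here suggests the build is missing cuDNN
print(torch.backends.cudnn.enabled)

torch.set_num_threads(4)                 # limit CPU parallelism
print(torch.get_num_threads())
</ECODE>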
NLP in PyTorch Tutorial
null
[ { "contents": "Hi, I have been working on a tutorial as a fast introduction to deep learning NLP with Pytorch. I feel that the current tutorials focus mostly on CV. There are some NLP examples out there, but I didn’t find anything for beginners (which I am looking for, since we are using Pytorch for an NLP class I am TA’ing). So I wrote a tutorial. It assumes NLP knowledge and familiarity with neural nets, but not with deep learning programming. I wanted to post the tutorial here to get feedback and also because I figure it may be helpful to some people. There are some fast explanations and a lot of code, with a few working examples (nothing state of the art, just things to get an idea). I still need to add a BiLSTM-CRF tagger for NER example, which will be the most complicated one. Here’s the link. If you look at it, I’m happy to get any feedback. I want it to be useful to the students in my class.", "isAccepted": false, "likes": 9, "poster": "rguthrie3" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "Hey, nice work and thanks for sharing! I have some minor suggestions: Also, I’d also recommend using tensor indexing to create BoW vectors, as that will likely be faster than iterating over a list in tensor constructor.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Hi, Thanks for the comments! I will update it when I get the chance.", "isAccepted": false, "likes": 1, "poster": "rguthrie3" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "denizs" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "zhutao" }, { "contents": "H zhutaoi, it is very nice for you to post your implementation. But I have some questions about it. I am not in the area of NLP so if I misunderstood the model or made a mistake, please point it out and I am happy to discuss with you. Thanks!", "isAccepted": false, "likes": null, "poster": "pengkaizhu" }, { "contents": "I think it should be along this line: <SCODE>class CBOW(nn.Module):\n \n def __init__(self, vocab_size, embedding_dim):\n super(CBOW, self).__init__()\n self.embedding = nn.Embedding(num_embeddings=vocab_size,\n embedding_dim=embedding_dim)\n \n self.linear = nn.Linear(in_features=embedding_dim,\n out_features=vocab_size)\n \n def forward(self, x):\n # embeds 4 context words into say, 10 dim,\n # then take their sum along the rows (dim=0) to get 1 by 10 vector\n embedding = self.embedding(x).sum(dim=0) \n out = self.linear(embedding)\n out = F.log_softmax(out)\n return out\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ehsanmok" }, { "contents": "<SCODE>word_to_ix = {word: i for i, word in enumerate(raw_text)}\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "pengkaizhu" }, { "contents": "Ah, right! of course, it makes more sense to enumerate vocab for later embedding.", "isAccepted": false, "likes": null, "poster": "ehsanmok" }, { "contents": "Thank you very much this is really good for starters. However, as I am new to PyTorch I am looking for any tutorial that can handle sparse operations as I am dealing with one hot vectors. Please guide if you know any such tutorials. Sincerely,", "isAccepted": false, "likes": null, "poster": "pjavia" }, { "contents": "I implemented the exercise of CBOW (my code is below). Please let me know if it looks okay. 
vocab = set(raw_text) class CBOWClassifier(nn.Module): <SCODE>def __init__ (self, vocab_size, embed_size, context_size):\n\tsuper(CBOWClassifier,self).__init__()\n\tself.embeddings = nn.Embedding(vocab_size, embed_size)\n\tself.linear1 = nn.Linear(embed_size, 128)\n\tself.linear2 = nn.Linear(128, vocab_size)\n\ndef forward(self, inputs):\n\tembed = self.embeddings(inputs)\n\tembed = torch.sum(embed, dim=0)\n\tout = self.linear1(embed)\n\tout = F.relu(out)\n\tout = self.linear2(out)\n\tlog_probs = F.log_softmax(out)\n\treturn log_probs\n<ECODE> VOCAB_SIZE = len(word_to_idx) <SCODE>\tcontext_idx = [word_to_idx[w] for w in context]\n\tcontext_var = autograd.Variable(torch.LongTensor(context_idx))\n\tmodel.zero_grad()\n\tlog_probs = model(context_var)\n\ttarget_idx = word_to_idx[target]\n\tloss = loss_function(log_probs, autograd.Variable(torch.LongTensor([target_idx])))\n\tloss.backward()\n\toptimizer.step()\n\ttotal_loss = total_loss + loss.data\nlosses.append(total_loss)\n<ECODE> print(losses)", "isAccepted": false, "likes": null, "poster": "sanket" }, { "contents": "Hello, I also implemented the CBOW model as follows: Loss is decresing but how much epoch is needed to get the output for the CBOW exercise?", "isAccepted": false, "likes": null, "poster": "emirceyani" }, { "contents": "In other words, why wrap the Variable around with an autograd? Because type(hello_embed) for the two lines produce the same result. (<class ‘torch.autograd.variable.Variable’>_ Thanks!", "isAccepted": false, "likes": null, "poster": "yuqli" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Andy_Markman" }, { "contents": "Anyone see issues with my implementation? <SCODE>EMBEDDING_DIM = 10\n\nraw_text = \"\"\"We are about to study the idea of a computational process.\nComputational processes are abstract beings that inhabit computers.\nAs they evolve, processes manipulate other abstract things called data.\nThe evolution of a process is directed by a pattern of rules\ncalled a program. People create programs to direct processes. In effect,\nwe conjure the spirits of the computer with our spells.\"\"\".split()\n\n# By deriving a set from `raw_text`, we deduplicate the array\nvocab = set(raw_text)\nvocab_size = len(vocab)\n\nword_to_ix = {word: i for i, word in enumerate(vocab)}\ndata = []\nfor i in range(2, len(raw_text) - 2):\n context = [raw_text[i - 2], raw_text[i - 1],\n raw_text[i + 1], raw_text[i + 2]]\n target = raw_text[i]\n data.append((context, target))\nprint(data[:5])\n\n\nclass CBOW(nn.Module):\n\n def __init__(self, vocab_size, embedding_dim):\n super(CBOW, self).__init__()\n self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n self.linear1 = nn.Linear(embedding_dim, 128)\n self.linear2 = nn.Linear(128, vocab_size) \n \n\n def forward(self, inputs): \n embeds = self.embeddings(inputs).sum(0).view((1,-1))\n out = self.linear1(F.relu(embeds))\n out = self.linear2(out)\n log_probs = F.log_softmax(out, dim=1)\n return log_probs\n \nlosses = []\nloss_function = nn.NLLLoss()\nmodel = CBOW(len(vocab), EMBEDDING_DIM)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\n# create your model and train. here are some functions to help you make\n# the data ready for use by your module\n\n\ndef make_context_vector(context, word_to_ix):\n idxs = [word_to_ix[w] for w in context]\n return torch.tensor(idxs, dtype=torch.long)\n\n\nfor epoch in range(10): # train a bit\n total_loss = torch.Tensor([0])\n for context, target in data:\n\n # Step 1. 
Prepare the inputs to be passed to the model (i.e, turn the words\n # into integer indices and wrap them in variables)\n context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long)\n\n \n # Step 2. Recall that torch *accumulates* gradients. Before passing in a\n # new instance, you need to zero out the gradients from the old\n # instance\n model.zero_grad()\n\n # Step 3. Run the forward pass, getting log probabilities over next\n # words\n log_probs = model(context_idxs)\n\n # Step 4. Compute your loss function. (Again, Torch wants the target\n # word wrapped in a variable)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n\n # Step 5. Do the backward pass and update the gradient\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\n\n# keep training\nepoch_count = 10\nprint(\"Training until loss is less than 1..\")\nwhile losses[-1] >= 1: # go until seriously overfitting :)\n total_loss = torch.Tensor([0])\n for context, target in data:\n\n # Step 1. Prepare the inputs to be passed to the model (i.e, turn the words\n # into integer indices and wrap them in variables)\n context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long)\n\n \n # Step 2. Recall that torch *accumulates* gradients. Before passing in a\n # new instance, you need to zero out the gradients from the old\n # instance\n model.zero_grad()\n\n # Step 3. Run the forward pass, getting log probabilities over next\n # words\n log_probs = model(context_idxs)\n\n # Step 4. Compute your loss function. (Again, Torch wants the target\n # word wrapped in a variable)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n\n # Step 5. Do the backward pass and update the gradient\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\n epoch_count += 1\n \nprint(\"Final loss of %0.4f in %d epochs\" % (float(losses[-1]), epoch_count))\n\n# Test\ncorrect = 0\nfor context, target in data:\n context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long) \n log_probs = model(context_idxs)\n _, ix = torch.max(log_probs, 1)\n prediction = next(key for key, value in word_to_ix.items() if value == int(ix))\n correct += target == prediction\n \naccuracy = correct / len(data)\nprint(\"Average accuracy:\", accuracy)\n<ECODE> <SCODE>Training util loss is less than 1..\nFinal loss of 0.9996 in 1436 epochs\nAverage accuracy: 1.0\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "austinmw" } ]
false
PyTorch Gradients
null
[ { "contents": "Normally when we’re doing backprop we would do the following: <SCODE>loss.backward() # This calculates the gradients\noptimizer.step() # This updates the net\n<ECODE>", "isAccepted": true, "likes": 1, "poster": "ritchieng" }, { "contents": "Hi, If you want to accumulate gradients from multiple backwards, you can just backward multiple times without resetting the gradients: <SCODE>optimizer.zero_grad()\n\nfor i in range(minibatch):\n loss = model(batch_data[i])\n loss.backward()\n\noptimizer.step()\n<ECODE>", "isAccepted": true, "likes": 9, "poster": "albanD" }, { "contents": "Not that loss functions average over the batch size, so if you do multiple backprops you might need to average over the number of the for loop iterations.", "isAccepted": true, "likes": 4, "poster": "apaszke" }, { "contents": "", "isAccepted": true, "likes": 3, "poster": "ritchieng" }, { "contents": "Yes. Divide it by the number of iterations in the for loop.", "isAccepted": true, "likes": 1, "poster": "apaszke" }, { "contents": "Is there a performance penalty to running backward() multiple times vs. just using a bigger batch (in situations where its an option)?", "isAccepted": true, "likes": null, "poster": "Veril" }, { "contents": "Yes, it’s always going to be slower, but it’s a tradeoff between performance and memory usage. Try to do as few iterations as you can (you can split each batch into smaller sub-batches, so that they nearly fill up the memory).", "isAccepted": true, "likes": 1, "poster": "apaszke" }, { "contents": "But it doesn’t affect the performance (accuracy) of the model using the accumulated gradient method right?", "isAccepted": true, "likes": null, "poster": "Zijun_Wei" }, { "contents": "<SCODE>num_epoch = 10\nreal_batchsize = 100 # I want to update weight every `real_batchsize`\nfor epoch in range(num_epoch):\n total_loss = Variable(torch.zeros(1).cuda(), requires_grad=True)\n for batch_idx, (data, target) in enumerate(train_loader):\n\n data, target = Variable(data.cuda()), Variable(target.cuda())\n output = net(data)\n\n total_loss = total_loss + loss\n\n if batch_idx % real_batchsize == 0:\n\n ave_loss = total_loss/real_batchsize\n ave_loss.backward()\n optimizer.step()\n \n total_loss.data.zero_()\n optimizer.zero_grad()\n\n<ECODE> The above code will produce an error: <SCODE> total_loss.data.zero_()\n<ECODE> by <SCODE>total_loss = Variable(torch.zeros(1).cuda(), requires_grad=True)\n<ECODE> Now it seems to work. But I am 100% sure if I have done it right. Can you give any advice on how to do it properly?", "isAccepted": true, "likes": null, "poster": "jdhao" }, { "contents": "I think a simpler way to do this would be: <SCODE>num_epoch = 10\nreal_batchsize = 100 # I want to update weight every `real_batchsize`\nfor epoch in range(num_epoch):\n total_loss = 0\n for batch_idx, (data, target) in enumerate(train_loader):\n\n data, target = Variable(data.cuda()), Variable(target.cuda())\n output = net(data)\n loss = crit(output, target)\n total_loss = total_loss + loss\n\n if batch_idx % real_batchsize == 0:\n\n ave_loss = total_loss/real_batchsize\n optimizer.zero_grad()\n ave_loss.backward()\n optimizer.step()\n total_loss = 0\n<ECODE>", "isAccepted": true, "likes": 4, "poster": "albanD" }, { "contents": "Can we directly add an int value with torch Variable? 
Will the average loss and backward gradient calculation be on GPU?", "isAccepted": true, "likes": null, "poster": "jdhao" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "albanD" }, { "contents": "Just a note (which you may be aware of): When you call ave_loss.backward() you propagate errors with respect to your (correct) loss, but these errors are functions of what the activations are when the .backward method was called. Since you’ve thrown away all but the last 10 samples, you are making the assumption that the first 90 samples were the same as the last 10.", "isAccepted": true, "likes": 3, "poster": "fbcotter" }, { "contents": "<SCODE>x = tensor(x0,requires_grad=True)\nloss = 0\nfor i in range(inputs.numel()): # For my apps, it's between 5 and 50.\n\trec = f(x,i)\n\tloss += loss_func(inputs[i], rec)\nloss.backward()\ng = x.grad\n<ECODE> <SCODE>x = tensor(x0, requires_grad=True)\nloss = 0\nfor i in range(inputs.numel()):\n\trec = f(x,i)\n\tloss += loss_func(inputs[i], rec)\n\tloss.backward()\ng = x.grad\n<ECODE> <SCODE>x = tensor(x0, requires_grad=True, retain_graph=True)\nloss = 0\nfor i in range(inputs.numel()):\n\trec = f(x,i)\n\tloss += loss_func(inputs[i], rec)\n\tloss.backward()\ng = x.grad\n<ECODE>", "isAccepted": true, "likes": null, "poster": "matthieuheitz" }, { "contents": "Hi,", "isAccepted": true, "likes": null, "poster": "albanD" }, { "contents": "<SCODE>state_dict = model.state_dict()[key]\n<ECODE> Thanks in advance", "isAccepted": true, "likes": null, "poster": "MariosOreo" }, { "contents": "optimizer.state_dict() is dependant on the optimizer itself. It will contain whatever is needed for this optimizer to continue working as if it was not stopped (saving things like momentum terms or statistics).", "isAccepted": true, "likes": 1, "poster": "albanD" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "matthieuheitz" }, { "contents": "Yes it will retain them until you actually destroy the computational graph. This will increase the peak memory usage during the backward.", "isAccepted": true, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "karl7" } ]
true
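A compact sketch that combines the points made in this thread: call `backward()` on each sub-batch, divide each sub-batch loss by the number of accumulation steps so the accumulated gradient matches one large batch, and step once at the end. The model, criterion, and synthetic data are stand-ins:

<SCODE>
import torch
import torch.nn as nn

model = nn.Linear(20, 3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

accum_steps = 4
sub_batches = [(torch.randn(8, 20), torch.randint(0, 3, (8,)))
               for _ in range(accum_steps)]

optimizer.zero_grad()
for data, target in sub_batches:
    loss = criterion(model(data), target) / accum_steps  # average over the accumulation steps
    loss.backward()                                      # gradients accumulate in .grad
optimizer.step()
</ECODE>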
How can I convert a model’s parameter data type to Double?
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "albanD" }, { "contents": "Thanks very much! It’s very simple.", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "<SCODE>X_train = torch.FloatTensor(X_train)\n<ECODE> instead", "isAccepted": false, "likes": null, "poster": "Marwan_Elghitany" } ]
false
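A tiny sketch of the `.double()` answer above: the conversion applies to all parameters and buffers of the module, and the inputs must then use the matching dtype:

<SCODE>
import torch
import torch.nn as nn

model = nn.Linear(4, 2).double()           # parameters become float64
x = torch.rand(3, 4, dtype=torch.double)   # inputs must match the parameter dtype
print(model(x).dtype)                      # torch.float64
</ECODE>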
Example DCGAN: when updating netD, noise can be volatile
null
[ { "contents": "<SCODE>noise.data.resize_(batch_size, nz, 1, 1)\nnoise.data.normal_(0, 1)\nfake = netG(noise)\n<ECODE> I think it would be better if we change it to: <SCODE>noise = Variable(torch.Tensor(batch_size, nz, 1, 1).normal_(0, 1),volatile = True)\nfake = Variable(netG(noise).data)\n<ECODE> because when training netD, we won’t need the buffer and gradient of netG. this may acclerate training step and reduce memory usage.[details=Summary]This text will be hidden[/details]", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "You are correct, changing it like this is better. Even better, you can do: <SCODE>fake = netG(noise)\nfake.detach()\n<ECODE> If you are interested, please send a pull request to fix it, I will merge.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "detach according to the docs: What does it mean for the variable to be detached and how does it affect performance?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Right, but it will also copy the volatile attribute of its predecessor, which means that you must either recast the variable anyway or not use volatile for the generator. So does detach serve a purpose here? I assume volatile in the generator is good, or why else have that flag option at all?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "Because you don’t need to use volatile if you’re going to use detach later. It will be nearly equivalent, with volatile possibly being a bit faster and more memory efficient.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>-------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-25-5b95d5373ece> in <module>()\n 19 fake_pic = netg(noise_).detach()\n 20 output2 = netd(fake_pic)\n---> 21 output2.backward(mone) #change for wgan\n 22 D_x2 = output2.data.mean()\n 23 optimizerD.step()\n\n/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)\n 149 \"\"\"\n 150 if self.volatile:\n--> 151 raise RuntimeError('calling backward on a volatile variable')\n 152 if gradient is None and self.requires_grad:\n 153 if self.data.numel() != 1:\n\nRuntimeError: calling backward on a volatile variable\n<ECODE> <SCODE>> /usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py(151)backward()\n 149 \"\"\"\n 150 if self.volatile:\n--> 151 raise RuntimeError('calling backward on a volatile variable')\n 152 if gradient is None and self.requires_grad:\n 153 if self.data.numel() != 1:\n\nipdb> u\n> <ipython-input-27-5b95d5373ece>(21)<module>()\n 19 fake_pic = netg(noise_).detach()\n 20 output2 = netd(fake_pic)\n---> 21 output2.backward(mone) #for wgan\n 22 D_x2 = output2.data.mean()\n 23 optimizerD.step()\n\nipdb> output2.volatile\nTrue\nipdb> fake_pic.volatile\nTrue\n<ECODE>", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "Yes, volatile will be propagated even if you use detach(). I meant that you could remove the volatile flag and use detach with Variables that don’t require grad.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
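A minimal sketch of the detach() pattern recommended above, with throwaway stand-in networks rather than the DCGAN models from the example: detaching the generator output cuts the graph, so the discriminator update never backpropagates into netG. Note that the volatile flag discussed here has since been removed from PyTorch in favor of torch.no_grad().
<SCODE>
import torch
import torch.nn as nn

netG = nn.Sequential(nn.ConvTranspose2d(100, 3, 4))      # stand-in generator
netD = nn.Sequential(nn.Conv2d(3, 1, 4), nn.Flatten())   # stand-in discriminator
criterion = nn.BCEWithLogitsLoss()

noise = torch.randn(8, 100, 1, 1)
fake = netG(noise).detach()              # cut the graph: no gradients flow back into netG
out = netD(fake)
loss = criterion(out, torch.zeros_like(out))
loss.backward()                          # only netD's parameters receive gradients
print(netG[0].weight.grad is None)       # True: the generator is untouched
<ECODE>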
Visualization in Variable
null
[ { "contents": "How to visualize a martix in the Variable type, is there any build-in function to use. Thx.", "isAccepted": false, "likes": null, "poster": "JunshengShen" }, { "contents": "you can use matplotlib or your favorite python visualization. <SCODE>x = Variable(...)\nviz_matrix = x.data.numpy()\n# visualize viz_matrix with matplotlib\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
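Filling in the matplotlib step sketched in the answer above (the random tensor is just a placeholder): move the data to CPU, drop any autograd tracking, convert to numpy, and hand it to imshow.
<SCODE>
import torch
import matplotlib.pyplot as plt

x = torch.rand(32, 32)                     # in old code this would be Variable(...).data
plt.imshow(x.detach().cpu().numpy(), cmap="viridis")
plt.colorbar()
plt.show()
<ECODE>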
Mention of conda create env wouldn’t hurt
Site Feedback
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "gue22" }, { "contents": "we dont need a separate env for pytorch, so we didn’t mention it. If the user wants a separate env, then its up to them.", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Implementation details of the loss criterion
null
[ { "contents": "I’m trying to read the source code but confused about where the loss criterion is implemented. I also checked /torch/nn/_functions/thnn , but couldn’t find the implementation details there Thanks!", "isAccepted": false, "likes": null, "poster": "Xixi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
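As a rough map of where to look (the MSE example below is just an illustration): each loss Module in torch.nn is a thin wrapper whose forward calls the matching function in torch.nn.functional, and that function dispatches into the compiled backend. In the release discussed here that roughly meant the THNN wrappers under torch/nn/_functions/thnn; in current releases it is the C++ ATen backend.
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 3)
y = torch.randn(4, 3)

loss_module = nn.MSELoss()
print(loss_module(x, y))      # the Module wrapper...
print(F.mse_loss(x, y))       # ...calls the functional form, which dispatches to the backend
<ECODE>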
Question about Batch Normalization
null
[ { "contents": "The same model without the two self.bn.forward statements gives an accuracy drop to random guessing… does that make sense? <SCODE>class WideNet(nn.Module):\n def __init__(self):\n super(WideNet, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n# self.conv3 = nn.Conv2d(10, 20, kernel_size=2)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(5120, 500)\n self.fcmid = nn.Linear(500, 50)\n self.fc2 = nn.Linear(50, 10)\n self.bn1 = nn.BatchNorm2d(10)\n self.bn2 = nn.BatchNorm2d(20)\n\n def forward(self, x):\n x = F.leaky_relu(F.max_pool2d(self.conv1(x), 2))\n x = self.bn1.forward(x)\n x = F.upsample_bilinear(x, size=(16, 16))\n x = F.leaky_relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = self.bn2.forward(x)\n x = F.upsample_bilinear(x, size=(16, 16))\n x = x.view(-1, 5120)\n x = F.leaky_relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = F.leaky_relu(self.fcmid(x))\n x = F.dropout(x, training=self.training)\n x = F.leaky_relu(self.fc2(x))\n return F.log_softmax(x)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "Not sure, the leaky_relu doesnt seem to have that problem. The upsample is just local replication, so I’m not sure that’s the problem. The issue is that it drops to 10%. The way I fixed it is to have: <SCODE> def forward(self, x):\n x = self.prelu_ac[0](self.bn[0](F.max_pool2d(self.conv1(x), 2)))\n #x = self.bn[0](x)\n x = F.upsample_bilinear(x, size=(16, 16))\n x = self.prelu_ac[1](self.bn[1](F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)))\n #x = self.bn[1](x)\n x = F.upsample_bilinear(x, size=(16, 16))\n x = x.view(-1, 5120)\n x = self.prelu_ac[2](self.bn[2](self.drop1(self.fc1(x))))\n #x = self.bn[2](x)\n x = F.dropout(x, training=self.training)\n x = self.prelu_ac[3](self.bn[3](self.drop2(self.fcmid(x))))\n #x = self.bn[3](x)\n x = F.dropout(x, training=self.training)\n x = self.prelu_ac[4](self.bn[4](self.fc2(x)))\n #x = self.bn[4](x)\n return F.log_softmax(x)\n<ECODE> Any tips for improvement? Should I drop the upsample? I think it helps the second conv layer", "isAccepted": false, "likes": null, "poster": "dmadeka1" } ]
false
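One small style point about the code above, separate from where the BatchNorm layers are placed: modules are normally invoked as self.bn1(x) rather than self.bn1.forward(x), since the direct call goes through __call__ and runs any registered hooks. A tiny sketch with a throwaway layer:
<SCODE>
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(10)
x = torch.randn(8, 10, 12, 12)
y1 = bn(x)            # preferred: goes through __call__, so hooks and bookkeeping run
y2 = bn.forward(x)    # computes the same thing here, but bypasses module hooks
print(torch.allclose(y1, y2))
<ECODE>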
Whether Variable requires_grad
null
[ { "contents": "Here inputs requires_grad default is True, and labels must be False, and my question is whether tmp_conv and h_init requires_grad True in the forward. Many thx <SCODE> import torch.nn as nn\n import torch.nn.functional as F\n\n class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n #alexnet\n self.conv1 = nn.Conv2d(3, 20, 5, stride=1)\n self.conv1_bn = nn.BatchNorm2d(20)\n #for initial\n self.fc_h2l = nn.Linear(hidden_dim, out_dim)\n\n def forward(self, inputs):\n #alexnet\n inputs = F.max_pool2d(F.relu(self.conv1_bn(self.conv1(inputs))), (3, 3), stride = 2)\n #variable to store inputs\n tmp_conv = Variable(torch.zeros(2,batch_size,inputs.size()[1],inputs.size()[2],inputs.size()[3]))\n tmp_conv[0,:,:,:,:] = inputs[:,:,:,:].clone()\n ......\n #for initial\n h_init = Variable(torch.randn(batch_size,hidden_dim))\n step_init = F.sigmoid(self.fc_h2l(h_init))\n .....\n alexnet = Net()\n alexnet.cuda()\n #####train\n inputs= Variable(inpt.cuda())\n labels = Variable(labels.cuda(), requires_grad=False)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Tepp" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Yep, thans, that is exactly what I want. However, in my classification task, tmp_conv and step_init are incorporated to form the final feature representation as below. tmp_conv and h_init should requires_grad True or False?? I am newbie here, hope i don’t bother you. Here func is some sort of function", "isAccepted": false, "likes": null, "poster": "Tepp" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
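A small sketch of the point being made above, with a stand-in linear layer: a tensor created inside forward as a constant does not need requires_grad=True for the parameters to receive gradients, because the graph is built from the operations on the parameters themselves.
<SCODE>
import torch
import torch.nn as nn

fc = nn.Linear(4, 3)
h_init = torch.randn(2, 4)            # plain constant tensor, requires_grad=False
out = torch.sigmoid(fc(h_init)).sum() # gradients still flow back to fc's parameters
out.backward()
print(fc.weight.grad is not None)     # True
print(h_init.requires_grad)           # False: no gradient is tracked for the constant
<ECODE>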
Running average of parameters
null
[ { "contents": "I would like to use the running average of parameters instead of using the parameters from the training directly at the test session. To do this, I initialized the running average parameters from the network as <SCODE>avg_param = torch.cat([param.view(-1) for param in model.parameters()],0)\n<ECODE> Then, I performed the running average at each training iteration as <SCODE>avg_param = 0.9*avg_param + 0.1*torch.cat([param.data.view(-1) for param in model.parameters()],0)\n<ECODE> Finally, at the test session, I loaded the parameters as <SCODE>i = 0\nfor param in model.parameters():\n param = avg_param[i:i+param.nelement()].resize(*param.size())\n i = i+param.nelement()\n<ECODE> Is this process correct ?", "isAccepted": false, "likes": 1, "poster": "Seungyoung_Park" }, { "contents": "There are a few problems I can see: This would be correct: <SCODE>def flatten_params():\n return torch.cat([param.data.view(-1) for param in model.parameters()], 0)\n\ndef load_params(flattened):\n offset = 0\n for param in model.parameters():\n param.data.copy_(flattened[offset:offset + param.nelement()]).view(param.size())\n offset += param.nelement()\n\navg_param = flatten_params() # initialize\n\ndef train():\n ...\n avg_param = 0.9 * avg_param + 0.1 * flatten_params()\n\ndef test():\n original_param = flatten_params() # save current params\n load_params(avg_param) # load the average\n ...\n load_params(original_param) # restore parameters\n \n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Thanks for your reply. Before employing the running average, my code occupied only half of video memory. But, when I tried as you suggested, it didn’t proceed after 2 iterations due to ‘out of memory’. The error is shown below. <SCODE>CompleteTHCudaCheck FAIL file=/data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.9_1487349287443/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory\nTraceback (most recent call last):\n File \"inception_ae_resume_ra.py\", line 251, in <module>\ntrain_iter_loss, avg_param = train(config, epoch, avg_param)\n File \"inception_ae_resume_ra.py\", line 166, in train\navg_param = 0.9*avg_param + 0.1*flatten_params()\n File \"/home/sypark/anaconda2/envs/py36/lib/python3.6/site-packages/torch/tensor.py\", line 320, in __mul__\nreturn self.mul(other)\nRuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.9_1487349287443/work/torch/lib/THC/generic/THCStorage.cu:66<ECODE>", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "Your model must have a lot of parameters. Instead of flattening them into a single big tensor, you process it in parts: <SCODE>from copy import deepcopy\n\navg_param = deepcopy(list(p.data for p in model.parameters()))\n\ndef train():\n ...\n for p, avg_p in zip(model.parameters(), avg_param):\n avg_p.mul_(0.9).add_(0.1, p.data)\n<ECODE> Not sure if you’ll manage to fit another copy of the params in memory, so you can restore them after testing.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for your reply. 
<SCODE>param.data.copy_(flattened[offset:offset + param.nelement()]).view(param.size())\nRuntimeError: copy from list to FloatTensor isn't implemented\n<ECODE> I think the load_param function should be modified due to the list.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I modified the functions as below. <SCODE>def load_params(avg_param):\n for p, avg_p in zip(model.parameters(), avg_param):\n p.data = deepcopy(avg_p)\n\n def flatten_params():\n flatten = deepcopy(list(p.data for p in model.parameters()))\n return flatten\n\ndef load_params(flattened):\n for p, avg_p in zip(model.parameters(), flattened):\n p.data = deepcopy(avg_p)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "This would be better: <SCODE>def load_params(flattened):\n for p, avg_p in zip(model.parameters(), flattened):\n p.data.copy_(avg_p)\n<ECODE> Also, note that they’re no longer flattened, so you might want to change the name.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks. I will change them.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "<SCODE>W = Variable(w_init, requires_grad=True)\nW_avg = Variable(torch.FloatTensor(W).type(dtype), requires_grad=False)\nfor i in range(nb_iterations):\n #some GD stuff...\n W_avg = (1/nb_iter)*W + W_avg<ECODE>", "isAccepted": false, "likes": null, "poster": "pinocchio" } ]
false
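Condensing the final recipe from this thread into one self-contained sketch (the nn.Linear is a stand-in model and the decay value is arbitrary): keep a per-parameter copy, update it in place after each optimizer step, and swap it in around evaluation.
<SCODE>
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                                   # stand-in model
avg_params = [p.detach().clone() for p in model.parameters()]

def update_average(decay=0.9):
    # call after each optimizer.step()
    with torch.no_grad():
        for p, avg_p in zip(model.parameters(), avg_params):
            avg_p.mul_(decay).add_(p, alpha=1 - decay)

def load_params(params):
    # copy a list of tensors into the model's parameters
    with torch.no_grad():
        for p, new_p in zip(model.parameters(), params):
            p.copy_(new_p)

# evaluation: back up the live weights, load the average, then restore
backup = [p.detach().clone() for p in model.parameters()]
load_params(avg_params)
# ... run validation here ...
load_params(backup)
<ECODE>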
How to traverse all the model parameters (including mean/var in BN) in a single run?
null
[ { "contents": "so, any suggestions here? thx.", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shicai" } ]
false
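A concrete way to do the traversal asked about here, with a throwaway Conv+BN model: named_parameters() only yields learnable weights, while state_dict() also contains the BatchNorm running statistics, and named_buffers() (available in current PyTorch versions) yields just those buffers.
<SCODE>
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))      # includes '1.running_mean' and '1.running_var'

for name, buf in model.named_buffers():   # buffers only: the BN running statistics
    print(name, tuple(buf.shape))
<ECODE>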
Multi-label classification in PyTorch
null
[ { "contents": "Hi Everyone, I’m trying to use pytorch for a multilabel classification, has anyone done this yet? I have a total of 505 target labels, and samples have multiple labels (varying number per sample). I tried to solve this by banalizing my labels by making the output for each sample a 505 length vector with 1 at position i, if it maps to label i, and 0 if it doesn’t map to label i. Then, I tried to use the multilabelmarginloss(). Problem I’m facing - with more iterations, the output of the model should move towards a sparse vector with most elements 0 and a few elements 1, but it is actually moving towards a vector with often very large negative values. Can some one tell me how to go about this? Thanks!", "isAccepted": false, "likes": 11, "poster": "SpandanMadan" }, { "contents": "Do you have a smaller dataset? 505 target labels with very few samples might just not train well. Maybe add weight decay?", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "SpandanMadan" }, { "contents": "Can you tell us what kind of loss function you used?", "isAccepted": false, "likes": 2, "poster": "jlee" }, { "contents": "I’d like to try doing the same. Can you suggest any small multi-label datasets, (i.e. samples with multiple labels), to start experimenting with? I’m a novice with this type of problem, so not sure where’s the simplest place to start? Best regards, Ajay", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "i’m also learning pytorch, and take it as an exercise, the input is BCHW, using multilabelmarginloss()", "isAccepted": false, "likes": 1, "poster": "dablyo" }, { "contents": "Kind regards, Ajay", "isAccepted": false, "likes": 1, "poster": "AjayTalati" }, { "contents": "I’ve execute train loop in python console by this way: then", "isAccepted": false, "likes": null, "poster": "dablyo" }, { "contents": "I got simiar result, I got zero loss from the second epoch. Do you find the reason?", "isAccepted": false, "likes": 1, "poster": "sshuair" }, { "contents": "Hi there, hope that helps? Aj", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "bartolsthoorn" }, { "contents": "Kind regards, Ajay", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "@AjayTalati Now once you have your prediction, you need to threshold. 0.5 is the default naive way but it’s probably not optimal. In any case, once you get there, great !", "isAccepted": false, "likes": 8, "poster": "mratsim" }, { "contents": "Does anyone understand how MultiMarginLoss is calculated exactly? I’m not sure I understand completely.", "isAccepted": false, "likes": 1, "poster": "nabergh" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "mratsim" }, { "contents": "Hi Everyone, However, my obtained result isn’t good at all. As a result, my precise questions are as follows:", "isAccepted": false, "likes": null, "poster": "ahkarami" }, { "contents": "Your loss function is correct btw.", "isAccepted": false, "likes": 3, "poster": "mratsim" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ahkarami" }, { "contents": "the MultiLabelMarginLoss doesn’t work, loss become 0 in 2nd minibatch. I’m whole confused", "isAccepted": false, "likes": 1, "poster": "dablyo" }, { "contents": "Hi, I used BCELoss. It’s the standard for multi label classification in many ways. 
Give it a shot.", "isAccepted": false, "likes": 5, "poster": "SpandanMadan" } ]
false
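A minimal multi-label setup along the lines suggested above, with random stand-in data for the 505-label case: nn.BCEWithLogitsLoss folds the sigmoid into the loss (numerically more stable than a separate sigmoid plus BCELoss), and a 0.5 threshold on the sigmoid outputs is the naive starting point mentioned earlier.
<SCODE>
import torch
import torch.nn as nn

num_labels = 505
logits = torch.randn(8, num_labels)                       # raw model outputs for a batch of 8
targets = torch.randint(0, 2, (8, num_labels)).float()    # multi-hot label vectors

criterion = nn.BCEWithLogitsLoss()                        # sigmoid + binary cross-entropy in one
loss = criterion(logits, targets)

probs = torch.sigmoid(logits)
preds = (probs > 0.5).float()                             # naive 0.5 threshold per label
<ECODE>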
How does backward() work for a torch Variable?
null
[ { "contents": "For torch.autograd Variable, there’s the ‘gradient’ input param for the .backward() function. I don’t quite understand what this ‘gradient’ input param stands for, and why this param is needed?", "isAccepted": false, "likes": 2, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Note that if you’re calling it on a loss/cost variable, you don’t need to provide an argument since autograd will assume a tensor of ones.", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "For example, I have an Embedding variable which is 10,000*100. When I used 10th and 20th embedding, will backward function calculate the gradient only for this two embeddings or will it calculate the gradient for all 10,000 embeddings?", "isAccepted": false, "likes": null, "poster": "Adoni" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I also have trouble with this. It says that the argument of backward should be the gradients w.r.t. the output. But consistently across the documentation it is never mentioned of what the gradient w.r.t. the output (must be a conspiracy to drive me crazy). Clearly you can define the gradient as an operator but then you don’t get numbers (like 1). Can the thing of which you take the gradient w.r.t. the output not be stated concisely? Does it depend on some usecase? If I just have y=f(x) for example how do I derive the 1 that is assumed?", "isAccepted": false, "likes": null, "poster": "automatisch" }, { "contents": "Am I getting there?", "isAccepted": false, "likes": 3, "poster": "automatisch" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
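A tiny numeric example of the 'gradient' argument discussed above: for a non-scalar output y, backward(v) computes the vector-Jacobian product v^T (dy/dx), which is why a scalar loss can omit the argument (a tensor of ones is assumed).
<SCODE>
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                                  # non-scalar output, dy/dx = 2 * I

v = torch.tensor([1.0, 0.5, 0.0])          # pretend dL/dy for some downstream scalar L
y.backward(v)                              # accumulates v^T * (dy/dx) into x.grad
print(x.grad)                              # tensor([2.0000, 1.0000, 0.0000])

# for a scalar output, y.backward() is equivalent to y.backward(torch.ones_like(y))
<ECODE>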
How to pass a 3D tensor to Linear layer?
null
[ { "contents": "I have a 3D tensor (5x9x12) I want to cast it to a (5x9x1) tensor through the linear layer. But I found that the nn.LinearLayer require that the input should be a matrix instead of a 3d tensor. How can I achieve my task?", "isAccepted": false, "likes": 1, "poster": "maplewizard" }, { "contents": "You can do that with Tensor.view() <SCODE>x = x.view(-1, 12)\nb_size = x.size(0)\nx = linear(x)\nx = x.view(b_size, -1, 1)<ECODE>", "isAccepted": false, "likes": 1, "poster": "kim.seonghyeon" }, { "contents": "Thanks, it is likely to solve my problem. If there is a single function to to the trick, it would be more clear.", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "unnatjain" }, { "contents": "Could you explain how to do this in a non iterative manner? I think it would be very helpful", "isAccepted": false, "likes": null, "poster": "Dimitrisl" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "mattheus_lee" } ]
false
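For what it is worth, recent PyTorch versions make the view() round-trip above unnecessary: nn.Linear accepts any input shaped (..., in_features) and is applied over the last dimension.
<SCODE>
import torch
import torch.nn as nn

x = torch.randn(5, 9, 12)
linear = nn.Linear(12, 1)
y = linear(x)               # applied over the last dimension, leading dims are preserved
print(y.shape)              # torch.Size([5, 9, 1])
<ECODE>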
How to convert a Variable to a numpy array?
null
[ { "contents": "I found that a tensor can be converted into a numpy array. but how to cast a variable into a numpy array?", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Thanks. It can fulfill my requirement.", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "what about replacing the var by such sort of var.data <-- np.array, it won’t autograd ???", "isAccepted": false, "likes": null, "poster": "Tepp" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "var.data.numpy() will convert Variable to numpy array.", "isAccepted": false, "likes": null, "poster": "deepak_mangla" } ]
false
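A short sketch of the same conversion in current PyTorch, where Variable and Tensor are merged: a tensor that requires grad must be detached before .numpy(), and a CUDA tensor must be moved to the CPU first.
<SCODE>
import torch

t = torch.randn(3, 4, requires_grad=True)
a = t.detach().cpu().numpy()     # the modern equivalent of var.data.numpy()
print(type(a), a.shape)          # <class 'numpy.ndarray'> (3, 4)
<ECODE>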
How to enable the dataloader to sample from each class with equal probability
null
[ { "contents": "The dataloader utility in torch (courtesy of Soumith Chintala) allowed one to sample from each class with equal probability. I was wondering, if there is a straightforward approach to enable the same in pytorch dataloaders.", "isAccepted": false, "likes": 3, "poster": "Gaurav_Pandey" }, { "contents": "<SCODE>y = torch.from_numpy(np.array([0, 0, 1, 1, 0, 0, 1, 1]))\nsampler = StratifiedSampler(class_vector=y, batch_size=2)\n# then pass this sampler as an argument to DataLoader\n<ECODE> Let me know if you need help adapting it. It depends on scikit-learn unfortunately, because they have a ton of good samplers like that and I didn’t feel like reimplementing it.", "isAccepted": false, "likes": 12, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "This is extremely useful. Thanks a lot. I was looking for a code that selects each class randomly with equal probability, and then samples an instance from that class, again with equal probability. However, stratified sampling does the job well. I will try using it in my code and let you know if I have any doubts. Thanks again.", "isAccepted": false, "likes": null, "poster": "Gaurav_Pandey" }, { "contents": "I am trying to get balanced classes for a multi-classes classification task. I have tried to use torch.utils.data.sampler.WeightedRandomSampler with no success. I am using your implementation but i get this error: ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2. Here is my code : <SCODE>BATCH_SIZE = 128\n\ntrain_set = SentimentDataset(file=TRAIN_DATA, word2idx=word2idx, tword2idx=tword2idx,\n max_length=0, max_topic_length=0, topic_bs=True)\nval_set = SentimentDataset(file=VAL_DATA, word2idx=word2idx, tword2idx=tword2idx,\n max_length=0, max_topic_length=0, topic_bs=True)\n\n_weights = 1 / torch.FloatTensor(train_set.weights) # [296, 3381, 12882, 12857, 1016]\n_weights = _weights.view(1, 5)\n_weights = _weights.double()\n\nsampler = StratifiedSampler(_weights, BATCH_SIZE)\n\nloader_train = DataLoader(train_set, batch_size=BATCH_SIZE,\n shuffle=False, sampler=sampler, num_workers=4)\n\nloader_val = DataLoader(val_set, batch_size=BATCH_SIZE,\n shuffle=False, sampler=sampler, num_workers=4)\n\nmodel = RNN(embeddings, num_classes=num_classes, **_hparams)\n\ncriterion = torch.nn.CrossEntropyLoss()\nparameters = filter(lambda p: p.requires_grad, model.parameters())\noptimizer = torch.optim.Adam(parameters)\n\n# TRAIN<ECODE>", "isAccepted": false, "likes": null, "poster": "kkorovesis" }, { "contents": "In your StratifiedSampler, why you calculate n-splits as the number of batches, while you only iterate the shuffle&split iterator once? To my knowledge, n-splits defines the K in K-fold cross validation, StratifiedShuffleSplit just ensure at each cross, the distribution follows the population statistics on the whole dataset? 
For me, it makes sense that if your StratifiedSampler use the n_split =1 since you always reconstruct the StratifiedShuffleSplit?", "isAccepted": false, "likes": 1, "poster": "weiyi_xie" }, { "contents": "Implementation: <SCODE>import torch\nfrom sklearn.model_selection import StratifiedKFold\n\nclass StratifiedBatchSampler:\n \"\"\"Stratified batch sampling\n Provides equal representation of target classes in each batch\n \"\"\"\n def __init__(self, y, batch_size, shuffle=True):\n if torch.is_tensor(y):\n y = y.numpy()\n assert len(y.shape) == 1, 'label array must be 1D'\n n_batches = int(len(y) / batch_size)\n self.skf = StratifiedKFold(n_splits=n_batches, shuffle=shuffle)\n self.X = torch.randn(len(y),1).numpy()\n self.y = y\n self.shuffle = shuffle\n\n def __iter__(self):\n if self.shuffle:\n self.skf.random_state = torch.randint(0,int(1e8),size=()).item()\n for train_idx, test_idx in self.skf.split(self.X, self.y):\n yield test_idx\n\n def __len__(self):\n return len(self.y)\n<ECODE> Usage example: <SCODE>from torch.utils.data import TensorDataset, DataLoader\n\nX = torch.randn(100,20)\ny = torch.randint(0,7,size=(100,))\n\ndata_loader = DataLoader(\n dataset=TensorDataset(X,y),\n batch_sampler=StratifiedBatchSampler(y, batch_size=5)\n)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "rfeinman" }, { "contents": "Looks a good solution! Why providing the seed here instead at creation time? I think there is a little correction to do though:", "isAccepted": false, "likes": 1, "poster": "3nomis" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Kishore_S" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "oustella" } ]
false
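As an alternative to the scikit-learn-based samplers above, the built-in WeightedRandomSampler mentioned earlier balances classes in expectation (rather than exactly per batch) by weighting each sample by the inverse frequency of its class. A self-contained sketch with random stand-in data:
<SCODE>
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

X = torch.randn(100, 20)
y = torch.randint(0, 3, (100,))                     # three classes

class_counts = torch.bincount(y).float()
sample_weights = (1.0 / class_counts)[y]            # weight of each sample = 1 / its class count
sampler = WeightedRandomSampler(sample_weights, num_samples=len(y), replacement=True)

loader = DataLoader(TensorDataset(X, y), batch_size=16, sampler=sampler)
for xb, yb in loader:
    print(torch.bincount(yb, minlength=3))          # roughly balanced counts per batch
<ECODE>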
How to loop over all the variables in an nn.Module
null
[ { "contents": "and register hooks to the gradient?", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "Thanks for your reply. Do you know how to get name or meaningful identifier of variable? I did not find a name field in variable.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "Thanks for your reply. But this seems to only work on layer level? <SCODE>def inves(name=''):\n def f(tensor):\n if np.isnan(torch.mean(tensor).data.cpu().numpy() ):\n print('gradient of {} is'.format(name))\n print(tensor)\n assert 0, 'nan gradient'\n return f\n\nfor key, var in Model.varibles().iteritems():\n var.register_hook(inves(key))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "update: I fond using locals() can be a solution.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "You could try that: <SCODE>def register_nan_checks(model):\n def check_grad(module, grad_input, grad_output):\n if np.isnan(grad_input.data.numpy()):\n print('NaN gradient in ' + type(module).__name__)\n model.apply(lambda module: module.register_backward_hook(check_grad))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "<SCODE>def register_nan_checks(model):\n def check_grad(module, grad_input, grad_output):\n # print(module) you can add this to see that the hook is called\n if any(np.all(np.isnan(gi.data.numpy())) for gi in grad_input if gi is not None):\n print('NaN gradient in ' + type(module).__name__)\n model.apply(lambda module: module.register_backward_hook(check_grad))\n<ECODE> I’ve just checked and it works for me. If I add an additional print, it will show all modules.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "Can you show me the code? It works for me", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks, it works now!", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Thanks for your comments. Currently, I have no clue which part goes wrong.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "Why it’s not suggested to directly use forward, isn’t it just building a graph?", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I dont understand what the issue is, isn’t <SCODE> for W in mdl_sgd.parameters():\n W.data = W.data - eta*W.grad.data\n<ECODE> what u do to loop over? (also it seems ur title doesn’t reflect ur real question I suggest to update it, its confusing and unnecessary).", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" } ]
false
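A compact version of the NaN-checking idea from this thread using named_parameters(), which provides the meaningful identifier asked about early on; registering the hook on each parameter tensor sidesteps the module-level backward-hook caveats discussed above. The toy model and the NaN check are placeholders.
<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))

def make_hook(name):
    def hook(grad):
        if torch.isnan(grad).any():
            print(f"NaN gradient in {name}")
    return hook

for name, p in model.named_parameters():   # yields ('0.weight', tensor), ('0.bias', tensor), ...
    p.register_hook(make_hook(name))

loss = model(torch.randn(2, 4)).sum()
loss.backward()                            # hooks fire as each parameter's grad is computed
<ECODE>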
Division of tensors that don’t have the same size
null
[ { "contents": "When you divide a tensor of size (64,128,32,32) by a tensor of size (64,1,32,32) you get an error because the size must match, yet in situations like these expand() is implicit and numpy does it this way. torch’s div() doesn’t operate that way, am I missing something or is this intentional? The solution is of course expand_as() but the fact that it’s not default leads me to believe that perhaps I missed something.", "isAccepted": false, "likes": 1, "poster": "Veril" }, { "contents": "We want to add broadcasting, but didn’t have time to implement that yet.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "if you are still wondering, it’s been implemented but it should be done properly. <SCODE>\n>>> a = torch.rand(64, 20)\n>>> b = torch.rand(64, 1)\n>>> a/b\ntensor([[ 5.0057e-01, 3.5622e-01, 3.1053e-01, ..., 3.2856e-01,\n 1.0888e+00, 9.7678e-01],\n [ 4.7883e+00, 5.7695e+00, 2.8125e+00, ..., 1.6500e+01,\n 3.5257e+00, 1.5637e+01],\n [ 6.8112e-01, 1.8881e+00, 2.0702e+00, ..., 4.2512e-01,\n 1.4803e+00, 8.5795e-01],\n ...,\n [ 4.1682e-01, 9.6458e-01, 1.1828e+00, ..., 9.9901e-01,\n 1.0716e+00, 1.4875e+00],\n [ 1.2503e-01, 1.2347e+00, 6.0802e-01, ..., 5.0439e-01,\n 1.2536e+00, 1.3501e+00],\n [ 9.1765e-01, 6.7741e-01, 1.0928e+00, ..., 8.1460e-01,\n 9.7924e-01, 3.8059e-01]])\n>>> torch.div(a,b)\ntensor([[ 5.0057e-01, 3.5622e-01, 3.1053e-01, ..., 3.2856e-01,\n 1.0888e+00, 9.7678e-01],\n [ 4.7883e+00, 5.7695e+00, 2.8125e+00, ..., 1.6500e+01,\n 3.5257e+00, 1.5637e+01],\n [ 6.8112e-01, 1.8881e+00, 2.0702e+00, ..., 4.2512e-01,\n 1.4803e+00, 8.5795e-01],\n ...,\n [ 4.1682e-01, 9.6458e-01, 1.1828e+00, ..., 9.9901e-01,\n 1.0716e+00, 1.4875e+00],\n [ 1.2503e-01, 1.2347e+00, 6.0802e-01, ..., 5.0439e-01,\n 1.2536e+00, 1.3501e+00],\n [ 9.1765e-01, 6.7741e-01, 1.0928e+00, ..., 8.1460e-01,\n 9.7924e-01, 3.8059e-01]])\n\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "greed2411" } ]
false
Memory issue when storing torch tensors for DQN
reinforcement-learning
[ { "contents": "Hello, <SCODE>import resource\n\nimport argparse\nimport gym\nimport numpy as np\nfrom itertools import count\nfrom collections import namedtuple\nimport os \n\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\nimport torchvision.transforms as T\nimport cv2\nimport pickle\nimport glob\nimport time\nimport subprocess\nfrom collections import namedtuple\n\n# Class\nclass ReplayMemory(object):\n '''\n A simple class to wrap around the concept of memory\n this helps for managing how much data is used. \n '''\n def __init__(self, capacity):\n self.capacity = capacity\n self.memory = []\n self.position = 0\n \n def push(self, *args):\n \"\"\"Saves a transition.\"\"\"\n if len(self.memory) < self.capacity:\n self.memory.append(None) \n self.memory[self.position] = Transition(*args)\n self.position = (self.position + 1) % self.capacity\n \n def sample(self, batch_size):\n return random.sample(self.memory, batch_size)\n \n def __len__(self):\n return len(self.memory)\n\n# Functions\ndef ProcessState(state,torchOutput=True):\n img = cv2.cvtColor(state, cv2.COLOR_BGR2GRAY)\n img = cv2.resize(img, (imageShape[1],imageShape[0])).astype('float32')\n if torchOutput:\n img = torch.from_numpy(img)\n img /= 255\n img -= 0.5 \n img *= 2\n return img\n\n# Variables\nTransition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward', 'done'))\n\nimageShape = (110,80)\nenv = gym.make('PongDeterministic-v3')\naction = 0 \nmemory = ReplayMemory(32)\n\n# Example with pytorch\nfor i_episode in range(25):\n break\n print 'Pytorch: Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss\n obser = env.reset()\n obser = ProcessState(obser)\n\n state = torch.ones((3,imageShape[0],imageShape[1])) \n state = torch.cat((state,obser.view(1,imageShape[0],imageShape[1])),0)\n\n for t in range(10000): \n obser, reward, done, _ = env.step(0)\n\n #this is new observation getting process\n obser = ProcessState(obser)\n\n state = torch.cat((state,obser.view(1,imageShape[0],imageShape[1])),0)\n \n memory.push(state[:-1], action, state[1:], reward, done)\n\n state = state[1:]\n\n if done:\n break\n# quit()\n# memory = ReplayMemory(32)\n# Numpy\nfor i_episode in range(25):\n print 'Numpy: Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss\n obser = env.reset()\n obser = ProcessState(obser,False)\n\n state = np.zeros((3,imageShape[0],imageShape[1]))\n state = np.concatenate([state, obser.reshape((1,imageShape[0],imageShape[1]))])\n\n for t in range(10000): \n obser, reward, done, _ = env.step(0)\n\n #this is new observation getting process\n obser = ProcessState(obser,False)\n\n # state = torch.cat((state,obser.view(1,imageShape[0],imageShape[1])),0)\n state = np.concatenate([state, obser.reshape((1,imageShape[0],imageShape[1]))])\n \n memory.push(state[:-1], action, state[1:], reward, done)\n state = state[1:]\n\n if done:\n break\n<ECODE> Here is the output I get for running the first loop (using pytorch) vs the second one, which is saving numpy arrays. 
<SCODE>jtremblay@office:~/code/Personal-git/dqn$ python memory_issue.py \n[2017-03-06 12:38:30,254] Making new env: PongDeterministic-v3\nPytorch: Memory usage: 113432 (kb)\nPytorch: Memory usage: 226380 (kb)\nPytorch: Memory usage: 323796 (kb)\nPytorch: Memory usage: 410124 (kb)\nPytorch: Memory usage: 490116 (kb)\nPytorch: Memory usage: 565884 (kb)\nPytorch: Memory usage: 637428 (kb)\nPytorch: Memory usage: 704220 (kb)\nPytorch: Memory usage: 760188 (kb)\nPytorch: Memory usage: 815892 (kb)\nPytorch: Memory usage: 861828 (kb)\nPytorch: Memory usage: 905388 (kb)\nPytorch: Memory usage: 938916 (kb)\nPytorch: Memory usage: 966900 (kb)\nPytorch: Memory usage: 993036 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\nPytorch: Memory usage: 1001484 (kb)\njtremblay@office:~/code/Personal-git/dqn$ python memory_issue.py \n[2017-03-06 12:39:22,433] Making new env: PongDeterministic-v3\nNumpy: Memory usage: 113936 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\nNumpy: Memory usage: 130988 (kb)\n\n<ECODE> As you can see the numpy saving is much more stable. This does not look like much but when you run my script with a replay size of one million frames it crashes quickly. Should I avoid storing torch tensor? I quite like keeping everything as a torch tensor to be honest. It saves me a few torch.from_numpy calls. Is there a way to release memory used by torch, I was not able to find anything on that subject in the documentation. I can provide more examples with learning loops if needed.", "isAccepted": false, "likes": 1, "poster": "jtremblay" }, { "contents": "That’s weird, I’ll look into that tomorrow. Thanks for posting the script.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I look forward hearing from you. Is there a way to for delete a tensor? I also tried to use storage object instead and it did not work. Also here is the version I am using: <SCODE>import torch\ntorch.__version__\n'0.1.9_2'\n<ECODE> [edit] I just tested with the most current version (‘0.1.10+ac9245a’) and i obverse the same problem.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "It should get freed as soon as it goes out of scope (last reference to it is gone).", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Ok this is not what I am observing though. Am I keeping weird references to the tensor somewhere? 
I have “fixed” my script with storing numpy arrays, but I am loosing a lot in performance, about twice the time it took with storing tensor arrays. Is there a fast way to copy numpy arrays into a tensor reference of the same size, eg avoiding using torch.from_numpy()?", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I have done more test for the memory and you are right, the tensor gets clear from the memory when there is no more reference. I am trying to figure out in the script that I have shared where references are kept. Using clone where I could helped reducing the footprint, but there is still a leakage. Here is the updated code. <SCODE>import resource\n\nimport argparse\nimport gym\nimport numpy as np\nfrom itertools import count\nfrom collections import namedtuple\nimport os \n\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\nimport torchvision.transforms as T\nimport cv2\nimport pickle\nimport glob\nimport time\nimport subprocess\nfrom collections import namedtuple\n\n# Class\nclass ReplayMemory(object):\n '''\n A simple class to wrap around the concept of memory\n this helps for managing how much data is used. \n '''\n def __init__(self, capacity):\n self.capacity = capacity\n self.memory = []\n self.position = 0\n \n def push(self, *args):\n \"\"\"Saves a transition.\"\"\"\n if len(self.memory) < self.capacity:\n self.memory.append(None) \n self.memory[self.position] = Transition(*args)\n self.position = (self.position + 1) % self.capacity\n \n def sample(self, batch_size):\n return random.sample(self.memory, batch_size)\n \n def __len__(self):\n return len(self.memory)\n\n# Functions\ndef ProcessState(state,torchOutput=True):\n img = cv2.cvtColor(state, cv2.COLOR_BGR2GRAY)\n img = cv2.resize(img, (imageShape[1],imageShape[0])).astype('float32')\n if torchOutput:\n img = torch.from_numpy(img)\n img /= 255\n img -= 0.5 \n img *= 2\n return img\n\n# Variables\nTransition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward', 'done'))\n\nimageShape = (110,80)\nenv = gym.make('PongDeterministic-v3')\naction = 0 \nmemory = []\nreward = 0\ndone = False \n# Example with pytorch\nfor i_episode in range(25):\n # break\n print ('Pytorch: Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\n obser = env.reset()\n obser = ProcessState(obser).clone()\n\n state = torch.ones((3,imageShape[0],imageShape[1])).clone() \n state = torch.cat((state.clone(),obser.view(1,imageShape[0],imageShape[1])),0).clone()\n\n for t in range(10000): \n obser, reward, done, _ = env.step(0)\n\n #this is new observation getting process\n obser = ProcessState(obser).clone()\n\n state = torch.cat((state.clone(),obser.view(1,imageShape[0],imageShape[1])),0).clone()\n \n memory.append({'state':state[:-1].clone(), 'action': action, 'state1':state[1:].clone(), \n 'reward':reward, 'done':done})\n if len(memory) > 32:\n memory = memory[1:]\n\n state = state[1:].clone()\n\n if done:\n break\n# quit()\n# memory = ReplayMemory(32)\n# Numpy\nfor i_episode in range(25):\n print ('Numpy: Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\n obser = env.reset()\n obser = ProcessState(obser,False)\n\n state = np.zeros((3,imageShape[0],imageShape[1]))\n state = np.concatenate([state, 
obser.reshape((1,imageShape[0],imageShape[1]))])\n\n for t in range(10000): \n obser, reward, done, _ = env.step(0)\n\n #this is new observation getting process\n obser = ProcessState(obser,False)\n\n # state = torch.cat((state,obser.view(1,imageShape[0],imageShape[1])),0)\n state = np.concatenate([state, obser.reshape((1,imageShape[0],imageShape[1]))])\n \n memory.append({'state':state[:-1], 'action': action, 'state1':state[1:], \n 'reward':reward, 'done':done})\n if len(memory) > 32:\n memory = memory[1:]\n state = state[1:]\n\n if done:\n break\n<ECODE> Here is my output: <SCODE>(py3) jtremblay@office:~/code/Personal-git/dqn$ python memory_issue.py \n[2017-03-07 12:19:41,554] Making new env: PongDeterministic-v3\nPytorch: Memory usage: 115628 (kb)\nPytorch: Memory usage: 131180 (kb)\nPytorch: Memory usage: 132500 (kb)\nPytorch: Memory usage: 133556 (kb)\nPytorch: Memory usage: 135932 (kb)\nPytorch: Memory usage: 137252 (kb)\nPytorch: Memory usage: 137780 (kb)\nPytorch: Memory usage: 138308 (kb)\nPytorch: Memory usage: 139364 (kb)\nPytorch: Memory usage: 139892 (kb)\nPytorch: Memory usage: 140420 (kb)\nPytorch: Memory usage: 140684 (kb)\nPytorch: Memory usage: 141476 (kb)\nPytorch: Memory usage: 141476 (kb)\nPytorch: Memory usage: 142004 (kb)\nPytorch: Memory usage: 142268 (kb)\nPytorch: Memory usage: 143060 (kb)\nPytorch: Memory usage: 143588 (kb)\nPytorch: Memory usage: 143852 (kb)\nPytorch: Memory usage: 143852 (kb)\nPytorch: Memory usage: 144116 (kb)\nPytorch: Memory usage: 144380 (kb)\nPytorch: Memory usage: 144380 (kb)\nPytorch: Memory usage: 144644 (kb)\nPytorch: Memory usage: 144644 (kb)\nNumpy: Memory usage: 144908 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\nNumpy: Memory usage: 154932 (kb)\n<ECODE> I have also try to create a simple code that would replicate the behaviour: <SCODE>import torch\nimport resource\nimport numpy as np\n\nimport argparse\nparser = argparse.ArgumentParser(description='Memory issue')\nparser.add_argument('--numpy', action='store_true')\nargs = parser.parse_args()\n\n\na = [None for _ in range(10)]\n# print (a)\nj = 0\n\nif args.numpy:\n state = np.ones((3,200,200))\n state = np.concatenate([state,np.random.rand(1,200,200)],0)\nelse:\n state = torch.ones((3,200,200))\n state = torch.cat((state,torch.rand(1,200,200)),0)\n\nfor i in range(5000):\n if i % 400 is 0:\n if args.numpy:\n print ('Numpy Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\n else:\n print ('Torch Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\n if j > 8:\n j = -1\n j+=1\n\n if args.numpy:\n state = np.concatenate([state,np.random.rand(1,200,200)],0)\n else:\n state = torch.cat((state,torch.rand(1,200,200)),0)\n a[j] = state[0:3]\n state = state[1:]\n<ECODE> Here is the output: <SCODE>(py3) 
jtremblay@office:~/code/Personal-git/dqn$ python memory_simple.py\nTorch Memory usage: 82428 (kb)\nTorch Memory usage: 95536 (kb)\nTorch Memory usage: 96024 (kb)\nTorch Memory usage: 96024 (kb)\nTorch Memory usage: 96024 (kb)\nTorch Memory usage: 96088 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96660 (kb)\nTorch Memory usage: 96808 (kb)\nTorch Memory usage: 96808 (kb)\nTorch Memory usage: 96808 (kb)\nTorch Memory usage: 96808 (kb)\nTorch Memory usage: 96808 (kb)\nTorch Memory usage: 96808 (kb)\n(py3) jtremblay@office:~/code/Personal-git/dqn$ python memory_simple.py --numpy\nNumpy Memory usage: 81132 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\nNumpy Memory usage: 96064 (kb)\n<ECODE> Really I am not sure how this helps or not. As for the time taken storing numpy arrays and then translate them into torch tensor, I will run more experiments and report them here.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "I ran quick experiment running 4 data point in total and I get better performance if I am storing torch tensor rather than numpy. The experiment included running q network on 20 episodes in a deterministic environment The only difference in the code includes converting the numpy array intor a torch array in order to run inference and learning process. I hope this helps.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "Wow, that’s weird. I just ran your test script and I can’t reproduce the issue. If I increase the number of loop iterations the memory usage stabilizes at 93420KB for me (and 94440KB for numpy). Maybe it only happens in Python 2, I’ll need to try", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Nope, can’t reproduce. On Python 2.7 it takes a bit longer to stabilize, but it stops at 98668KB.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "i think this is just the kernel or the default memory allocator being smart / doing some caching. Some allocators / kernels do this, and I’ve seen this in other settings (unrelated to this).", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you for running the test. The smaller snippet of code also stabilize for bigger loops on my machine (96876 kb). But the longer snippet of code (the one which uses the gym environment) does not stabilize. I am not sure why.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "I think I might have been wrong, I ran a very long experiment and after a while the memory usage stabilize. 
I am sorry about this.", "isAccepted": false, "likes": 1, "poster": "jtremblay" }, { "contents": "I decided to run a longer test with the script storing states coming from the gym environment: <SCODE>import resource\n\nimport argparse\nimport gym\nimport numpy as np\nfrom itertools import count\nfrom collections import namedtuple\nimport os \n\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\nimport torchvision.transforms as T\nimport cv2\nimport pickle\nimport glob\nimport time\nimport subprocess\nfrom collections import namedtuple\n\n# Functions\ndef ProcessState(state,torchOutput=True):\n img = cv2.cvtColor(state, cv2.COLOR_BGR2GRAY)\n img = cv2.resize(img, (imageShape[1],imageShape[0])).astype('float32')\n if torchOutput:\n img = torch.from_numpy(img)\n img /= 255\n img -= 0.5 \n img *= 2\n return img\n\n# Variables\nTransition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward', 'done'))\n\nimageShape = (110,80)\nenv = gym.make('PongDeterministic-v3')\naction = 0 \nmemory = []\nreward = 0\ndone = False \n# Example with pytorch\nfor i_episode in range(5000):\n if i_episode % 500 is 0:\n print (str(i_episode)+' Pytorch: Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\n \n obser = env.reset()\n obser = ProcessState(obser).clone()\n\n state = torch.ones((3,imageShape[0],imageShape[1])).clone() \n state = torch.cat((state.clone(),obser.view(1,imageShape[0],imageShape[1])),0).clone()\n\n for t in range(10000): \n obser, reward, done, _ = env.step(0)\n\n #this is new observation getting process\n obser = ProcessState(obser).clone()\n\n state = torch.cat((state.clone(),obser.view(1,imageShape[0],imageShape[1])),0).clone()\n \n memory.append({'state':state[:-1].clone(), 'action': action, 'state1':state[1:].clone(), \n 'reward':reward, 'done':done})\n if len(memory) > 32:\n memory = memory[1:]\n\n state = state[1:].clone()\n\n if done:\n break\n# quit()\n# memory = ReplayMemory(32)\n# Numpy\nfor i_episode in range(50000):\n if i_episode % 500 is 0:\n print (str(i_episode)+' Numpy: Memory usage: %s (kb)' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\n obser = env.reset()\n obser = ProcessState(obser,False)\n\n state = np.zeros((3,imageShape[0],imageShape[1]))\n state = np.concatenate([state, obser.reshape((1,imageShape[0],imageShape[1]))])\n\n for t in range(10000): \n obser, reward, done, _ = env.step(0)\n\n #this is new observation getting process\n obser = ProcessState(obser,False)\n\n # state = torch.cat((state,obser.view(1,imageShape[0],imageShape[1])),0)\n state = np.concatenate([state, obser.reshape((1,imageShape[0],imageShape[1]))])\n \n memory.append({'state':state[:-1], 'action': action, 'state1':state[1:], \n 'reward':reward, 'done':done})\n if len(memory) > 32:\n memory = memory[1:]\n state = state[1:]\n\n if done:\n break\n<ECODE> I am still getting issues for storing pytorch tensor. For some reason a reference is kept to some tensors and they do not get clean out of the memory. Here is the output I got (The pytorch test ran for 6 hours). I stopped the numpy after 3000 episodes as it showed stabilities. 
<SCODE>(py3) jtremblay@office:~/code/Personal-git/dqn$ python memory_issue.py \n[2017-03-08 09:23:04,502] Making new env: PongDeterministic-v3\n0 Pytorch: Memory usage: 116916 (kb)\n500 Pytorch: Memory usage: 159720 (kb)\n1000 Pytorch: Memory usage: 172392 (kb)\n1500 Pytorch: Memory usage: 188232 (kb)\n2000 Pytorch: Memory usage: 204864 (kb)\n2500 Pytorch: Memory usage: 221232 (kb)\n3000 Pytorch: Memory usage: 236540 (kb)\n3500 Pytorch: Memory usage: 252908 (kb)\n4000 Pytorch: Memory usage: 268744 (kb)\n4500 Pytorch: Memory usage: 282472 (kb)\n(py3) jtremblay@office:~/code/Personal-git/dqn$ python memory_issue.py \n[2017-03-08 11:30:58,943] Making new env: PongDeterministic-v3\n0 Numpy: Memory usage: 116532 (kb)\n500 Numpy: Memory usage: 129508 (kb)\n1000 Numpy: Memory usage: 129508 (kb)\n1500 Numpy: Memory usage: 129508 (kb)\n2000 Numpy: Memory usage: 129508 (kb)\n2500 Numpy: Memory usage: 129508 (kb)\n3000 Numpy: Memory usage: 129508 (kb)\n<ECODE> There might be something that I do not understand while manipulating the tensors, in this context I am using torch.cat, clone and from_numpy. Using the same code with numpy arrays does create any instabilities with the memory. I thought using clone everywhere I could would force to freed any reference to previous tensors. If I do not use clone this is the result usage I get after 500 episodes: <SCODE>(py3) jtremblay@office:~/code/Personal-git/dqn$ python memory_issue.py \n[2017-03-08 12:51:20,207] Making new env: PongDeterministic-v3\n0 Pytorch: Memory usage: 115008 (kb)\n500 Pytorch: Memory usage: 492140 (kb)\n<ECODE> Without the clone call, the memory usage is quite large. Also I never invoke the copy function in the numpy part. I am extremely confused by this behaviour.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "Ok, I’ve found and fixed the problem! Thanks for the report, the patch should be in master soon.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jtremblay" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "denizs" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "j.laute" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "denizs" } ]
false
Installation error
null
[ { "contents": "When I install Pytorch on my Mac Pro (cylinder), os: EI Captian, installation from sources, python 2.7", "isAccepted": false, "likes": null, "poster": "Shiyu" }, { "contents": "Are you using conda? It requiers setting an additional env variable on OS X. Please check the README.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Done by updating Xcode. It is so weird.", "isAccepted": false, "likes": null, "poster": "Shiyu" }, { "contents": "Tips to fix -", "isAccepted": false, "likes": null, "poster": "RachelGomez161999" } ]
false
Illegal memory access in backward after first training epoch
null
[ { "contents": "I’m using a DataLoader and looping through my training data in the usual way: <SCODE>for epoch in range(num_epochs):\n\tfor training_batch_idx, training_batch in enumerate(dataloader):\n\t\t# forward/backward propagation code\n<ECODE> <SCODE>THCudaCheck FAIL file=/data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/THCUNN/generic/PReLU.cu line=79 error=77 : an illegal memory access was encountered\nTraceback (most recent call last):\n File \"trainer.py\", line 115, in <module>\n err.backward()\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/_functions/thnn/activation.py\", line 53, in backward\n 1\nRuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/THCUNN/generic/PReLU.cu:79\n<ECODE> The error points to some PReLU code. However, if I replace all the PReLU layers in my net with ReLU, I still get an illegal memory access error; it just points somewhere else: <SCODE>THCudaCheck FAIL file=/data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/THC/generic/THCTensorMath.cu line=26 error=77 : an illegal memory access was encountered\nTraceback (most recent call last):\n File \"trainer.py\", line 115, in <module>\n err.backward()\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/_functions/batchnorm.py\", line 60, in backward\n grad_bias = bias.new(bias.size()).zero_()\nRuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/THC/generic/THCTensorMath.cu:26\n<ECODE> Any thoughts on what might cause an error like this?", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "Is there any chance we could see the model?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Below is a minimal example that produces an illegal memory access error. The really frustrating thing is that seemingly unimportant modifications to the network remove the error. For instance, changing the number of planes in the hidden layer from 8 to 16 removes the error. Also, deleting the ReLU layer removes the error. 
<SCODE>import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.autograd import Variable\nimport torch.utils.data\n\n#---------------------------------------------------------------------------\n# define encoder-decoder network and optimizer\n\nclass encoder_decoder(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv = nn.Conv3d(1,8, kernel_size=2, stride=2)\n self.deconv = nn.ConvTranspose3d(8,1, kernel_size=3, stride=2, output_padding=-1)\n\n def forward(self, net_input):\n out = net_input\n out = self.conv(out)\n out = self.deconv(out)\n out = nn.ReLU()(out)\n return out\n\nnet = encoder_decoder()\nnet.cuda()\n\ncriterion = nn.MSELoss()\ncriterion.cuda()\n\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.5)\n\n#----------------------------------------------------------------------------\n# define dataset and dataloader\n\nclass create_dataset(torch.utils.data.Dataset):\n def __init__(self):\n self.data = [ (torch.rand(1,64,64,64),\n torch.rand(1,64,64,64)) for i in range(100) ]\n\n def __getitem__(self, idx):\n return self.data[idx]\n\n def __len__(self):\n return len(self.data)\n\ndataset = create_dataset()\nprint('Loaded ' + str(len(dataset)) + ' training examples')\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True, num_workers=8)\n\n#----------------------------------------------------------------------------\n# training loop\n\nprint('Start training loop')\nfor epoch in range(4):\n\t\n print('Epoch: ' + str(epoch))\n net.train()\n\n for training_idx, (input_batch,target_batch) in enumerate(dataloader):\n print('Training batch: ' + str(training_idx))\n input_batch = Variable(input_batch.cuda())\n target_batch = Variable(target_batch.cuda())\n\n optimizer.zero_grad()\n output_batch = net(input_batch)\n err = criterion(output_batch, target_batch)\n err.backward()\n optimizer.step()\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "abweiss" }, { "contents": "<SCODE>Loaded 100 training examples\nStart training loop\nEpoch: 0\nTraining batch: 0\nTraceback (most recent call last):\n File \"tmp.py\", line 70, in <module>\n err.backward()\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/conv.py\", line 48, in backward\n if self.needs_input_grad[0] else None)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/conv.py\", line 112, in _grad_input\n cudnn.benchmark)\nRuntimeError: CUDNN_STATUS_EXECUTION_FAILED\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "I’ve tested the minimal example on three GPUs: Maxwell Titan X, GTX 1080, and Pascal Titan X. It fails on all three of them. The 2d version of this encoder-decoder works just fine, but it would be nice to get it working for the 3d case as well. Note, this is a very common network construction (just minus the hidden layers), so anyone segmenting 3d data is going to run into this. For the time being, I’ll just disable cudnn, as suggested.", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "I’ve reproed with cudnn 5.1.10 (CUDNN_EXECUTION_FAILED). 
On cudnn 6.0.5 it returns <SCODE>Traceback (most recent call last):\n File \"conv3d.py\", line 64, in <module>\n output_batch = net(input_batch)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"conv3d.py\", line 19, in forward\n out = self.deconv(out)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/conv.py\", line 613, in forward\n output_padding, self.groups)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/nn/functional.py\", line 141, in conv_transpose3d\n return f(input, weight, bias)\nRuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "Interesting. When I call backward on a 3D convolution that’s supposed to perform the same operation as ConvTranspose, it is OK, but on ConvTranspose it errors out. It should be the same parameters going to the same cudnnConvolutionBackwardData call. <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nconv = nn.Conv3d(1,8, kernel_size=3, padding=1, stride = 2)\nconv = conv.cuda()\ndeconv = nn.ConvTranspose3d(8,1, kernel_size=3, stride=2, output_padding=-1)\ndeconv = deconv.cuda()\n\nx = Variable(torch.randn(8,1,64,64,64).cuda(), requires_grad=True)\n#ok\nfor i in range(10):\n out = conv(x)\n err = out.sum()\n err.backward()\n\nprint(\"deconv\")\n\n#error\nx = Variable(torch.randn(8,8,32,32,32).cuda(), requires_grad=True)\n\nfor i in range(10):\n out = deconv(x)\n err = out.sum()\n err.backward()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "<SCODE>[pid 21910] _C.cpython-35m-x86_64-linux-gnu.so->cudnnGetConvolutionBackwardDataAlgorithm(0xffff7fffffff, { CUDNN_DATA_FLOAT, 0, [ 5, 8, 1, 3... ], CUDNN_TENSOR_NCHW }, { CUDNN_DATA_FLOAT, 0, 5, [ 2097152, 0, 8, 8... ], [ 0, 0, 262144, 32768... ] }, { CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT, 3, [ 0, 0, 0, 0... ], [ 2, 2, 2, 0... ], [ 1, 1, 1, 0... ] }, { CUDNN_DATA_FLOAT, 0, 5, [ 2097152, 0, 8, 1... ], [ 0, 0, 262144, 262144... ] }, 1, 0, 0 <unfinished ...>\n[pid 21917] _C.cpython-35m-x86_64-linux-gnu.so->cudnnGetConvolutionBackwardDataAlgorithm(0xffff7fffffff, { CUDNN_DATA_FLOAT, 0, [ 5, 8, 1, 3... ], CUDNN_TENSOR_NCHW }, { CUDNN_DATA_FLOAT, 0, 5, [ 2097152, 0, 8, 8... ], [ 0, 0, 262144, 32768... ] }, { CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT, 3, [ 1, 1, 1, 0... ], [ 2, 2, 2, 0... ], [ 1, 1, 1, 0... ] }, { CUDNN_DATA_FLOAT, 0, 5, [ 2097152, 0, 8, 1... ], [ 0, 0, 262144, 262144... ] }, 1, 0, 0 <unfinished ...>\n<ECODE> It is still a mystery to me how it can work anywhere (on Kepler? With 16 planes? deleting ReLU?). Also, it would probably be a good idea to add a deconv test with output_padding; there are probably none now.", "isAccepted": false, "likes": 2, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ngimel" }, { "contents": "Is the problem solved? I am getting the same error when using Conv3d. After updating PyTorch I am getting: This error pops up not in the first epoch but the fourth epoch for me.", "isAccepted": false, "likes": 2, "poster": "codeAC29" }, { "contents": "I’ve just seen this too. 
A model that had been working gives an error after 17 epochs: <SCODE> File \"/home/jhoward/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py\", line 776, in binary_cross_entropy\n return _functions.thnn.BCELoss.apply(input, target, weight, size_average)\n File \"/home/jhoward/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py\", line 47, in forward\n output, *ctx.additional_args)\nRuntimeError: cudaEventSynchronize in future::wait: an illegal memory access was encountered\n<ECODE> This is with the torchvision resnet34 model, with 244x244 input, and AdaptiveAveragePooling2D before the linear layer. Let me know if there’s any more info I can provide. PyTorch is the current version from conda as of yesterday. Python 3.6. All conda packages updated.", "isAccepted": false, "likes": 1, "poster": "jphoward" }, { "contents": "Is this with ImageNet? If not, how big is your dataset? What GPU were you using? What CUDA version did you install? Did you use conda install pytorch torchvision cuda80 -c soumith?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jphoward" }, { "contents": "Got this error again after about 110K steps using nn.DataParallel. The following is the system config:", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "I was having this same issue but on a tabular dataset using the fastai library on top of PyTorch and couldn’t figure it out. Ultimately I fixed the issue by running with CUDA_LAUNCH_BLOCKING=1 to get the real stack trace. Then I saw PyTorch was trying to save to a tmp directory that was empty but for some reason caused the illegal memory access error, possibly because the directory was being used by something else in the background? Anyway, deleting the empty directory fixed the issue.", "isAccepted": false, "likes": null, "poster": "whamp" }, { "contents": "CUDA error after cudaEventDestroy in future dtor: an illegal memory access was encountered RuntimeError: cudaEventSynchronize in future::wait: an illegal memory access was encountered I encountered this error while training my model. I am using PyTorch version 0.4.0, CUDA version 9.0 and cuDNN version 9.70. The only way currently for me to overcome this problem is to set <SCODE>torch.backends.cudnn.enabled=False\n<ECODE> If I disable cudnn, I observe the training takes more time than usual. Is there a fix to this issue?", "isAccepted": false, "likes": null, "poster": "Praveen_Tirupattur" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shashankvkt" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "back2yes" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" } ]
false
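Two debugging steps recur in the thread above: forcing synchronous CUDA launches so the Python traceback points at the kernel that actually failed, and disabling cuDNN to check whether the crash is cuDNN-specific. A minimal sketch of both; the tiny Conv3d model and random input are stand-ins, not the code from the thread:

<SCODE>
# Launch the script with synchronous CUDA kernels to get an accurate traceback:
#   CUDA_LAUNCH_BLOCKING=1 python trainer.py

import torch
import torch.nn as nn

# Disable cuDNN globally to see whether the illegal access is cuDNN-specific.
# Training will be slower, but it is a useful isolation step.
torch.backends.cudnn.enabled = False

net = nn.Conv3d(1, 8, kernel_size=2, stride=2).cuda()   # stand-in model
x = torch.rand(2, 1, 64, 64, 64, device="cuda")
net(x).sum().backward()                                 # exercise forward and backward
<ECODE>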
Question about Dropout
null
[ { "contents": "Hi, I would like to know how you guys deal with the dropout in testing since the dropout rate should be set to 0 while testing. I directly set model.training = False while testing. Or is there other ways to handle this?", "isAccepted": false, "likes": 2, "poster": "YongyiTang92" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "ypxie" }, { "contents": "That’s cool. Thanks.", "isAccepted": false, "likes": null, "poster": "YongyiTang92" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "micklexqg" }, { "contents": "Yes, it’s. You have to set to 0 when you evaluate your model.", "isAccepted": false, "likes": 1, "poster": "YongyiTang92" } ]
false
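The idiomatic way to handle this, rather than flipping model.training by hand, is to call model.eval() before validation or testing and model.train() when training resumes; eval() recursively sets training = False on every submodule, which turns nn.Dropout into a no-op (and switches BatchNorm to its running statistics). A small sketch with a made-up model:

<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5), nn.Linear(10, 2))
x = torch.randn(4, 10)

model.train()            # dropout active: repeated forward passes differ
out_a, out_b = model(x), model(x)

model.eval()             # dropout disabled: forward passes are deterministic
with torch.no_grad():    # also skip autograd bookkeeping during evaluation
    out_c, out_d = model(x), model(x)

print(torch.allclose(out_a, out_b))  # usually False
print(torch.allclose(out_c, out_d))  # True
<ECODE>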
Anything like transformer for TensorDataset?
null
[ { "contents": "So the second scheme is to read in a bunch of data into the memory at a time into tensors: Anything like the transformer in the first approach for TensorDataset?", "isAccepted": false, "likes": 2, "poster": "siberiamark" }, { "contents": "No, we don’t support that at the moment but it would make sense to add that. We’re going to be rewriting some transforms soon and we’ll do that then. Thanks for a suggestion!", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Is transform available for TensorDataset now?", "isAccepted": false, "likes": null, "poster": "adelaide" }, { "contents": "this looks like something you can integrate into.", "isAccepted": false, "likes": 1, "poster": "zym1010" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "RylanSchaeffer" } ]
false
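Until something like this is supported directly, a small custom Dataset gives the same effect: keep the tensors in memory and apply a transform in __getitem__ so the DataLoader sees transformed samples. A sketch; the class name and the example transform are made up for illustration:

<SCODE>
import torch
from torch.utils.data import DataLoader, Dataset


class TransformTensorDataset(Dataset):
    """Like TensorDataset, but applies an optional transform per sample."""

    def __init__(self, data, targets, transform=None):
        assert len(data) == len(targets)
        self.data, self.targets, self.transform = data, targets, transform

    def __getitem__(self, idx):
        x = self.data[idx]
        if self.transform is not None:
            x = self.transform(x)
        return x, self.targets[idx]

    def __len__(self):
        return len(self.data)


data = torch.rand(100, 3, 32, 32)
targets = torch.randint(0, 10, (100,))
dataset = TransformTensorDataset(data, targets, transform=lambda x: x * 2 - 1)
loader = DataLoader(dataset, batch_size=16, shuffle=True)
<ECODE>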
RuntimeError: input is not contiguous
null
[ { "contents": "When i reshape a tensor (if the rank is changed), i get the following error. <SCODE>x = ... # Tensor of shape (100, 20)\nx.view(-1) # expect a tensor of shape (2000)\n\nRuntimeError: input is not contiguous\n<ECODE> What does ‘contiguous’ mean and why does this error occur?", "isAccepted": false, "likes": 5, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 27, "poster": "apaszke" }, { "contents": "Can you tell when we get this error? I am facing this problem but not always. I am just wondering why this is happening occasionally?", "isAccepted": false, "likes": 3, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "parth126" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "colesbury" }, { "contents": "A classical way to obtain this bug is to use transposition. If you do <SCODE>x = torch.Tensor(5,2)\ny = x.t()\n<ECODE> Then, the storage of y is still the same than the one of x. You can check: <SCODE>x.fill_(0)\n 0 0\n 0 0\n 0 0\n 0 0\n 0 0\n[torch.FloatTensor of size 5x2]\n\ny\n 0 0 0 0 0\n 0 0 0 0 0\n[torch.FloatTensor of size 2x5]\n<ECODE> <SCODE>y.view(-1)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nRuntimeError: input is not contiguous at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:231\n<ECODE>", "isAccepted": false, "likes": 20, "poster": "alexis-jacq" }, { "contents": "Thanks. That makes complete sense now.", "isAccepted": false, "likes": null, "poster": "parth126" }, { "contents": "Why is it not good idea to add automatic tensor.contiguous() to transpose function?", "isAccepted": false, "likes": null, "poster": "ducha-aiki" }, { "contents": "I guess, because it saves a lot of memory to keep one storage for each tensor, while you don’t apply any reshaping transformation.", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "So, in the following case z=y.contiguous().view(-1) , will the gradients of some error with respect to z backpropgate to x?.", "isAccepted": false, "likes": 1, "poster": "zakaria_laskar" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "colesbury" }, { "contents": "thanks for your clear and useful reply ~~~", "isAccepted": false, "likes": null, "poster": "alphadl" }, { "contents": "<SCODE> xn_lstm = torch.cat((loss_prep, err_prep, grad_prep), 1).unsqueeze(0) # [n_learner_params, 6]\n # normal lstm([loss, grad_prep, train_err]) = lstm(xn)\n n_learner_params = xn_lstm.size(1)\n (lstmh, lstmc) = hs[0] # previous hx from first (standard) lstm i.e. lstm_hx = (lstmh, lstmc) = hs[0]\n if lstmh.size(1) != xn_lstm.size(1): # only true when prev lstm_hx is equal to decoder/controllers hx\n # make sure that h, c from decoder/controller has the right size to go into the meta-optimizer\n expand_size = torch.Size([1,n_learner_params,self.lstm.hidden_size])\n lstmh, lstmc = lstmh.squeeze(0).expand(expand_size), lstmc.squeeze(0).expand(expand_size)\n lstm_out, (lstmh, lstmc) = self.lstm(input=xn_lstm, hx=(lstmh, lstmc))\n<ECODE> error: <SCODE>RuntimeError: rnn: hx is not contiguous\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" } ]
false
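A compact restatement of the fix discussed above, in current PyTorch syntax: a transposed tensor is a view with permuted strides over the same storage, so flatten it by calling .contiguous() before .view(), or use .reshape(), which copies only when it has to.

<SCODE>
import torch

x = torch.rand(5, 2)
y = x.t()                     # transposed view: same storage, different strides

# y.view(-1)                  # would raise: view() needs contiguous memory
z = y.contiguous().view(-1)   # copy into contiguous memory, then flatten
w = y.reshape(-1)             # equivalent here; copies only if necessary
print(z.shape, w.shape)       # torch.Size([10]) torch.Size([10])
<ECODE>

Since .contiguous() just makes a copy when the tensor is not already contiguous, gradients still flow back through it to the original tensor, which answers the backpropagation question raised above.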
[resolved] How to implement k-max pooling for CNN text classification?
null
[ { "contents": "A possible implmentation is <SCODE>def kmax(self, x, k):\n return x.sort(dim = 3)[0][:, :, :, -k:]\n<ECODE> However, this cannot keep the information of relative position.", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "Then I think I don’t understand the problem. What’s the exact formula for k-pooling and what’s the problem with your implementation?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>import torch\ndef kmax_pooling(x, dim, k):\n index = x.topk(k, dim = dim)[1].sort(dim = dim)[0]\n return x.gather(dim, index)\nx = torch.rand(4, 5, 6, 10)\ny = kmax_pooling(x, 3, 5)\nprint(x[0, 0])\nprint(y[0, 0])\n<ECODE> Output: <SCODE> 0.2762 0.3788 0.5708 0.3251 0.0568 0.2483 0.3930 0.1249 0.1874 0.1113\n 0.9230 0.7428 0.0957 0.2301 0.6187 0.8898 0.3007 0.2653 0.5313 0.1032\n 0.6376 0.9639 0.6584 0.1502 0.0250 0.5792 0.9283 0.1783 0.9545 0.1681\n 0.8456 0.6135 0.2860 0.9366 0.5178 0.0113 0.4864 0.9308 0.3005 0.5403\n 0.3280 0.8755 0.2290 0.0899 0.9093 0.6971 0.1557 0.2412 0.7991 0.9169\n 0.5389 0.4603 0.7291 0.4070 0.0113 0.3571 0.3860 0.3354 0.4081 0.0209\n[torch.FloatTensor of size 6x10]\n\n\n 0.2762 0.3788 0.5708 0.3251 0.3930\n 0.9230 0.7428 0.6187 0.8898 0.5313\n 0.6376 0.9639 0.6584 0.9283 0.9545\n 0.8456 0.6135 0.9366 0.9308 0.5403\n 0.8755 0.9093 0.6971 0.7991 0.9169\n 0.5389 0.4603 0.7291 0.4070 0.4081\n[torch.FloatTensor of size 6x5]\n<ECODE>", "isAccepted": false, "likes": 8, "poster": "WarBean" } ]
false
How to compute a multiplication between a matrix and a number
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "JunshengShen" }, { "contents": "What kind of product do you need? You want to scale each matrix element by the same scalar? Try this: <SCODE>a = ...\nb = ...\nb * a.expand_as(b)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I am wondering if using expand_as is efficient in this case ? does it store only the original value and the repetitions across each axis or something similar ? it would be cheaper to divide each matrix element by the same value without having to create the whole matrix containing the same values.", "isAccepted": false, "likes": null, "poster": "IssamLaradji" } ]
false
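If the “number” is a plain Python scalar, broadcasting handles the multiplication directly and no matrix-sized intermediate is allocated; and even when the scalar sits inside a tensor, expand_as returns a stride-0 view rather than a real copy, so it is not as wasteful as it may look. A short sketch in current PyTorch syntax:

<SCODE>
import torch

b = torch.rand(3, 4)
a = torch.tensor(2.5)      # the scalar held in a 0-dim tensor

c1 = b * 2.5               # plain Python scalar: nothing the size of b is materialized
c2 = b * a                 # broadcasting: same result, no expanded copy either
c3 = b * a.expand_as(b)    # expand_as gives a stride-0 view, not a real 3x4 copy
print(torch.equal(c1, c2), torch.equal(c2, c3))  # True True
<ECODE>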
Partially reset a Variable, in-place vs. new Variable
null
[ { "contents": "<SCODE>def selective_zero(s, new):\n for b, reset in enumerate(new):\n if reset:\n for state_layer in s:\n state_layer.data[b].zero_()\n\nselective_zero(state, y[t + 1] != y[t])\n<ECODE> In order to complete this, I was thinking to register a hook in order to zero the gradient correspondingly as well. Something about this: <SCODE>for state_layer in s:\n state_layer.data[b].zero_()\n state_layer.register_hook(lambda grad: grad[b].zero_())\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "If what you want is to reset the content of the tensor to use it again independently of how you were using it before, you should repack it in a new Variable otherwise it will still have the history of the previous usage.", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "<SCODE>state_layer.data[b].zero_()\nstate_layer.register_hook(lambda grad: grad[b].zero_())\n<ECODE> is equivalent to your <SCODE>state_layer[b].zero_()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "OK, making progress: <SCODE>In [22]: c\nOut[22]: \nVariable containing:\n 0.8491\n 0.1877\n 0.1560\n 0.5188\n 0.7464\n[torch.FloatTensor of size 5]\n\nIn [23]: set(dir(c)) - set(dir(c.data))\nOut[23]: \n{'__getattr__',\n '__rpow__',\n '__setstate__',\n '_add',\n '_addcop',\n '_backward_hooks',\n '_blas',\n '_creator',\n '_do_backward',\n '_execution_engine',\n '_fallthrough_methods',\n '_get_type',\n '_grad',\n '_static_blas',\n '_sub',\n '_version',\n 'backward',\n 'creator',\n 'data',\n 'detach',\n 'detach_',\n 'grad',\n 'index_add',\n 'index_copy',\n 'index_fill',\n 'masked_copy',\n 'masked_fill',\n 'output_nr',\n 'register_hook',\n 'reinforce',\n 'requires_grad',\n 'resize',\n 'resize_as',\n 'scatter',\n 'volatile'}\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "<SCODE>>>> a = V(torch.rand(5), requires_grad=True)\nVariable containing:\n 0.8491\n 0.1877\n 0.1560\n 0.5188\n 0.7464\n[torch.FloatTensor of size 5]\n\n>>> c = a[2] * 0\nVariable containing:\n 0\n[torch.FloatTensor of size 1]\n<ECODE> <SCODE>>>> c = a.index_fill(0, 3, 0)\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-25-ad8f493be136> in <module>()\n----> 1 c = a.index_fill(0, 3, 0)\n\n/home/atcold/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py in index_fill(self, dim, index, value)\n 627 \n 628 def index_fill(self, dim, index, value):\n--> 629 return IndexFill(dim, value)(self, index)\n 630 \n 631 def index_fill_(self, dim, index, value):\n\nRuntimeError: expected a Variable argument, but got int\n<ECODE> The only think that worked is masking, but it’s not acceptable… <SCODE>In [12]: mask = V(torch.Tensor([1, 1, 1, 0, 1]))\n\nIn [13]: b = a * mask\n\nIn [14]: b\nOut[14]: \nVariable containing:\n 0.8491\n 0.1877\n 0.1560\n 0.0000\n 0.7464\n[torch.FloatTensor of size 5]\n\nIn [15]: m = b.mean()\n\nIn [16]: m\nOut[16]: \nVariable containing:\n 0.3879\n[torch.FloatTensor of size 1]\n\nIn [17]: m.backward()\n\nIn [18]: a.grad\nOut[18]: \nVariable containing:\n 0.2000\n 0.2000\n 0.2000\n 0.0000\n 0.2000\n[torch.FloatTensor of size 
5]\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "<SCODE>import torch\nfrom torch.autograd import Variable\n\na = Variable(torch.rand(5, 3), requires_grad=True)\na = a.clone() # Otherwise we change inplace a leaf Variable\nprint(a)\n\nind = Variable(torch.LongTensor([3]))\na.index_fill_(0, ind, 0)\n\nprint(a)\n\na[1, :] = 0\n\nprint(a)\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Atcold" } ]
false
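Restating the accepted pattern above in current PyTorch terms: clone the leaf tensor first, then zero the chosen rows in place on the clone. The gradient flowing back through the overwritten rows is automatically zero, so no hook is needed. A small sketch:

<SCODE>
import torch

a = torch.rand(5, 3, requires_grad=True)   # leaf parameter
b = a.clone()                              # never modify a leaf in place
b[3, :] = 0                                # partial reset on the clone

b.sum().backward()
print(a.grad)                              # ones everywhere except row 3, which is zero
<ECODE>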
Sampling new text from Pytorch language model example
null
[ { "contents": "Hi, <SCODE> output, hidden = model(input, hidden)\n word_weights = output.squeeze().data.div(args.temperature).exp().cpu()\n word_idx = torch.multinomial(word_weights, 1)[0]\n input.data.fill_(word_idx)\n word = corpus.dictionary.idx2word[word_idx]\n<ECODE> I understand that this is not necessarily a Pytorch question, but would appreciate if someone could share details behind the sampling.", "isAccepted": false, "likes": 1, "poster": "jatinganhotra" }, { "contents": "A guess would be that it makes the output richer. Sticking to the most probable words would restrict the model to always use the most commonly used words, while if you use softmax, it should end up using the words approximately as often as they appear in natural language (so it will sometimes insert some more complex ones too).", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
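For contrast, here is a small sketch of the two decoding strategies discussed above: greedy decoding (argmax) versus temperature-scaled sampling like the snippet in the question. Lower temperatures sharpen the distribution toward argmax, higher ones flatten it. The logits are made up, and torch.multinomial accepts unnormalized non-negative weights, so no explicit softmax is needed.

<SCODE>
import torch

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])    # fake next-word scores

greedy_idx = torch.argmax(logits).item()        # always picks the same word

temperature = 0.8
word_weights = logits.div(temperature).exp()    # unnormalized probabilities
sampled_idx = torch.multinomial(word_weights, 1).item()

probs = torch.softmax(logits / temperature, dim=0)
print(greedy_idx, sampled_idx, probs)
<ECODE>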
Fcn using pretrained vgg16 in model zoo?
vision
[ { "contents": "In summary, I first implemented a FCN-32 network class and loaded/reshaped pre-trained weights in the same way as caffe did. Before feeding data into the network, every image is scaled to [0 1], subtracted by mean [0.485, 0.456, 0.406], and divided by std [0.229, 0.224, 0.225]. I guess this is the same normalization used for training the model zoo vgg16 network. And the network is trained using the exact same parameters as caffe version. <SCODE>#-------- main part --------\nmodel = FCN_32()\n\npth_file = 'vgg16-82412952.pth' # download from model zoo\npre_trained_weights = torch.load(pth_file)\n\nlayer_names = [layer_name for layer_name in pre_trained_weights]\n\ncounter = 0\nfor p in model.parameters():\n if counter < 26: # conv1_1 to pool5\n p.data = pre_trained_weights[ layer_names[counter] ]\n elif counter == 26: # fc6 weight\n p.data = pre_trained_weights[ layer_names[counter] ].view(4096, 512, 7, 7)\n elif counter == 27: # fc6 bias\n p.data = pre_trained_weights[ layer_names[counter] ]\n elif counter == 28: # fc7 weight\n p.data = pre_trained_weights[ layer_names[counter] ].view(4096, 4096, 1, 1)\n elif counter == 31: # upscore layer\n m, k, h, w = 34, 34, 64, 64\n filter = upsample_filt(h)\n filter = torch.from_numpy(filter.astype('float32'))\n p.data = filter.repeat( m, k, 1, 1 )\n counter += 1\n\ndef upsample_filt(size):\n \"\"\"\n Make a 2D bilinear kernel suitable for upsampling of the given (h, w) size.\n \"\"\"\n factor = (size + 1) // 2\n if size % 2 == 1:\n center = factor - 1\n else:\n center = factor - 0.5\n og = np.ogrid[:size, :size]\n return (1 - abs(og[0] - center) / factor) * \\\n (1 - abs(og[1] - center) / factor)\n\n\n\n#-------- FCN-32 implementation part --------\nimport torch\nimport torch.nn as nn\nimport math\nimport torch.utils.model_zoo as model_zoo\n\n\ndef conv3x3(in_planes, out_planes, stride=1, padding=1):\n \"3x3 convolution with padding\"\n return nn.Conv2d(in_planes, out_planes, kernel_size=(3,3), stride=(stride,stride),\n padding=(padding,padding))\n\ndef conv1x1(in_planes, out_planes, stride=1):\n \"1x1 convolution with padding\"\n return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,\n padding=0)\n\n\n\n\n\nclass FCN_32(nn.Module):\n def __init__(self):\n super(FCN_32, self).__init__()\n\n # vgg part\n self.conv1_1 = conv3x3(3, 64, stride=1, padding=100)\n self.conv1_2 = conv3x3(64, 64)\n \n self.conv2_1 = conv3x3(64, 128)\n self.conv2_2 = conv3x3(128, 128)\n \n self.conv3_1 = conv3x3(128, 256)\n self.conv3_2 = conv3x3(256, 256)\n self.conv3_3 = conv3x3(256, 256)\n\n self.conv4_1 = conv3x3(256, 512)\n self.conv4_2 = conv3x3(512, 512)\n self.conv4_3 = conv3x3(512, 512)\n\n self.conv5_1 = conv3x3(512, 512)\n self.conv5_2 = conv3x3(512, 512)\n self.conv5_3 = conv3x3(512, 512)\n \n self.fc6 = nn.Conv2d(512, 4096, kernel_size=7, stride=1, padding=0)\n self.dropout = nn.Dropout()\n self.fc7 = nn.Conv2d(4096, 4096, kernel_size=1, stride=1, padding=0)\n \n self.score_fr_sem = nn.Conv2d(4096, 34, kernel_size=1, stride=1, padding=0, bias=False)\n \n self.upscore_sem = nn.ConvTranspose2d(34, 34, kernel_size=64, stride=32, padding=0, output_padding=0, bias=False)\n\n \n self.pool = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)\n\n\n\n self.relu = nn.ReLU(inplace=True)\n \n self.softmax = nn.Softmax()\n \n self._initialize_weights()\n\n def forward(self, x):\n # vgg part\n conv1_1 = self.relu( self.conv1_1(x) )\n conv1_2 = self.relu( self.conv1_2(conv1_1) )\n pool1 = self.pool(conv1_2)\n \n conv2_1 = self.relu( 
self.conv2_1(pool1) )\n conv2_2 = self.relu( self.conv2_2(conv2_1) )\n pool2 = self.pool(conv2_2)\n \n conv3_1 = self.relu( self.conv3_1(pool2) )\n conv3_2 = self.relu( self.conv3_2(conv3_1) )\n conv3_3 = self.relu( self.conv3_3(conv3_2) )\n pool3 = self.pool(conv3_3)\n \n conv4_1 = self.relu( self.conv4_1(pool3) )\n conv4_2 = self.relu( self.conv4_2(conv4_1) )\n conv4_3 = self.relu( self.conv4_3(conv4_2) )\n pool4 = self.pool(conv4_3)\n \n conv5_1 = self.relu( self.conv5_1(pool4) )\n conv5_2 = self.relu( self.conv5_2(conv5_1) )\n conv5_3 = self.relu( self.conv5_3(conv5_2) )\n pool5 = self.pool(conv5_3)\n \n fc6 = self.dropout( self.relu( self.fc6(pool5) ) )\n fc7 = self.dropout( self.relu( self.fc7(fc6) ) )\n \n score_fr_sem = self.score_fr_sem( fc7 )\n \n upscore_sem = self.upscore_sem(score_fr_sem)\n \n crop = upscore_sem[:, :, 19:19+256, 19:19+256] # batch, 34, 256, 256\n \n crop = crop.transpose( 1, 3 )\n crop = crop.transpose( 1, 2 ) # batch, 256, 256, 34\n \n output = crop.contiguous().view(-1, crop.size(3))\n \n output = self.softmax(output)\n \n \n return output\n \n \n \n def _initialize_weights(self):\n for m in self.modules():\n if isinstance(m, nn.Conv2d):\n n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n m.weight.data.normal_(0, math.sqrt(2. / n))\n if m.bias is not None:\n m.bias.data.zero_()\n elif isinstance(m, nn.BatchNorm2d):\n m.weight.data.fill_(1)\n m.bias.data.zero_()\n elif isinstance(m, nn.Linear):\n n = m.weight.size(1)\n m.weight.data.normal_(0, 0.01)\n m.bias.data.zero_()<ECODE>", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Any idea why it’s worse?", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "So you mean, the pretrained vgg model by pytorch is not done exactly the same way as vgg paper did, right?", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "No, it is not. Tt least provided vgg model used a different image preprocess method.", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Well, if that’s the case, then if you follow the exactly same preprocessing method which trained the pytorch model, then you would get the same accuracy as you would get on a caffe converted model, isn’t it?", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "Yes, you are right. The performance is lower than expected even using the same preprocessing method. 
Using the Caffe-converted model is better.", "isAccepted": false, "likes": null, "poster": "qianguih" }, { "contents": "Previous | New error rates <SCODE>-VGG-11 32.14 12.12\t +VGG-11 30.98 11.37\n-VGG-13 31.04 11.40\t +VGG-13 30.07 10.75\n-VGG-16 29.11 10.17\t +VGG-16 28.41 9.62\n-VGG-19 28.42 9.69\t +VGG-19 27.62 9.12\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "VGG does not have BatchNorm", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "By using batchnorm layers during training, we can get better performance for VGG models.", "isAccepted": false, "likes": null, "poster": "shicai" }, { "contents": "I understand, but we cannot put it into the reference VGG model; everyone expects VGG to have a certain architecture that mirrors the paper.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Well, the previous error rates are intimidating.", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "Isn’t this error rate still far off the original error rate? So are the PyTorch VGG models about 2% worse than the original Caffe VGG?", "isAccepted": false, "likes": null, "poster": "yxchng" } ]
false
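For reference, the normalization described at the top of the thread is the standard torchvision preprocessing for the model-zoo weights, whereas Caffe-converted VGG weights expect a different pipeline (BGR channel order, pixel values in [0, 255], per-channel mean subtraction only). The sketch below shows the torchvision-style transform; the Caffe-style helper uses the commonly quoted BGR means from the original Caffe recipe and should be double-checked against whichever converted model is actually used:

<SCODE>
import torch
from torchvision import transforms

# torchvision model-zoo VGG16: RGB, scaled to [0, 1], then mean/std normalized
torchvision_preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                        # [0, 1], RGB, shape (3, H, W)
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def caffe_style_preprocess(img_0_1):
    """Caffe-style VGG input (assumed): BGR, [0, 255], mean subtraction only."""
    x = img_0_1[[2, 1, 0], :, :] * 255.0          # RGB -> BGR, rescale
    mean = torch.tensor([103.939, 116.779, 123.68]).view(3, 1, 1)
    return x - mean
<ECODE>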
Bug or am I missing something?
null
[ { "contents": "Please check out the following short program that reproduces the bug I was facing. Imagine in each module that self.params is just some transition matrix. The goal is to sum up the scores of each transition, where params[i][j] is the score of transitioning to i from j. Both WorkingModule and BuggyModule have a forward() function that correctly computes this score. WorkingModule does what you would expect. If you check its gradient after calling backward(), you will see 1’s in the places where there was a transition, and 0’s elsewhere. BuggyModule though, doesn’t backpropagate to self.params! The difference is that in this case, the sequence was wrapped in an autograd.Variable, and the transition indices are accessed with .data. I understand the dangers of .data and how it might cut off your backprop, but how is it cutting off the backprop from score to params? The only way sequence is ever involved is just providing an index. In principle, score should not be cut off from params in the computation graph unless I am missing something. In addition, I think that sequence[i].data should be evaluated before being passed to the indexing function, so I am not sure how there is any difference at all as far as constructing the “score” computation graph is concerned.", "isAccepted": false, "likes": null, "poster": "rguthrie3" }, { "contents": "Updating to 0.1.10 should fix it.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
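Independently of the 0.1.10 bug fixed above, the underlying point, that indexing a parameter tensor with a plain Python integer keeps the parameter in the autograd graph, can be checked with a small sketch. The transition-matrix setup below is a simplified stand-in for the modules described in the question, not the original code:

<SCODE>
import torch

params = torch.rand(3, 3, requires_grad=True)   # params[i][j]: score of moving to i from j
tags = torch.tensor([0, 2, 2, 1])               # an example tag sequence

score = 0.0
for i in range(1, len(tags)):
    # indexing with plain ints is differentiable with respect to params,
    # even though the indices themselves carry no gradient
    score = score + params[tags[i].item()][tags[i - 1].item()]

score.backward()
print(params.grad)   # counts of each (to, from) transition used; zeros elsewhere
<ECODE>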