Global max pooling?
null
[ { "contents": "Hi, I am looking for the global max pooling layer. Because in my case, the input shape is uncertain and I want to use global max pooling to make their shape consistent. But I do not find this feature in pytorch?", "isAccepted": false, "likes": 1, "poster": "Fangzou_Liao" }, { "contents": "You can use the functional interface of max pooling for that. In you forward function: <SCODE>import torch.nn.functional as F\noutput = F.max_pool2d(input, kernel_size=input.size()[2:])\n<ECODE>", "isAccepted": false, "likes": 19, "poster": "fmassa" }, { "contents": "You can do something simpler like <SCODE>import torch\noutput, _ = torch.max(input, 1)\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "Ilya_Ezepov" } ]
false
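A minimal runnable sketch of the functional approach suggested above; the tensor shapes are illustrative assumptions. F.adaptive_max_pool2d (available in later PyTorch releases) gives the same result with a fixed 1x1 output for any input size:
<SCODE>import torch
import torch.nn.functional as F

x = torch.randn(8, 64, 13, 17)  # feature maps with arbitrary spatial size

# global max pooling via the functional interface, as suggested above
out1 = F.max_pool2d(x, kernel_size=x.size()[2:])   # -> (8, 64, 1, 1)

# same result with adaptive pooling: output is fixed at 1x1 for any input
out2 = F.adaptive_max_pool2d(x, output_size=1)     # -> (8, 64, 1, 1)

print(torch.equal(out1, out2))  # True
<ECODE>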
Any PyTorch function can work as Keras’ Timedistributed?
vision
[ { "contents": "Thanks in advance for your patience and help!!", "isAccepted": false, "likes": 2, "poster": "HONGYUAN_ZHU" }, { "contents": "from the top of my head, I think that the model in Sean Naren’s deepspeech.pytorch does something very similar to what you want to achieve with the SequenceWise class: Best regards Thomas", "isAccepted": false, "likes": 2, "poster": "tom" }, { "contents": "Hi, Tom. Thanks for your sharing! I’ll try to look into that! Bests HY", "isAccepted": false, "likes": null, "poster": "HONGYUAN_ZHU" }, { "contents": "Hey, I developed a PyTorch module that mimics the TimeDistributed wrapper of Keras a few days ago: <SCODE>import torch.nn as nn\n\n\nclass TimeDistributed(nn.Module):\n def __init__(self, module, batch_first=False):\n super(TimeDistributed, self).__init__()\n self.module = module\n self.batch_first = batch_first\n\n def forward(self, x):\n\n if len(x.size()) <= 2:\n return self.module(x)\n\n # Squash samples and timesteps into a single axis\n x_reshape = x.contiguous().view(-1, x.size(-1)) # (samples * timesteps, input_size)\n\n y = self.module(x_reshape)\n\n # We have to reshape Y\n if self.batch_first:\n y = y.contiguous().view(x.size(0), -1, y.size(-1)) # (samples, timesteps, output_size)\n else:\n y = y.view(-1, x.size(1), y.size(-1)) # (timesteps, samples, output_size)\n\n return y<ECODE>", "isAccepted": false, "likes": 24, "poster": "miguelvr" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "HONGYUAN_ZHU" }, { "contents": "Could you give me some example on how to use this function to construct time distributed cnn + lstm? Several images will be computed by CNN and feed to LSTM all together. <SCODE>class CNN(nn.Module):\n def __init__(self):\n super(CNN, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n #x = F.relu(self.fc1(x))\n #x = F.dropout(x, training=self.training)\n #x = self.fc2(x)\n #return F.log_softmax(x, dim=1)\n return x\n\n\nclass Combine(nn.Module):\n def __init__(self):\n super(Combine, self).__init__()\n self.cnn = CNN()\n self.rnn = nn.LSTM(320, 10, 2)\n\n def forward(self, x):\n x = self.cnn(x)\n x = self.rnn(x)\n return F.log_softmax(x, dim=1)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Jacky_Liu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Mac_Yeh" }, { "contents": "For most cases, this function is not needed anymore. The Dense layer now supports 3 dimensional inputs, for example.", "isAccepted": false, "likes": 2, "poster": "miguelvr" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Mac_Yeh" }, { "contents": "Is putting a Dense layer after an RNN the same as applying a Dense layer to each time step though? Like in the first case don’t the time steps connect and mix together?", "isAccepted": false, "likes": 1, "poster": "kenfehling" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kota_Mori" }, { "contents": "Yes definitely, it still can be useful for other cases", "isAccepted": false, "likes": null, "poster": "miguelvr" }, { "contents": "thanks. 
I was looking for the timedistributed equivalent in pytorch and found your code…", "isAccepted": false, "likes": null, "poster": "satheesh" }, { "contents": "Hi Miguelvr, But my doubt is When I the print the weight parameters of NN. Time Distributor layer prints two times as follows fc4.weight torch.Size([1, 128]) fc4.bias torch.Size([1]) timedist.module.weight torch.Size([1, 128]) timedist.module.bias torch.Size([1]) is it correct or any mistakes in the implementation. Thanks", "isAccepted": false, "likes": null, "poster": "Thiyagu" }, { "contents": "Every nn.Linear object had a weight and a bias, so that’s correct", "isAccepted": false, "likes": null, "poster": "miguelvr" }, { "contents": "Thank you for your reply", "isAccepted": false, "likes": null, "poster": "Thiyagu" }, { "contents": "Can you provide a small working example where this works? I have an input of the shape (samples, timesteps, channels, width, height). With your code, it combines all the dimensions except the last one which becomes input size as per your x_reshape. Then, it doesn’t work with any of the layers, giving a size mismatch error.", "isAccepted": false, "likes": null, "poster": "hash-ir" }, { "contents": "Thanks a lot for your nice explanation. I have a novice confusion: as batch samples and timesteps are squashed, won’t it have any problem in LSTM sequential learning? i.e when the sequence is reshaped to (samples, timesteps, output_size), will it retain the sequential (timesteps) features ordering for each sample as it was before squashing?", "isAccepted": false, "likes": null, "poster": "akashs" }, { "contents": "Did you resolve about the structure of your network on PyTorch? I am facing exactly the same problem and I am wondering if you can share the code of the network. I have to develop a CNN+LSTM network for video sequence classification.", "isAccepted": false, "likes": null, "poster": "barloccia" }, { "contents": "2d CNN accepts 4d inputs only so you can pass the 5d tensor ( batch , timesteps , channles ,height ,width) as 4d tensor <SCODE>view(batch*timesteps,c,h,w)<ECODE>", "isAccepted": false, "likes": 1, "poster": "IliasPap" } ]
false
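A short usage sketch of the TimeDistributed wrapper defined in the thread above, plus the fold-timesteps-into-batch trick from the last reply; all layer sizes and shapes here are illustrative assumptions:
<SCODE>import torch
import torch.nn as nn

# per-timestep Linear via the TimeDistributed wrapper defined above
td = TimeDistributed(nn.Linear(32, 10), batch_first=True)
x = torch.randn(4, 7, 32)        # (samples, timesteps, input_size)
y = td(x)                        # (samples, timesteps, 10)

# for a CNN over image sequences, fold timesteps into the batch instead
frames = torch.randn(4, 7, 3, 28, 28)        # (batch, timesteps, c, h, w)
b, t, c, h, w = frames.size()
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
feat = conv(frames.view(b * t, c, h, w))     # (batch*timesteps, 16, 28, 28)
feat = feat.view(b, t, -1)                   # (batch, timesteps, features) for an LSTM
<ECODE>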
About MaxPool and MaxUnpool
null
[ { "contents": "Hi all, I have got a problem about the pooling function, the code were shown below: <SCODE>input = Variable(torch.rand(1,1,64,64))\npool1 = nn.MaxPool2d(2, stride=2, padding=1, return_indices=True)\npool2 = nn.MaxPool2d(2, stride=2, return_indices=True)\nunpool1= nn.MaxUnpool2d(2, stride=2)\nunpool2= nn.MaxUnpool2d(2, stride=2, padding=1)\n\noutput1, indices1 = pool1(input)\noutput2, indices2 = pool2(output1)\n\noutput3 = unpool1(output2,indices2)\noutput4 = unpool2(output3, indices1)\n<ECODE> How to solve this problem, I want to used the pooling function to get the ouput4 which with the same dimension as input. Thanks a lot!", "isAccepted": true, "likes": 3, "poster": "KwuJohn" }, { "contents": "<SCODE>output3 = unpool1(output2, indices2, output_size=output1.size())\noutput4 = unpool2(output3, indices1, output_size=input.size())\n<ECODE>", "isAccepted": true, "likes": 8, "poster": "fmassa" }, { "contents": "Hello, even when I’ve specified the size I want, I still got this error…QAQ", "isAccepted": true, "likes": null, "poster": "Yilin_Liu" }, { "contents": "Works well for me.Thanks a lot", "isAccepted": true, "likes": null, "poster": "coya" }, { "contents": "I was playing around with max_pooling unpooling and the results I found are somewhat weird. Please let me know whether this is the desired behavior of unpooling or am I doing something wrong. These are the results of input size 320\n\n These are the results of input size 128\n\n", "isAccepted": true, "likes": 1, "poster": "kshitijpatil09" }, { "contents": "Assuming the kernel size for the pooling operations is 2x2 with stride=(2,2) I guess the output of the MaxPool layer isn’t that confusing if you know how maxpooling works. The positions of the pixels from the Maxpool layer’s output are being placed according to the indices that are being outputted by the MaxPool layer. So it’s quite likely that you won’t get a chessboad like picture back. (That’s what I’ve thought at the beginning.)", "isAccepted": true, "likes": null, "poster": "DezzardHD" } ]
true
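The accepted fix, expanded into a self-contained sketch (the random 1x1x64x64 input mirrors the question):
<SCODE>import torch
import torch.nn as nn

x = torch.rand(1, 1, 64, 64)
pool1 = nn.MaxPool2d(2, stride=2, padding=1, return_indices=True)
pool2 = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool1 = nn.MaxUnpool2d(2, stride=2)
unpool2 = nn.MaxUnpool2d(2, stride=2, padding=1)

out1, idx1 = pool1(x)     # (1, 1, 33, 33)
out2, idx2 = pool2(out1)  # (1, 1, 16, 16)

# output_size resolves the shape ambiguity of unpooling
out3 = unpool1(out2, idx2, output_size=out1.size())
out4 = unpool2(out3, idx1, output_size=x.size())
print(out4.size())  # torch.Size([1, 1, 64, 64])
<ECODE>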
How to concatenate list of pytorch tensors?
null
[ { "contents": "This is my current solution <SCODE> data = th.zeros([len(imgs), imgs[0].size()[0], imgs[0].size()[1], imgs[0].size()[2]])\n for i, img in enumerate(imgs): \n print(img.size())\n print(img.type())\n data[i] = img<ECODE>", "isAccepted": false, "likes": 1, "poster": "erogol" }, { "contents": "", "isAccepted": false, "likes": 26, "poster": "fmassa" }, { "contents": "just in case you were wondering about the difference: stack cat", "isAccepted": false, "likes": 18, "poster": "Brando_Miranda" }, { "contents": "What if A is of shape (1,3,4) and B is (3,4)? I want the result to be (2,3,4). How do I do this?", "isAccepted": false, "likes": null, "poster": "Hiperdyne19012" }, { "contents": "<SCODE>a = torch.rand(1, 3, 4)\nprint(a.shape)\nb = torch.rand(3, 4)\nprint(b.shape)\nb = b.unsqueeze(0)\nprint(b.shape)\nc = torch.cat([a, b], dim=0)\nprint(c.shape)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "harsha_g" }, { "contents": "Thank you, it works.", "isAccepted": false, "likes": null, "poster": "Hiperdyne19012" } ]
false
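A compact sketch of the stack/cat distinction discussed above; the example shapes are assumptions:
<SCODE>import torch

imgs = [torch.rand(3, 4, 4) for _ in range(10)]

stacked = torch.stack(imgs)      # new leading dim: (10, 3, 4, 4)
catted = torch.cat(imgs, dim=0)  # joins an existing dim: (30, 4, 4)

# mixing (1, 3, 4) and (3, 4) needs an unsqueeze first, as noted above
a, b = torch.rand(1, 3, 4), torch.rand(3, 4)
c = torch.cat([a, b.unsqueeze(0)], dim=0)  # (2, 3, 4)
<ECODE>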
How to merge by avg multiple inputs to layer?
null
[ { "contents": "<SCODE> def forward(self, inputs):\n outputs = []\n for x in inputs:\n x = self.conv_column(x)\n x = self.clf_layer(x)\n outputs.append(x) \n # avg all outputs here !\n return outputs<ECODE>", "isAccepted": false, "likes": null, "poster": "erogol" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
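One way to fill in the averaging step marked by the comment above, as a sketch (output shapes are assumptions): stack the per-column outputs and average over the new dimension.
<SCODE>import torch

outputs = [torch.rand(8, 10) for _ in range(3)]  # one output per input column
avg = torch.stack(outputs, dim=0).mean(dim=0)    # (8, 10), elementwise average
<ECODE>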
Accessing the Single-Segment Buffer Interface
null
[ { "contents": "Hi everyone, I was wondering if there was a way to access the single-segment buffer interface of CUDA Tensors. For example, when using pyCUDA, I used to do: <SCODE>def buff(ary):\n return ary.gpudata.as_buffer(ary.nbytes)\n<ECODE> I’d like to use mpi4py with Pytorch, and while I can use the numpy interface for CPU tensors, this is not possible for GPU ones. This would be a nice trick while we wait for the distributed interface to be polished.", "isAccepted": false, "likes": null, "poster": "seba-1511" }, { "contents": "you cannot use the buffer interface for CUDA tensors, but you can get the GPU pointer as an int: <SCODE>x = torch.randn(10).cuda()\nprint(x.data_ptr())\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thanks for the fast answer.", "isAccepted": false, "likes": null, "poster": "seba-1511" }, { "contents": "do you really need a PyBuffer? We attempted implementing the Buffer interface, but it is slightly different across many versions of python and impossible to implement without thousands of lines of code. Cant you work with the data pointer and size of tensor ?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "The advantage of the Buffer interface is that mpi4py can take advantage of it. (Essentially making the whole of MPI available) I don’t know of a way to use it directly with address and length. Maybe using custom DataTypes, but I’d need to investigate that. Since I can afford a hacky solution (only need send/recv), I’ll try that while waiting for THDP. In any case, I’ll keep this thread up-to-date.", "isAccepted": false, "likes": null, "poster": "seba-1511" }, { "contents": "Any updates on this? I’m interested in using mpi4py with Tensors. Does Tensor use the PyBuffer interface?", "isAccepted": false, "likes": null, "poster": "stsievert" }, { "contents": "Not that I am aware of. I (quickly) tried to implement a buffer interface at that time, but was not successful.", "isAccepted": false, "likes": null, "poster": "seba-1511" }, { "contents": "Does PyTorch work with MPI like other programs would? i.e., would the below work <SCODE>$ mpiexec -n 4 python torch_script.py\n<ECODE> <SCODE>torch.distributed.init_process_group(backend='mpi', world_size=4)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "stsievert" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "To use MPI you have to build from source, I think it’s kind of annoying.", "isAccepted": false, "likes": null, "poster": "Stone" }, { "contents": "So, is there a workaround? torch.distributed MPI operations are very limited (+requires building from source), while mpi4py supports more operations and seems like excellent library.", "isAccepted": false, "likes": null, "poster": "seliad" } ]
false
Memory usage of a python process increases slowly
null
[ { "contents": "I tried memory_profiler and objgraph but couldn’t find where the leak occurs, It seems to me things are ok but the garbage collected memory isn’t allocated back to OS or something. I tried running “mnist_hogwild” and it doesn’t happen there though. Any help will be appreciated! log: <SCODE>[2017-03-25 21:31:25] INFO [MainThread:67] Time 00h 00m 00s, episode reward 0.0, episode length 102\n[2017-03-25 21:31:44] INFO [MainThread:53] Memory usage of one proc: 110.3203125 (mb)\n[2017-03-25 21:32:04] INFO [MainThread:53] Memory usage of one proc: 112.55078125 (mb)\n[2017-03-25 21:32:24] INFO [MainThread:53] Memory usage of one proc: 114.15625 (mb)\n[2017-03-25 21:32:25] INFO [MainThread:67] Time 00h 01m 01s, episode reward 1.0, episode length 100\n[2017-03-25 21:32:43] INFO [MainThread:53] Memory usage of one proc: 115.5703125 (mb)\n[2017-03-25 21:33:03] INFO [MainThread:53] Memory usage of one proc: 117.25390625 (mb)\n[2017-03-25 21:33:23] INFO [MainThread:53] Memory usage of one proc: 118.30859375 (mb)\n[2017-03-25 21:33:26] INFO [MainThread:67] Time 00h 02m 02s, episode reward 0.0, episode length 100\n[2017-03-25 21:33:44] INFO [MainThread:53] Memory usage of one proc: 119.89453125 (mb)\n[2017-03-25 21:34:06] INFO [MainThread:53] Memory usage of one proc: 121.17578125 (mb)\n[2017-03-25 21:34:27] INFO [MainThread:53] Memory usage of one proc: 122.94921875 (mb)\n[2017-03-25 21:34:27] INFO [MainThread:67] Time 00h 03m 02s, episode reward 0.0, episode length 100\n[2017-03-25 21:34:48] INFO [MainThread:53] Memory usage of one proc: 124.7265625 (mb)\n[2017-03-25 21:35:09] INFO [MainThread:53] Memory usage of one proc: 126.6015625 (mb)\n[2017-03-25 21:35:28] INFO [MainThread:67] Time 00h 04m 03s, episode reward 0.0, episode length 100\n[2017-03-25 21:35:31] INFO [MainThread:53] Memory usage of one proc: 128.44140625 (mb)\n[2017-03-25 21:35:52] INFO [MainThread:53] Memory usage of one proc: 130.33203125 (mb)\n[2017-03-25 21:36:13] INFO [MainThread:53] Memory usage of one proc: 131.453125 (mb)\n[2017-03-25 21:36:28] INFO [MainThread:67] Time 00h 05m 04s, episode reward 0.0, episode length 100\n[2017-03-25 21:36:35] INFO [MainThread:53] Memory usage of one proc: 133.35546875 (mb)\n[2017-03-25 21:36:56] INFO [MainThread:53] Memory usage of one proc: 134.6015625 (mb)\n[2017-03-25 21:37:18] INFO [MainThread:53] Memory usage of one proc: 136.1953125 (mb)\n[2017-03-25 21:37:29] INFO [MainThread:67] Time 00h 06m 05s, episode reward 0.0, episode length 100\n[2017-03-25 21:37:39] INFO [MainThread:53] Memory usage of one proc: 137.76171875 (mb)\n[2017-03-25 21:38:00] INFO [MainThread:53] Memory usage of one proc: 139.48046875 (mb)\n[2017-03-25 21:38:22] INFO [MainThread:53] Memory usage of one proc: 140.7109375 (mb)\n[2017-03-25 21:38:30] INFO [MainThread:67] Time 00h 07m 05s, episode reward 1.0, episode length 102\n[2017-03-25 21:38:43] INFO [MainThread:53] Memory usage of one proc: 142.203125 (mb)\n[2017-03-25 21:39:04] INFO [MainThread:53] Memory usage of one proc: 144.05078125 (mb)\n[2017-03-25 21:39:26] INFO [MainThread:53] Memory usage of one proc: 145.82421875 (mb)\n[2017-03-25 21:39:31] INFO [MainThread:67] Time 00h 08m 06s, episode reward 0.0, episode length 100\n[2017-03-25 21:39:47] INFO [MainThread:53] Memory usage of one proc: 147.72265625 (mb)\n[2017-03-25 21:40:09] INFO [MainThread:53] Memory usage of one proc: 148.81640625 (mb)\n[2017-03-25 21:40:30] INFO [MainThread:53] Memory usage of one proc: 150.515625 (mb)\n[2017-03-25 21:40:31] INFO [MainThread:67] Time 00h 09m 07s, 
episode reward 1.0, episode length 100\n[2017-03-25 21:40:51] INFO [MainThread:53] Memory usage of one proc: 152.44140625 (mb)\n[2017-03-25 21:41:12] INFO [MainThread:53] Memory usage of one proc: 153.80859375 (mb)\n[2017-03-25 21:41:32] INFO [MainThread:67] Time 00h 10m 08s, episode reward 0.0, episode length 100\n[2017-03-25 21:41:34] INFO [MainThread:53] Memory usage of one proc: 155.18359375 (mb)\n[2017-03-25 21:41:55] INFO [MainThread:53] Memory usage of one proc: 156.71484375 (mb)\n[2017-03-25 21:42:16] INFO [MainThread:53] Memory usage of one proc: 158.40234375 (mb)\n[2017-03-25 21:42:33] INFO [MainThread:67] Time 00h 11m 08s, episode reward 1.0, episode length 109\n[2017-03-25 21:42:38] INFO [MainThread:53] Memory usage of one proc: 160.1328125 (mb)\n[2017-03-25 21:42:59] INFO [MainThread:53] Memory usage of one proc: 161.53515625 (mb)\n[2017-03-25 21:43:21] INFO [MainThread:53] Memory usage of one proc: 163.41015625 (mb)<ECODE>", "isAccepted": false, "likes": null, "poster": "scientist1642" }, { "contents": "I so far found that the torch precompiled pip version worked better than my self-compiled one, but I have not found out yet why.", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "scientist1642" }, { "contents": "We’ve had a memory leak in the numpy conversion and your version didn’t have it. Updating PyTorch might help you.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "scientist1642" }, { "contents": "I am also facing the same problem for maskrcnn benchmark. I am running torch = 1.4.0", "isAccepted": false, "likes": null, "poster": "ankitsharma07" } ]
false
Why different results when multiplying in CPU than in GPU?
null
[ { "contents": "I am not sure if this is Pytorch related…apologies if not. <SCODE>In [1]: %cpaste\nPasting code; enter '--' alone on the line to stop or use Ctrl-D.\nimport numpy as np\nimport torch\n\na = torch.from_numpy(np.random.rand(5000,100000).astype(np.float32))\nb = torch.from_numpy(np.random.rand(5000,100000).astype(np.float32))\n\nc = a.cuda()\nd = b.cuda()\n\nprint(a.dot(b))\nprint(c.dot(d))\n:<EOF>\n124996952.0\n124997016.0\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "tunante" }, { "contents": "Hi, (I have also run into this e.g. when computing the mean over all images on a dataset with float32.)", "isAccepted": false, "likes": 2, "poster": "tom" }, { "contents": "Thanks, you are right about the float64. The number of different digits is similar (depends on the experiment), but they are way more closer numbers. <SCODE>import numpy as np\nimport torch\n\na = torch.from_numpy(np.random.rand(5000,100000).astype(np.float64))\nb = torch.from_numpy(np.random.rand(5000,100000).astype(np.float64))\n\nc = a.cuda()\nd = b.cuda()\n\nprint(a.dot(b))\nprint(c.dot(d))::::::::::\n:<EOF>\n125000868.65247717\n125000868.65247723\n<ECODE>", "isAccepted": false, "likes": null, "poster": "tunante" }, { "contents": "CPU: <SCODE>(((a + b) + c) + d)\n<ECODE> GPU: <SCODE>((a+b) + (c+d))\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "kmichaelkills" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tunante" }, { "contents": "it is because GPUs have thousands of cores, and doing a map-reduce style computation best exploits the parallelism of GPUs", "isAccepted": false, "likes": 4, "poster": "smth" } ]
false
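The reduction-order effect described above does not need a GPU to observe: float32 addition is not associative, so sequential and pairwise summation orders round differently. A small NumPy-only illustration (the array size is arbitrary):
<SCODE>import numpy as np

x = np.random.rand(100000).astype(np.float32)

seq = np.float32(0.0)
for v in x:          # strict left-to-right: (((a + b) + c) + d) ...
    seq += v

pair = x.reshape(-1, 2).sum(axis=1).sum()  # a pairwise-style reduction

print(seq, pair, abs(seq - pair))  # usually differs in the last digits
<ECODE>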
Problem reshaping an LSTM output with view() when batch_first=True
null
[ { "contents": "Hey there, I’m trying to implement a pytorch module that wraps any module and applies the wrapped module’s operation to every time step of the input. This is pretty much the same as Keras’ TimeDistributed wrapper. To do so, I want to simply reshape the input to two dimensions, apply the operation, and then reshape it back. RuntimeError: input is not contiguous at /Users/soumith/code/pytorch-builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:231 Hopefully, someone here can help me solve this problem… Cheers", "isAccepted": false, "likes": 1, "poster": "miguelvr" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "jekbradbury" }, { "contents": "@jekbradbury <SCODE>rnn = th.nn.LSTM(5, 10, 1, batch_first=True) # batch_first\n\nx = th.autograd.Variable(th.randn(1, 2, 5)) # batch size 1\nout, state = rnn(x)\nprint(out.is_contiguous()) # TRUE\n\nx = th.autograd.Variable(th.randn(2, 2, 5)) # batch size 2\nout, state = rnn(x)\nprint(out.is_contiguous()) # FALSE\n\nrnn = th.nn.LSTM(5, 10, 1) # seq_len first\n\nx = th.autograd.Variable(th.randn(2, 1, 5)) # batch size 1\nout, state = rnn(x)\nprint(out.is_contiguous()) # TRUE\n\nx = th.autograd.Variable(th.randn(2, 2, 5)) # batch size 2\nout, state = rnn(x)\nprint(out.is_contiguous()) # TRUE<ECODE>", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" } ]
false
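A minimal sketch of the usual fix implied above: force a contiguous copy before view(). Layer sizes are assumptions.
<SCODE>import torch
import torch.nn as nn

rnn = nn.LSTM(5, 10, 1, batch_first=True)
x = torch.randn(2, 3, 5)  # (batch, seq_len, input_size)
out, state = rnn(x)

# out may be non-contiguous with batch_first=True; copy before view()
flat = out.contiguous().view(-1, out.size(-1))  # (batch * seq_len, hidden)
<ECODE>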
How to turn list of varying length tensor into a tensor
null
[ { "contents": "It seems that I cannot create tensor with varying length on any dimension.", "isAccepted": false, "likes": 2, "poster": "Shihan_su" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "smth" }, { "contents": "These two functions help you pack (already) padded sequences and pad (already) packed sequences. I think the purpose is not to help you pad sequences, but to save some spaces. Check out the following example:", "isAccepted": false, "likes": 1, "poster": "ZeweiChu" }, { "contents": "", "isAccepted": false, "likes": 9, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "As illustrated in the release notes, PackedSequence holds data and a list of sequence sizes of a packed sequence batch. Then why would we have to first pad the seq to equal length manually and then pass it to pack_padded func to generate this PackedSequence object? Wouldn’t it be easier to just construct the object directly from the original seq without manual padding?", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "lgelderloos" }, { "contents": "I’m not following up on this, but I agree with you on this.", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "nilbot" }, { "contents": "For anyone wondering if PyTorch has created a function that takes a list of tensors and pads them to be the same length, the following function does the trick: Example: <SCODE>>>> import torch\n>>> l = [torch.Tensor([1., 2.]), torch.Tensor([3.])]\n>>> torch.nn.utils.rnn.pad_sequence(l, batch_first=True, padding_value=0)\ntensor([[1., 2.],\n [3., 0.]])\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "RylanSchaeffer" } ]
false
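A sketch of the pad-then-pack pipeline this thread converges on; enforce_sorted is from later releases than the thread, and all sizes are assumptions:
<SCODE>import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

seqs = [torch.randn(4, 5), torch.randn(2, 5), torch.randn(3, 5)]  # varying lengths
lengths = torch.tensor([4, 2, 3])

padded = pad_sequence(seqs, batch_first=True)  # (3, 4, 5), zero-padded
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)

rnn = nn.LSTM(5, 8, batch_first=True)
out_packed, _ = rnn(packed)                    # padding is never fed to the LSTM
out, out_lens = pad_packed_sequence(out_packed, batch_first=True)  # (3, 4, 8)
<ECODE>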
ImageNet example is crashing
null
[ { "contents": "<SCODE>Process Process-4:\nTraceback (most recent call last):\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n self.run()\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/process.py\", line 93, in run\n self._target(*self._args, **self._kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py\", line 36, in _worker_loop\n data_queue.put((idx, samples))\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/queues.py\", line 349, in put\n obj = ForkingPickler.dumps(obj)\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/reduction.py\", line 50, in dumps\n cls(buf, protocol).dump(obj)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/multiprocessing/reductions.py\", line 113, in reduce_storage\n fd, size = storage._share_fd_()\nRuntimeError: unable to write to file </torch_6487_1133870694> at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/TH/THAllocator.c:267<ECODE>", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "Hi, Best regards Thomas", "isAccepted": false, "likes": 2, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "Nord786" }, { "contents": "This solved it for me. Thanks!", "isAccepted": false, "likes": 1, "poster": "ml9951" } ]
false
Why I can’t use CUDA7.5 when I try to train by GPU using PyTorch
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "alan_ayu" }, { "contents": "Alan, for your particular GPU, you will have to build pytorch from source. Your GPU is too old to be supported by the binaries.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "thank you,i have build it from source and it works now", "isAccepted": false, "likes": null, "poster": "alan_ayu" } ]
false
How convert a list of torch.LongTensor to one torch.LongTensor?
null
[ { "contents": "I want to pull a batch of Variable to one Variable, or one Tensor?[ type = ‘list’ ----> type = ‘torch.LongTensor’] How can I deal with this problem.", "isAccepted": false, "likes": 4, "poster": "zeng" }, { "contents": "Hi,", "isAccepted": false, "likes": 7, "poster": "albanD" }, { "contents": "You may want to use torch.stack and torch.squeeze in conjunction. The first converts a list of tensors to a single stacked tensor, and the latter removes all dimensions=1, thus making your data compact.", "isAccepted": false, "likes": 3, "poster": "SpandanMadan" } ]
false
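The stack-and-squeeze suggestion above as a tiny runnable sketch:
<SCODE>import torch

batch = [torch.LongTensor([i]) for i in range(4)]  # list of 1-element tensors
merged = torch.stack(batch).squeeze(1)             # one LongTensor of size 4
print(merged)  # tensor([0, 1, 2, 3])
<ECODE>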
Training on Caltech101: Not Learning
null
[ { "contents": "Dear friends, I am trying to adapt the ImageNet example to use PyTorch to train on Caltech101, an also important dataset. I am using ImageFolder just like it is used in ImageNet example, but the model VGG from vision.models is not learning (Loss is not going to zero): Question: Do I need to explicit convert JPG to BMP in some way? Or this should not be necessary since we are using PIL library anyway? Thanks in advance, David", "isAccepted": false, "likes": null, "poster": "dlmacedo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Were you able to do this at the end? I am trying to do the same. Thanks.", "isAccepted": false, "likes": null, "poster": "QuantScientist" } ]
false
Automatic handling of data copying to/from GPU?
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "tunante" }, { "contents": "there is no such inbuilt method.", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
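There was no built-in for this at the time of the thread; later releases reduced the manual copying to a single device handle. A sketch of that pattern:
<SCODE>import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(10, 2).to(device)  # move the model once
x = torch.randn(4, 10).to(device)          # inputs follow the same handle
out = model(x)                             # runs on GPU if one is available
<ECODE>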
How to reach model attributes wrapped by nn.DataParallel?
null
[ { "contents": "What is the way to reach model attributes inside DataParallel?", "isAccepted": false, "likes": 10, "poster": "erogol" }, { "contents": "i come across the same problem as u met,how do u solve it,thank u", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "You can access them by using <SCODE>model.module.rnn\n<ECODE> but it’s not exposed in the public API, so it might change in the future.", "isAccepted": false, "likes": 20, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 14, "poster": "oneTaken" }, { "contents": "What if I chage the attribute by this way? Will the models on other GPUs change synchronously?", "isAccepted": false, "likes": 6, "poster": "MrTuo" }, { "contents": "<SCODE> device = \"cuda:0\"\n\n graph = Text2Mel().to(device)\n # set the training flag\n graph.train()\n<ECODE> i tried ‘dataparallel’ but i’m noob in pytorch and alway i have this error: <SCODE>AttributeError: 'DataParallel' object has no attribute 'attention'\n<ECODE> I would appreciate any help, thanks!", "isAccepted": false, "likes": 1, "poster": "Victor_Maldonado" }, { "contents": "<SCODE># simple fix for dataparallel that allows access to class attributes\nclass MyDataParallel(torch.nn.DataParallel):\n def __getattr__(self, name):\n try:\n return super().__getattr__(name)\n except AttributeError:\n return getattr(self.module, name)\n \n# def __setattr__(self, name, value):\n# try:\n# return super().__setattr__(name, value)\n# except AttributeError:\n# return setattr(self.module, name, value)\n\ndef load_weights(base_model_name, model, epoch):\n \"\"\"\n Loads previously trained weights into a model given an epoch and the model itself\n :param base_model_name: name of base model in training session\n :param model: the model to load weights into\n :param epoch: what epoch of training to load\n :return: the model with weights loaded in\n \"\"\"\n\n pretrained_dict = torch.load('{}/{}_{}_{}.pt'.format(args.model_path, base_model_name, epoch, args.lr))['state_dict']\n model_dict = model.state_dict()\n pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}\n model_dict.update(pretrained_dict)\n model.load_state_dict(model_dict)\n\n return model\n\nmodel = r2plus1d50()\nmodel = MyDataParallel(model, device_ids=[0, 1])\n\n# Copying the layer from epoch 13 to model\nepoch = 13\nhead_model = r2plus1d50()\nhead_model = MyDataParallel(head_model, device_ids=[0, 1])\nhead_model = load_weights(base_model_name, head_model, epoch)\n# head1 is the name of the attribute that we want to copy over\nhead = getattr(head_model, \"head1\")\n\n# set attribute to change the attribute\nsetattr(model.module, \"head1\", head)\n\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "hobi" } ]
false
Cublas runtime error : library not initialized at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCGeneral.c:383
null
[ { "contents": "Thank you", "isAccepted": false, "likes": null, "poster": "Dennis" }, { "contents": "I am also getting the same error when I run my code in multiple GPUs. But the error is not consistent, sometime I get it, sometime not.", "isAccepted": false, "likes": 1, "poster": "wasiahmad" }, { "contents": "I am facing the same error. So is this related with machines that have multiple GPUs?", "isAccepted": false, "likes": null, "poster": "Yufeng_Ma" }, { "contents": "I think I’ve found the workaround. When we do .cuda(), we may specify the GPU device we want to load data or model to make sure they are on the same GPU. For example, <SCODE>net = Net()\nnet = net.cuda( 0 )\n\ninput = Variable(torch.randn(1, 1, 32, 32))\ninput = input.cuda( 0 )\noutput = net(input)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Yufeng_Ma" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Rohith_AP" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ShawnGuo" }, { "contents": "I’m also facing the same issue. Although I removed the cache files in .nv directory, same error would be raised when running my code.", "isAccepted": false, "likes": null, "poster": "ShawnGuo" }, { "contents": "Thanks!", "isAccepted": false, "likes": 5, "poster": "ws123" }, { "contents": "any help ?", "isAccepted": false, "likes": null, "poster": "vince62s" }, { "contents": "CUDA_VISIBLE_DEVICES is 0-indexed. PyTorch is also 0-indexed.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vince62s" }, { "contents": "Steven", "isAccepted": false, "likes": null, "poster": "lifematrix" }, { "contents": "I’m finetuning the vgg19_bn on my own dataset, and I faced the same problem too.", "isAccepted": false, "likes": null, "poster": "oneTaken" }, { "contents": "Thanks! It worked for me.", "isAccepted": false, "likes": null, "poster": "zjtgit" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "guoqiang_Wei" }, { "contents": "when I use pytorch 0.3, it works. But when I use 0.4 compiled from master, my code throws this error. remove nv doesn’t work.", "isAccepted": false, "likes": 3, "poster": "tiantong" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "xypan1232" }, { "contents": "whats the source of this problem?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "is there not a way to set these indices globally once for everything?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "I think: <SCODE>export CUDA_VISIBLE_DEVICES=$i\n<ECODE> is what Im looking for.", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" } ]
false
Does RNN work with DataParallel wrapper?
null
[ { "contents": "If I wrap RNN with DataParallel, it seems like output is not consistent with the target size. For instance, if the batch size is 32 and 2 gpus are active then 16 instances per gpu are processed. However, these instances should be aggregated in the end to get the whole batch of 32 instances for loss function. But when I use rnn, aggregation is not happening and model outputs only 16 instances which is conflicting with the target value size. I don’t know it makes sense ?", "isAccepted": false, "likes": null, "poster": "erogol" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "This is strange. If custom RNN module returns the both RNN state and the output, DataParallel does not work and the problem above appears. If you just return output of RNN then things are fine", "isAccepted": false, "likes": null, "poster": "erogol" } ]
false
GPU OOM with Simple VGG net
null
[ { "contents": "Any suggestions are appreciated. <SCODE>\nimport os\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nimport numpy as np\n\ndef set_trainable(model, requires_grad):\n for param in model.parameters():\n param.requires_grad = requires_grad\n\nclass VGG16(nn.Module):\n def __init__(self, bn=False):\n super(VGG16, self).__init__()\n self.conv1 = nn.Sequential(nn.Conv2d(3, 64, kernel_size = 3, stride = 1, padding=1),\n #nn.ReLU(inplace=True)\n nn.ReLU(),\n nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(2))\n self.conv2 = nn.Sequential(nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(2))\n set_trainable(self.conv1, requires_grad=False)\n set_trainable(self.conv2, requires_grad=False)\n self.conv3 = nn.Sequential(nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(2))\n self.conv4 = nn.Sequential(nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(2))\n self.conv5 = nn.Sequential(nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),\n nn.ReLU(),\n nn.MaxPool2d(4))\n self.fc6_new = nn.Linear(512 * 7 * 7, 4096)\n self.fc7_new = nn.Linear(4096, 4096)\n self.score_fc = nn.Linear(4096, 150)\n self.relu = nn.ReLU(inplace=True)\n def forward(self, im_data):\n x = self.conv1(im_data)\n x = self.conv2(x)\n x = self.conv3(x)\n x = self.conv4(x)\n x = self.conv5(x)\n x = x.view(x.size()[0], -1)\n x = self.fc6_new(x)\n x = self.relu(x)\n x = self.fc7_new(x)\n x = self.relu(x)\n x = self.score_fc(x)\n return x\n\ndef train():\n net = VGG16()\n net = net.cuda()\n lr = 0.0001\n optimizer = torch.optim.SGD([\n {'params': net.conv3.parameters()},\n {'params': net.conv4.parameters()},\n {'params': net.conv5.parameters()},\n {'params': net.fc6_new.parameters(), 'lr': lr * 10.0},\n {'params': net.fc7_new.parameters(), 'lr': lr * 10.0},\n {'params': net.score_fc.parameters(), 'lr': lr * 10.0},\n ], lr=lr, momentum=0.9, weight_decay=0.005)\n for i in xrange(1000):\n im_data = np.random.rand(32, 3, 448, 448).astype(np.float32)\n label = np.floor(np.random.rand(32) * 150).astype(np.int64)\n out = net(Variable(torch.from_numpy(im_data).cuda()))\n loss = F.cross_entropy(out, Variable(torch.from_numpy(label).cuda()), size_average=True)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n print('Iteration %d finished. Loss: %.4f' % (i, loss.data.cpu().numpy()))\n\n\nif __name__ == '__main__':\n train()\n\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Ethan_Zhu" }, { "contents": "If you reproduce your issue to 25 lines to 40 lines of code, it will be very helpful for others to investigate.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Ethan_Zhu" }, { "contents": "How you revised your code?", "isAccepted": false, "likes": null, "poster": "marsggbo" } ]
false
ImageFolder() hangs in wait forever
null
[ { "contents": "I tried to use Amos’ DenseNet and Soumith’s ImageNet examples and replaced CIFAR10 dataset with ImageFolder dataset: Original: Modified: But loading process hangs forever. Keyboard interrupt shows:", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "Some things I would try:", "isAccepted": false, "likes": 1, "poster": "bodokaiser" }, { "contents": "Thanks Kaiser - I am starting debugging now and will look in to all three points you mentioned.", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "Also, are you running your script in Docker?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Looks like models are hardcoded with 224x224 dimensions. I am looking into it now.", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" } ]
false
Segmentation fault when loading weight
null
[ { "contents": "Previously this runs with no problem, actually two training processes are still running (on another two GPUs), however this breaks when I want to start an additional training process.", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "it seems that you are running out of memory on the GPUs", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "No, the target GPU is idle, and there’re still 22GB memory available on this GPU.", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "I’ve tried this: <SCODE> def map_loc(storage, loc):\n if loc.startswith('cuda'):\n return storage.cuda(device)\n else:\n return storage\n print('model weights loading...')\n model.load_state_dict(torch.load(model_file,map_location=map_loc))\n print('model weights loaded')\n<ECODE> And still, exception raised: <SCODE>model weights loading...\nTHCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/csrc/generic/serialization.cpp line=145 error=2 : out of memory\nTraceback (most recent call last):\n File \"PTR_evaluation_pytorch.py\", line 197, in <module>\n model.load_state_dict(torch.load(model_file,map_location=map_loc))\n File \"/home/David/App/anaconda3/lib/python3.5/site-packages/torch/serialization.py\", line 222, in load\n return _load(f, map_location, pickle_module)\n File \"/home/David/App/anaconda3/lib/python3.5/site-packages/torch/serialization.py\", line 377, in _load\n deserialized_objects[key]._set_from_file(f, offset)\nRuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/builder/wheel/pytorch-src/torch/csrc/generic/serialization.cpp:145\n<ECODE> The target device is idle with over 20GB memory free.", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "there was a bug in the serialization where remapping devices still used the device memory. this is fixed in master. i am working on binaries of version 0.1.11 and that will have this fix.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "david-leon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Zhang_Wen" }, { "contents": "Is because the version of pytorch?", "isAccepted": false, "likes": null, "poster": "will_soon" }, { "contents": "Met the same problem today.", "isAccepted": false, "likes": null, "poster": "todpole3" }, { "contents": "I got the same problem, do you figure out how to solve it yet?", "isAccepted": false, "likes": null, "poster": "Jonson" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Jonson" }, { "contents": "<SCODE>if torch.cuda.is_available() and cfg.use_gpu is not None:\n device = torch.device(use_gpu)\nelse:\n device = torch.device(\"cpu\")\ncheckpoint_data = torch.load(checkpoint_path, map_location=device)\nmodel.load_state_dict(checkpoint_data['model'])\noptimizer.load_state_dict(checkpoint_data['optimizer'])\n<ECODE> <SCODE>checkpoint_data = dict(\n optimizer=optimizer.state_dict(),\n model=model.state_dict(),\n )\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Debatrix" } ]
false
Converted Trained PyTorch Model to Caffe
null
[ { "contents": "Is there a way to convert a pretrained pytorch model to use in inference in Caffe? I torch.save() the dictionary of the net. How can I get the trained model loaded into Caffe?", "isAccepted": false, "likes": null, "poster": "sauhaardac" }, { "contents": "at this time, there is no such conversion tool.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wk910930" }, { "contents": "Nope, I actually ended up converting all our inference code to PyTorch and my entire lab followed suit. No regrets, Pytorch is awesome!", "isAccepted": false, "likes": 4, "poster": "sauhaardac" } ]
false
How to get cuda variable gradient?
null
[ { "contents": "why the result between CPU version and GPU version is different? How do we get grad for cuda variable? Thanks. CPU version: <SCODE>import torch\nfrom torch.autograd import Variable\n\nl = torch.nn.Linear(6,1)\ninput = Variable(torch.rand(10,6), requires_grad = True)\nout = l(input)\ntarget = Variable(torch.rand(10,1))\ncrt = torch.nn.L1Loss()\nloss = crt(out, target)\nloss.backward()\nprint input.grad\n<ECODE> Output: GPU version: <SCODE>l = torch.nn.Linear(6,1).cuda()\ninput = Variable(torch.rand(10,6), requires_grad = True).cuda()\nout = l(input)\ntarget = Variable(torch.rand(10,1)).cuda()\ncrt = torch.nn.L1Loss().cuda()\nloss = crt(out, target)\nloss.backward()\nprint input.grad\n<ECODE> Output: None", "isAccepted": false, "likes": 2, "poster": "fxia22" }, { "contents": "The gradient does not work through .cuda(). So instead of use <SCODE>input = Variable(torch.rand(10,6).cuda(), requires_grad = True).\n<ECODE> Best regards Thomas", "isAccepted": false, "likes": 4, "poster": "tom" }, { "contents": "Thank you so much! -Fei", "isAccepted": false, "likes": null, "poster": "fxia22" }, { "contents": "I just stumbled into the same problem. Can somebody explain the logic behind it? Seems very counter-intuitive.", "isAccepted": false, "likes": null, "poster": "fabee" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
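The fix from the answer, written out in full (assumes a CUDA device is available): calling .cuda() on the tensor before wrapping it makes the GPU tensor itself the leaf node, so its .grad is populated.
<SCODE>import torch
from torch.autograd import Variable

l = torch.nn.Linear(6, 1).cuda()
# .cuda() before Variable(): the GPU tensor itself is the leaf node
input = Variable(torch.rand(10, 6).cuda(), requires_grad=True)
out = l(input)
target = Variable(torch.rand(10, 1).cuda())
crt = torch.nn.L1Loss().cuda()
loss = crt(out, target)
loss.backward()
print(input.grad)  # now populated instead of None
<ECODE>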
Can DataParallel work with arbitrary forward() function?
null
[ { "contents": "I’ve been using a network with a forward function as below. As shown, the network is not constructed with nn.sequential(), and the foward() function is composed with some operations like: e8 = torch.cat((e6, c3), 1). I run with netG = torch.nn.DataParallel(netG, device_ids=[0, 1]), to run on multiple GPUs, but error msg shows something like: expected 64 batches, not 32. I understand the dataParallel try to split the full 64 batches among all GPUs. But how can I make a network like below to run on muli-GPUs ? <SCODE>class G_tconv(nn.Module):\ndef __init__(self, nc, ngf):\n super(G_tconv, self).__init__()\n\n self.conv1 = nn.Conv2d(nc, ngf, 4, 2, 1, bias=False)\n self.conv2 = nn.Conv2d(ngf, ngf * 2, 4, 2, 1, bias=False)\n self.batchnorm1 = nn.BatchNorm2d(ngf * 2)\n self.conv3 = nn.Conv2d(ngf * 2, ngf * 4, 4, 2, 1, bias=False)\n self.batchnorm2 = nn.BatchNorm2d(ngf * 4)\n self.conv4 = nn.Conv2d(ngf * 4, ngf * 8, 4, 2, 1, bias=False)\n self.batchnorm3 = nn.BatchNorm2d(ngf * 8)\n self.conv5 = nn.Conv2d(ngf * 8, ngf * 8, 4, 2, 1, bias=False)\n self.batchnorm4 = nn.BatchNorm2d(ngf * 8)\n\n self.convt1 = nn.ConvTranspose2d(1024, 128, 4, 1, 0, bias=False)\n\n self.convt2 = nn.ConvTranspose2d(ngf * 8 + 128, ngf * 8, 4, 2, 1, bias=False)\n self.batchnorm5 = nn.BatchNorm2d(ngf * 8)\n self.convt3 = nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False)\n self.batchnorm6 = nn.BatchNorm2d(ngf * 4)\n self.convt4 = nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False)\n self.batchnorm7 = nn.BatchNorm2d(ngf * 2)\n self.convt5 = nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False)\n self.batchnorm8 = nn.BatchNorm2d(ngf)\n self.convt6 = nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False)\n\n self.conv_e1 = nn.Conv2d(ngf*8, ngf*2, 1, 1, 0, bias=False)\n self.bn_e1 = nn.BatchNorm2d(ngf*2)\n\n self.conv_e2 = nn.Conv2d(ngf*2, ngf*8, 3, 1, 1, bias=False)\n self.bn_e2 = nn.BatchNorm2d(ngf*8)\n\n\n self.conv_e3 = nn.Conv2d(ngf*4, ngf, 1, 1, 0, bias=False)\n self.bn_e3 = nn.BatchNorm2d(ngf)\n\n self.conv_e4 = nn.Conv2d(ngf, ngf*4, 3, 1, 1, bias=False)\n self.bn_e4 = nn.BatchNorm2d(ngf*4)\n\n\n self.linear = nn.Linear(1024, 128)\n\ndef forward(self, batchSize, input1, input2):\n\n e2 = F.relu(self.conv1(input1))\n e3 = F.relu(self.batchnorm1(self.conv2(e2)))\n e4 = F.relu(self.batchnorm2(self.conv3(e3)))\n e5 = F.relu(self.batchnorm3(self.conv4(e4)))\n e6 = F.relu(self.batchnorm4(self.conv5(e5)))\n\n c1 = self.linear(input2.view(batchSize, 1024))\n c2 = c1.view(batchSize, 128, 1, 1)\n c3 = c2.expand(batchSize, 128, 4, 4)\n\n e8 = torch.cat((e6, c3), 1)\n d1_ = F.relu(self.batchnorm5(self.convt2(e8)))\n d1 = F.relu(self.bn_e2(self.conv_e2(F.relu(self.bn_e1(self.conv_e1(d1_))))))\n d2_ = F.relu(self.batchnorm6(self.convt3(d1)))\n d2 = F.relu(self.bn_e4(self.conv_e4(F.relu(self.bn_e3(self.conv_e3(d2_))))))\n d3_ = F.relu(self.batchnorm7(self.convt4(d2)))\n d4_ = F.relu(self.batchnorm8(self.convt5(d3_)))\n d5_ = self.convt6(F.relu(d4_))\n\n o1 = F.tanh(d5_)\n\n return o1<ECODE>", "isAccepted": false, "likes": null, "poster": "wzhang35" }, { "contents": "when you use DataParallel, just double your batch size. 
Wouldn’t taht fix the error you are seeing.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wzhang35" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Hi, <SCODE>class Model():\n def __init__():\n ......\n\n def forward(input_data):\n ......\n\n def sample(input_data):\n ......\n\n<ECODE> <SCODE>model = Model()\nmodel = nn.DataParallel(model, device_ids=opt.gpus, dim=1)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "DoubtWang" }, { "contents": "Same problem. Use forward() can fullfil the need,but I still wonder if you find a way to fix it ?? Thanks", "isAccepted": false, "likes": null, "poster": "JoeHEZHAO" } ]
false
Extract feature maps from intermediate layers without modifying forward()
vision
[ { "contents": "I have this: <SCODE>def get_features_hook(self, input, output):\n print output.data.cpu().numpy().shape\n features = output.data.cpu().numpy()\n\nmodel.features.module[37].register_forward_hook(get_features_hook)\nmodel.forward(im_tensor)\n<ECODE> Is there any way to extract that features value? Thanks", "isAccepted": false, "likes": 3, "poster": "jsalbert" }, { "contents": "There are many examples in this thread: Also a search will give you a few more examples: https://discuss.pytorch.org/search?q=register_hook", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Hi, I am trying to extract feature outputs of the intermediate layers of pre-trained VGG 16 architecture and concatenate them. The in-built models in pytorch doesn’t have names for all its layers for VGG architecture. Therefore, I am unable to use register_forward_hook. Is there any other alternate way? I am trying to use something like below but I am not sure if gradients will be accumulated at the intermediate layers when I do back propagation as it is building up two computation graphs during instantiation of my model. I have something like this <SCODE>def __init__(self):\n super(ModifiedVGG, self).__init__()\n k = [29, 22]\n model = models.vgg16(pretrained=True)\n self.layer1 = nn.Sequential(*list(model.features.children())[:k[0]])\n self.layer2 = nn.Sequential(*list(model.features.children())[:k[1]])\n self.conv1 = nn.Conv2d(512, 128, 3, padding=1)\n self.upsample1 = nn.UpsamplingBilinear2d(scale_factor=2)\ndef forward(self, x):\n conv_4_3 = self.layer2(x)\n conv_4_3 = F.relu(self.conv1(conv_4_3))\n conv_5_3 = self.layer1(x)\n conv_5_3 = F.relu(self.conv1(conv_5_3))\n conv_5_3 = self.upsample1(conv_5_3)\n concat_features = torch.cat([conv_4_3, conv_5_3], 1)\n return concat_features\n<ECODE> Please let me know if this would work or not. Thank you!", "isAccepted": false, "likes": 2, "poster": "bhattad2" }, { "contents": "the simplest solution is to: copy over the vgg file change the name of the class modify the class (for example remove or replace layers as you see fit) use the function load_state_dict 196 to load the original VGG weights dict into this modified class.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thank you. This would be the least error prone of all. Also, when I use register_forward_hook, do I need to worry about backward_hook or it will be taken care off automatically? Some more examples on register_hook would be appreciated. Thank you.", "isAccepted": false, "likes": null, "poster": "bhattad2" }, { "contents": "Thanks a lot this code works for me!", "isAccepted": false, "likes": null, "poster": "ne-bo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ssh983" } ]
false
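A common pattern for the question above: keep a handle on the hook and store its output in an enclosing dict rather than a bare local variable. The model choice and layer index here are assumptions.
<SCODE>import torch
import torchvision.models as models

model = models.vgg16(pretrained=False).eval()
features = {}

def get_features_hook(module, input, output):
    features["conv"] = output.detach()  # keep the activation around

handle = model.features[28].register_forward_hook(get_features_hook)
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))
handle.remove()
print(features["conv"].shape)  # torch.Size([1, 512, 14, 14])
<ECODE>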
Indexing Multi-dimensional Tensors based on 1D tensor of indices
null
[ { "contents": "Hi, I have currently tried the following methods: Using gather:\nseq_len = seq_len.view(-1, 1, 1).expand(output.size(0), 1, output.size(2))\noutput_extracted = torch.gather(output, 1, seq_len)\n Using index_select:\noutput_extracted = torch.cat([torch.index_select(output[i], 0, seq_len[i]) for i in xrange(output.size(0)])\n", "isAccepted": false, "likes": 8, "poster": "dasguptar" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenyangh" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "dasguptar" } ]
false
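The gather approach from the question, as a concrete sketch extracting one timestep per sequence (shapes are assumptions):
<SCODE>import torch

output = torch.randn(4, 7, 16)        # (batch, seq_len, hidden)
seq_len = torch.tensor([6, 2, 4, 0])  # index to extract per sequence

idx = seq_len.view(-1, 1, 1).expand(output.size(0), 1, output.size(2))
extracted = torch.gather(output, 1, idx).squeeze(1)  # (batch, hidden)
<ECODE>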
How to calculate the determinant of a variable?
null
[ { "contents": "", "isAccepted": false, "likes": 2, "poster": "maplewizard" }, { "contents": "<SCODE>import torch\nimport numpy as np\nfrom torch.autograd import Variable\n\na = Variable(torch.randn(5,5))\nnp.linalg.det(a.data.numpy())\n<ECODE>", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "In fact, I want to get the gradient of the det w.r.t. each elements in the matrix.", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Veril" }, { "contents": "It’s sad that PyTorch does not have a determinant function. This basically makes it impossible (ok, very hard) to implement Gaussian mixture density networks with a full covariance matrix.", "isAccepted": false, "likes": null, "poster": "dpernes" }, { "contents": "the happy news is that between the last post and yours, we added documentations to make the Cholesky functions easier to find, so <SCODE>torch.potrf(a).diag().prod()\n<ECODE> gives you the determinant. (The functions are exactly the same as in 0.1.12, too, but be sure to use the master docs.) Best regards Thomas", "isAccepted": false, "likes": 7, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Yandong_Li" }, { "contents": "i can not use this function with new version of pytorch", "isAccepted": false, "likes": 1, "poster": "Yandong_Li" }, { "contents": "Either use the above on Tensors or Cholesky.apply with the linked austograd Function. Best regards Thomas", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "sure i can use this code torch.potrf(a).diag().prod() when a is a tensor but i need to do the operation to with autograd when i call backward() function. Would you please help me solve this problems", "isAccepted": false, "likes": 1, "poster": "Yandong_Li" }, { "contents": "Hi, Best regards Thomas", "isAccepted": false, "likes": 2, "poster": "tom" }, { "contents": "Thanks for your help and i found that i got the square of the determinant with the code.", "isAccepted": false, "likes": 1, "poster": "Yandong_Li" }, { "contents": "If anyone is looking at this thread: Note that potrf has gained differentiability in master/0.3. Best regards Thomas", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fehiepsi" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fehiepsi" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "CNguyen" } ]
false
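A sketch of the Cholesky route from this thread, on a matrix that is symmetric positive-definite by construction. Since det(A) = det(L)^2, the product of the factor's diagonal must be squared; torch.cholesky and torch.det are newer spellings than the potrf used above.
<SCODE>import torch

m = torch.randn(5, 5)
a = m @ m.t() + 5 * torch.eye(5)  # symmetric positive-definite by construction

L = torch.cholesky(a)             # lower-triangular factor, a = L @ L.t()
det = L.diag().prod() ** 2        # det(a) = det(L)^2

print(det.item(), torch.det(a).item())  # agree up to float error
<ECODE>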
Weird error before scripts end Exception TypeError: “‘NoneType’ object is not callable” by CuDNNHandle
null
[ { "contents": "Hi, I got this exception everytime my scripts contains torch model is finished training. Exception TypeError: \"'NoneType' object is not callable\" in <bound method CuDNNHandle.__del__ of <torch.backends.cudnn.CuDNNHandle instance at 0x7f32b48d0fc8>> ignored What is the cause ? Thanks", "isAccepted": false, "likes": 1, "poster": "norm_inf" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "windweller" }, { "contents": "I have the same problem. do you have solved it ?", "isAccepted": false, "likes": null, "poster": "11145" } ]
false
What is a tape-based autograd system?
null
[ { "contents": "Hi, Thank you a lot.", "isAccepted": false, "likes": 7, "poster": "hminle" }, { "contents": "something that uses reverse-mode automatic differentiation:", "isAccepted": false, "likes": 6, "poster": "smth" } ]
false
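A tiny illustration of the idea: operations are recorded on the "tape" during the forward pass, then replayed in reverse by backward(). The tensor API here is newer than this thread.
<SCODE>import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x * x).sum()  # forward pass: each op is recorded on the tape (the graph)
y.backward()       # reverse pass: the tape is replayed backwards
print(x.grad)      # tensor([4., 6.]) == dy/dx = 2x
<ECODE>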
Tunable Efficient Unitary Neural Networks (EUNN)
null
[ { "contents": "Hi, I wonder if there’s been a PyTorch implementation of, It’s something that definitely seems to be a solid piece of work !", "isAccepted": false, "likes": 2, "poster": "AjayTalati" }, { "contents": "There isn’t a PyTorch implementation of this publicly available as far as I know.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks! It’s in Tensorflow by one of the authors, Li Jing, Compared to LSTM, at least this is mathematically interpretable ! Multi-layer bi-directional LSTM works great, but you can’t do any theory on it?", "isAccepted": false, "likes": 1, "poster": "AjayTalati" }, { "contents": "Late to the party, but I will leave this here for anyone who bumps into this conversation.", "isAccepted": false, "likes": 2, "poster": "flaport" } ]
false
New modules in PyTorch
null
[ { "contents": "Since the area of Deep Learning is nascent, hundreds of brilliant research papers are being published every week. This requires that a Deep Learning framework should have a policy towards adding modules for the same.", "isAccepted": false, "likes": null, "poster": "Zafarullah_Mahmood" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "smth" } ]
false
[resolved] Unable to execute “python main.py”
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Soniya" }, { "contents": "Did you replace project_path by the actual project path ?", "isAccepted": false, "likes": null, "poster": "kmichaelkills" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Soniya" } ]
false
How to calculate the gradient of images?
vision
[ { "contents": "Thanks.", "isAccepted": false, "likes": 3, "poster": "Michael" }, { "contents": "I guess you could represent gradient by a convolution with sobel filters. Maybe implemented with Convolution 2d filter with require_grad=false (where you set the weights to sobel filters).", "isAccepted": false, "likes": 1, "poster": "kmichaelkills" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yanp" }, { "contents": "Here is the code. It works perfectly. G_x=conv1(Variable(x)).data.view(1,256,512)", "isAccepted": false, "likes": 6, "poster": "Blade6570" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "DhruvJawalkar" }, { "contents": "I have one of the simplest differentiable solutions. The idea comes from the implementation of tensorflow. The basic principle is: dx = I(x+1) - I(x), dy = I(y+1) - I(y), The code is as follows: <SCODE>def gradient_loss(gen_frames, gt_frames, alpha=1):\n\n def gradient(x):\n # idea from tf.image.image_gradients(image)\n # https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/python/ops/image_ops_impl.py#L3441-L3512\n # x: (b,c,h,w), float32 or float64\n # dx, dy: (b,c,h,w)\n\n h_x = x.size()[-2]\n w_x = x.size()[-1]\n # gradient step=1\n left = x\n right = F.pad(x, [0, 1, 0, 0])[:, :, :, 1:]\n top = x\n bottom = F.pad(x, [0, 0, 0, 1])[:, :, 1:, :]\n\n # dx, dy = torch.abs(right - left), torch.abs(bottom - top)\n dx, dy = right - left, bottom - top \n # dx will always have zeros in the last column, right-left\n # dy will always have zeros in the last row, bottom-top\n dx[:, :, :, -1] = 0\n dy[:, :, -1, :] = 0\n\n return dx, dy\n\n # gradient\n gen_dx, gen_dy = gradient(gen_frames)\n gt_dx, gt_dy = gradient(gt_frames)\n #\n grad_diff_x = torch.abs(gt_dx - gen_dx)\n grad_diff_y = torch.abs(gt_dy - gen_dy)\n\n # condense into one tensor and avg\n return torch.mean(grad_diff_x ** alpha + grad_diff_y ** alpha)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "NjuHaoZhang" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "edgarriba" }, { "contents": "You defined h_x and w_x, however you do not use these in the defined function. May I ask what the purpose of h_x and w_x are?", "isAccepted": false, "likes": null, "poster": "bumpybanana" } ]
false
Force a Tensor to live on a CPU
null
[ { "contents": "Hi, [1] <SCODE>def force_cpu_(tensor):\n tensor.cuda = types.MethodType(lambda self, *args, **kwargs: self, \n tensor)\n return tensor\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "elanmart" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" } ]
false
Set requires_grad to False but still slowing the speed
null
[ { "contents": "<SCODE>class LCNPModel(nn.Module):\n\"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\ndef __init__(self, inputs):\n super(LCNPModel, self).__init__()\n\n ....\n self.encoder_nt = nn.Embedding(self.nnt, self.dnt)\n self.word2vec_plus = nn.Embedding(self.nt, self.dt)\n self.word2vec = nn.Embedding(self.nt, self.dt) \n\n self.LSTM = nn.LSTM(self.dt, self.dhid, self.nlayers, batch_first=True, bias=True)\n # the initial states for h0 and c0 of LSTM\n self.h0 = (Variable(torch.zeros(self.nlayers, self.bsz, self.dhid)),\n Variable(torch.zeros(self.nlayers, self.bsz, self.dhid)))\n\n .....\n\n self.init_weights(initrange)\n\n self.l2 = itertools.ifilter(lambda p: p.requires_grad == True, self.parameters())\n\ndef init_weights(self, initrange=1.0):\n self.word2vec_plus.weight.data.fill_(0)\n self.word2vec.weight.data = self.term_emb\n self.encoder_nt.weight.data = self.nonterm_emb \n\n self.word2vec.weight.requires_grad = False\n self.encoder_nt.weight.requires_grad = False \n....\n<ECODE> Hi, above is part of my code. In my code, I have a word2vec and word2vec_plus embeddings. So I would like an embedding which is initialized as word2vec pretrained vectors, but keep training it further. But when I use the optimizer, I would like to take the l2 norm of this embedding as the distance between the current embedding with the original word2vec embedding, which makes sense since I don’t want the new trained embedding to be too far away from the pretrained one. My problem is, when I set the word2vec.weight.requires_grad to False, and optimize parameters that require gradient, everything is fine but the training time is too slow after the first round. However, if I comment out everything with word2vec but only use word2vec_plus, everything is very fast. Since you can think of I am using word2vec just as a normal constant in my code, it is not supposed to slow down the model training. So my question is, if there is any way to speed up this process or is there anything that I am doing wrong? Thanks a lot!", "isAccepted": false, "likes": 1, "poster": "bilibili" }, { "contents": "It’s probably the L2 norm distance between the two embedding matrices that’s taking forever to calculate, and there isn’t really a way around that for now (you may update only parts of the word2vec_plus embedding matrix at each iteration but you have to recompute the L2 norm over the whole matrix).", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Hi, The problem is that even when I comment out the word2vec embedding, which is a constant since I don’t require gradient for it, and optimize over everything else, I expect this is the same as not commenting out it (because the optimizer has nothing to do with it), it still gave me speed up. So I think even when requires_grad is false, the optimizer, for some reason, still looks at it?", "isAccepted": false, "likes": null, "poster": "bilibili" } ]
false
How to Classify Single Image using Loaded Net
null
[ { "contents": "Hi, I trained, saved, and can load a resnet50 net, but am not sure of how to feed in a single image for classification using the loaded net. I’ve tried using the dataloader function, but I think that is more appropriate for testing a group of images that already have labels, whereas a single image that you are trying to classify obviously would not have a label. If you could point me to the right function to use or to a general methodology to follow, I would greatly appreciate it. Sorry if this is a bit basic of a question, but for some reason I could not find much online to guide me on this. Thanks", "isAccepted": false, "likes": 6, "poster": "nikmentenson" }, { "contents": "I use something like this. I think it’s a mashup of code from various tutorials/examples so my apologies if I should be crediting someone… uses PIL and torchvision.transforms <SCODE>imsize = 256\nloader = transforms.Compose([transforms.Scale(imsize), transforms.ToTensor()])\n\ndef image_loader(image_name):\n \"\"\"load image, returns cuda tensor\"\"\"\n image = Image.open(image_name)\n image = loader(image).float()\n image = Variable(image, requires_grad=True)\n image = image.unsqueeze(0) #this is for VGG, may not be needed for ResNet\n return image.cuda() #assumes that you're using GPU\n\nimage = image_loader(PATH TO IMAGE)\n\nyour_trained_net(image)\n<ECODE> hope that helps", "isAccepted": false, "likes": 13, "poster": "tymokvo" }, { "contents": "Awesome, thank you so much. That worked.", "isAccepted": false, "likes": 1, "poster": "nikmentenson" }, { "contents": "Hi again, <SCODE>Variable containing:\n0\n[torch.LongTensor of size 1]\n<ECODE> Many thanks", "isAccepted": false, "likes": null, "poster": "nikmentenson" }, { "contents": "<SCODE>variable = Variable(torch.rand(1))\nnum = variable.data[0]\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Thanks", "isAccepted": false, "likes": null, "poster": "nikmentenson" }, { "contents": "standard casting <SCODE>int(variable.data[0])<ECODE>", "isAccepted": false, "likes": null, "poster": "tymokvo" }, { "contents": "ERROR PyTorch 0.1.11", "isAccepted": false, "likes": null, "poster": "nikmentenson" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "xwgeng" }, { "contents": "One last thing-wanted to confirm this is in fact correct:", "isAccepted": false, "likes": null, "poster": "nikmentenson" }, { "contents": "yes, provided the second argument to torch.max is looking at the correct dimension of the output Variable. Which yours should be, if you’re using a pretrained resnet. 
int(predicted.data[0])", "isAccepted": false, "likes": 1, "poster": "tymokvo" }, { "contents": "<SCODE># Bunch of imports go here\n\n# Convert image to Variable\ndef Torchify( aImage ):\n ptLoader = transforms.Compose([transforms.ToTensor()])\n aImage = ptLoader( aImage ).float()\n aImage = Variable( aImage, volatile=True )\n return aImage.cuda()\n\n# Load model from Checkpoint\nprint(\"=> Loading Network\")\nptModelAxial = densenet.__dict__['densenet161'](pretrained=False, num_classes=5)\nptModelAxial.classifier = nn.Linear(8832, 5)\nptModelAxial = torch.nn.DataParallel(ptModelAxial).cuda()\ndTemp = torch.load(\"best.pth.tar\")\nptModelAxial.load_state_dict(dTemp['state_dict'])\nfor p in ptModelAxial.parameters():\n p.requires_grad = False\nptModelAxial.eval()\n\nInputImg = skimage.img_as_float(skimage.io.imread(sFileName))\nptModelPreds = ptModelAxial( Torchify(InputImg) )\nprint( ptModelPreds )\n<ECODE> Error: <SCODE>Traceback (most recent call last):\n File \"extract.py\", line 298\n ptModelPreds = ptModelAxial( Torchify(InputImg) )\nFile \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 206, in __call__\n result = self.forward(*input, **kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py\", line 61, in forward\n outputs = self.parallel_apply(replicas, inputs, kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py\", line 71, in parallel_apply\n return parallel_apply(replicas, inputs, kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py\", line 45, in parallel_apply\n raise output\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py\", line 25, in _worker\n output = module(*input, **kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 206, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/keyur/kaggle/densenet.py\", line 153, in forward\n features = self.features(x)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 206, in __call__\n result = self.forward(*input, **kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/container.py\", line 64, in forward\n input = module(input)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 206, in __call__\n result = self.forward(*input, **kwargs)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/conv.py\", line 237, in forward\n self.padding, self.dilation, self.groups)\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/functional.py\", line 39, in conv2d\n return f(input, weight, bias)\nRuntimeError: expected 3D tensor<ECODE>", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vipin14119" }, { "contents": "So I have a couple of questions here:", "isAccepted": false, "likes": null, "poster": "isalirezag" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "InnovArul" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "micklexqg" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "sorry for so late reply, thanks!", "isAccepted": false, "likes": null, "poster": "micklexqg" }, { "contents": "Found a weird behavior here (bug), any advice? 
Common code: <SCODE>normalize = transforms.Normalize(\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]\n)\npreprocess = transforms.Compose([\n transforms.Resize(256),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n normalize\n # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\n])\n\nresnet_model = torchvision.models.resnet50(pretrained=True, num_classes=1000)\nresnet_model.eval()\n\nfor i in range(4):\n # An instance of your model.\n img_pil = Image.open(\"/home/alejandro/workspace/uav_detection/images/\" + str(i + 1) + \".jpg\")\n # img_pil.show()\n img_tensor = preprocess(img_pil).float()\n img_tensor = img_tensor.unsqueeze_(0)\n\n fc_out = resnet_model(Variable(img_tensor))\n\n output = fc_out.detach().numpy()\n print(output.argmax())\n<ECODE> <SCODE>918\n918\n918\n918\n<ECODE> Python 2.7 Anaconda and pytorch 0.4.1.post2 (downloaded with pip): <SCODE>834\n208\n285\n478\n<ECODE>", "isAccepted": false, "likes": null, "poster": "alejodosr" } ]
false
Transfer Learning Only Learns Last Class
vision
[ { "contents": "Hi, I’m trying to train a new classifier on top of the pretrained VGG16 feature module with requires_grad=False for all feature layers. Even though it converges for each of my 5 classes during training, my verification images are always scored highest for the last class that it is trained on. I am wondering if this is because of the way that my training is set up: I have 5 folders, one for each class. The training loop opens each folder, then feeds each of the images in the folder into a Variable for the forward pass, computes loss, does backprop, etc. for 2 epochs. It then moves on to the folder of the next class and does this all again. So something like: root_directory -> [class1, class2, class3, class4, class5] -> class1[image_0, image_1,…image_n] (iterate over all of these twice) -> class2… This is the classifier that I’m using: <SCODE>self.classifier = nn.Sequential(\n nn.Dropout(inplace=False),\n nn.Linear(32768, 500), # for 256x256 images\n nn.ReLU(inplace=False),\n nn.Dropout(inplace=False),\n nn.Linear(500, 100),\n nn.ReLU(inplace=False),\n nn.Linear(100, 5)\n)\n<ECODE> Attached image is the graph of loss for the whole training cycle. Thanks for any help! Edit: So I moved on to trying to use Resnet18 as the basis for this same transfer learning problem. I was thinking that my classifier might have been the problem, so I tried to follow the transfer learning tutorial as closely as possible for my own dataset. This is the training function that I am using: <SCODE>def train_network(net, img_path_list, target_dict, criterion, optimizer, loss_graph, test_dirs=None, epochs=2, print_frequency=20, test_print=False):\n \"\"\"train a network on a directory of images\"\"\"\n\n trained_net = net\n\n s = 0 # generic counter for doing stuff\n for path in img_path_list:\n\n target = target_gen(target_dict, path)\n goal = Variable(target).cuda()\n print(path)\n min_loss = 1000 # absurdly large loss\n\n for i in range(epochs):\n for filename in os.listdir(path):\n\n input = image_loader(path + '/' + filename)\n\n optimizer.zero_grad()\n\n output = net(input)\n\n if s % print_frequency == 0 and test_print is True:\n print('\\n')\n print('output: ', output.data)\n print('goal: ', goal.data)\n print('\\n')\n\n loss = criterion(output, goal)\n loss.backward()\n optimizer.step()\n\n loss_graph += [loss.data[0]]\n if s % print_frequency == 0:\n print('epoch: {} '.format(i + 1), 'loss: {}'.format(loss.data[0]))\n\n if loss.data[0] < min_loss:\n min_loss = loss.data[0]\n print('new min loss: {}'.format(min_loss))\n trained_net = copy.deepcopy(net)\n\n s += 1\n\n\n print('done!')\n return trained_net\n<ECODE> But it still has the same problem, no matter at what point I test the network on my validation set, it only ever responds with the highest probability for the last class it was trained on. This is in spite of it being able to make strong correlations during training, i.e. <SCODE>output: \n-0.0148 0.8499 0.0970 -0.0274 0.0879\n[torch.cuda.FloatTensor of size 1x5 (GPU 0)]\ngoal: \n 0\n 1\n 0\n 0\n 0\n[torch.cuda.FloatTensor of size 5 (GPU 0)]\n<ECODE> Do the input and target variables need to be in a minibatch format that contains examples of multiple classes so that all correlations are affected at every forward pass? I cannot figure out what is going on…", "isAccepted": false, "likes": null, "poster": "tymokvo" }, { "contents": "Hello, One epoch commonly is considered be a loop over the entire dataset (all classes) so that loop should be the outermost. 
Best regards Thomas", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "Thanks, Tom! This seems to be helping. I had already started making a batching script, but the data_utils help a lot.", "isAccepted": false, "likes": null, "poster": "tymokvo" } ]
false
Keyword argument error in optim.RMSprop()
null
[ { "contents": "I am wondering is there any change made to the RMSprop so that it no longer support centered version ?", "isAccepted": false, "likes": null, "poster": "Shihan_su" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Discussion: How to use a mediocre quality image dataset?
vision
[ { "contents": "I am working with 2048x2048 px medical X-ray images, and I have two datasets. One dataset is a _ dream come true <3 _ : nice, clean, and pixel-wise segmented. This really helps in training a segmentation network. I also happen to have a second dataset, which is clean and nice but does not have pixel-wise segmentation. Instead this dataset has a “Quadrant” feature which points to where the issue lies. What would you do with the second dataset? Will love to hear some thoughts!", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" } ]
false
Module within Module Backprop (NTM Implementation)
null
[ { "contents": "<SCODE>import torch\nimport torch.nn as nn\nimport torch.nn.functional as Funct\nfrom torch.autograd import Variable\nimport torch.optim as optim\n\nclass EMM_NTM(nn.Module):\n def __init__(self, *params):\n # init hidden state, etc. Make sure all Variables have requires_grad=True\n\n def forward(self, h):\n # pass forward through the memory module (this one is really long)\n\nclass FeedForwardController(nn.Module):\n def __init__(self,\n num_inputs,\n num_hidden,\n batch_size,\n num_reads=1,\n memory_dims=(128, 20)):\n\n super(FeedForwardController, self).__init__()\n\n self.num_inputs = num_inputs\n self.num_hidden = num_hidden\n self.batch_size = batch_size\n self.memory_dims = memory_dims\n\n self.in_to_hid = nn.Linear(self.num_inputs, self.num_hidden)\n self.read_to_hid = nn.Linear(self.memory_dims[1]*num_reads, self.num_hidden)\n\n def forward(self, x, read):\n\n x = x.contiguous()\n x = x.view(-1, num_flat_features(x))\n read = read.contiguous()\n read = read.view(-1, num_flat_features(read))\n\n x = Funct.relu(self.in_to_hid(x) + self.read_to_hid(read))\n\n return x\n\nclass NTM(nn.Module):\n def __init__(self,\n num_inputs,\n num_hidden,\n num_outputs,\n batch_size,\n num_reads,\n memory_dims=(128, 20)):\n super(NTM, self).__init__()\n\n self.num_inputs = num_inputs\n self.num_hidden = num_hidden\n self.num_outputs = num_outputs\n self.batch_size = batch_size\n self.num_reads = num_reads\n self.memory_dims = memory_dims\n\n self.hidden = Variable(torch.rand(batch_size, self.num_hidden), requires_grad=True)\n\n self.EMM = EMM_NTM(self.num_hidden, self.batch_size, num_reads=self.num_reads,\n num_shifts=3, memory_dims=self.memory_dims)\n # self.EMM.register_backward_hook(print) # <- an attempt to see what's happening, this doesn't print\n\n self.controller = FeedForwardController(self.num_inputs, self.num_hidden, self.batch_size,\n num_reads=self.num_reads, memory_dims=self.memory_dims)\n # self.controller.register_backward_hook(print) # <- this doesn't print either\n\n self.hid_to_out = nn.Linear(self.num_hidden, self.num_outputs)\n\n def forward(self, x):\n\n x = x.permute(1, 0, 2, 3)\n\n def step(x_t):\n r_t = self.EMM(self.hidden)\n\n # r_t.register_hook(print) # <- this one doesn't print\n\n h_t = self.controller(x_t, r_t)\n h_t = h_t.view(-1, num_flat_features(h_t))\n\n # self.hidden.register_hook(print) # <- this one prints\n\n self.hidden = Variable(h_t.data, requires_grad=True)\n out = Funct.sigmoid(self.hid_to_out(self.hidden))\n return out\n\n outs = torch.stack(\n [\n step(x_t) for x_t in torch.unbind(x, 0)\n ], 0)\n\n outs = outs.permute(1, 0, 2)\n\n return outs\n\n<ECODE> For some reason when I call backwards it doesn’t look like the gradients are getting updated. I tried adding a bunch of backward hooks to see when it stops printing, and it looks like the backward calls just aren’t happening in the child modules. 
Any idea how to fix this?", "isAccepted": false, "likes": 2, "poster": "bzcheeseman" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The core of my training loop looks like this: <SCODE>ntm = NTM(num_inputs, num_hidden, num_inputs, batch, num_reads=1)\n\n try:\n ntm.load_state_dict(torch.load(\"models/copy_seqlen_{}.dat\".format(seq_len)))\n except FileNotFoundError or AttributeError:\n pass\n\n ntm.train()\n\n state = ntm.state_dict()\n\n criterion = nn.MSELoss()\n optimizer = optim.RMSprop(ntm.parameters(), lr=5e-3, weight_decay=0.0005)\n\n max_seq_len = 20\n for length in range(10, max_seq_len):\n\n test = CopyTask(length, [num_inputs, 1], num_samples=2e4)\n\n data_loader = DataLoader(test, batch_size=batch, shuffle=True, num_workers=4)\n\n for epoch in range(5):\n\n for i, data in enumerate(data_loader, 0):\n inputs, labels = data\n inputs = Variable(inputs, requires_grad=True)\n labels = Variable(labels)\n\n optimizer.zero_grad()\n ntm.zero_grad()\n outputs = ntm(inputs)\n\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n\n assert not (ntm.state_dict()['hid_to_out.bias'] == state['hid_to_out.bias'])[0] # this just breaks it on the first loop\n\n# do stuff with the outputs, plot, running loss, etc.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "bzcheeseman" } ]
false
Newbie question: PyTorch not for production?
null
[ { "contents": "Really basic question here. I’ve read in a few places that PyTorch seems to be geared towards “development” and not production. Is there some reason why it is not suitable for production? The claims on the site, “The memory usage in PyTorch is extremely efficient”, “PyTorch is quite fast – whether you run small or large neural networks.” etc. seem to be all great benefits for running in production (and I suppose in development too)", "isAccepted": false, "likes": 1, "poster": "abagshaw" }, { "contents": "if you dont mind running python in production, then pytorch is ready for production. We also tend to always prioritize research flexibility over freezing specs / a more structured static model approach which is also suited for production.", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "Is PyTorch claiming performance improvements over libraries like Tensorflow even without C or cython?", "isAccepted": false, "likes": null, "poster": "abagshaw" }, { "contents": "we dont claim performance improvements over any other library, we leave that exercise to the community.", "isAccepted": false, "likes": 10, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Deeply" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ptrblck" } ]
false
[solved] Why we need to detach Variable which contains hidden representation?
null
[ { "contents": "<SCODE># Starting each batch, we detach the hidden state from how it was previously produced.\n# If we didn't, the model would try backpropagating all the way to start of the dataset.\nhidden = repackage_hidden(hidden)\n<ECODE> I am not understanding why we need to detach hidden variable from the hidden variable associated with the previous batch of input? When gradients are computed and loss is backpropagated, weight matrices are usually affected by the chain of computation. Why the hidden variables which represents hidden states of a recurrent neural network is at a risk to get affected during backpropagation and should be detached from previous value? When the hidden variable should be detached from previous value? At the beginning of each batch? or beginning of each training epoch?", "isAccepted": false, "likes": 13, "poster": "wasiahmad" }, { "contents": "This is for doing truncated BPTT. The dependency graph of RNN can be simply viewed as this. <SCODE>c1, h1 -> c2, h2 -> c3, h3 -> ... -> cn, hn\n | | | |\n loss1 loss2 loss3 lossn\n<ECODE> If we did not truncate the history of hidden states (c, h), the back-propagated gradients would flow from the loss towards the beginning, which may result in gradient vanishing or exploding. Detailed explanation can be found here.", "isAccepted": false, "likes": 5, "poster": "chenzhekl" }, { "contents": "Yes.", "isAccepted": false, "likes": 18, "poster": "elanmart" }, { "contents": "Hello, thanks for the nice answer, it explains a lot! But I still have a question: , where input is tensor of sequences (or a packed sequence), and the initial hidden states h_0, c_0 are initialized automatically. In the example above, the hidden states are not detached manually. I am wondering if the attachment is done automatically here, or BPTT is computed based on all previous hidden states without doing any detachments? Thanks.", "isAccepted": false, "likes": 1, "poster": "RangoHU" }, { "contents": "In your examples new zero-initialized hidden states are initialized at every call to the lstm, and therefore are not connected to the previous sequence in any way. The language modeling example needs to detach them, beacuse it retains the values of hidden states between training states (but doesn’t want to backprop very far back).", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Understood. Many thanks.", "isAccepted": false, "likes": null, "poster": "RangoHU" }, { "contents": "Hello, I come up with a specific question about the detachment op we discuss here. For example, we have a seq2seq model with an attention layer between the encoder and the decoder. According to the common implementations of attention models, the last hidden state of the encoder (say, ‘hn’) is used as the first hidden state of the decoder. My question is: is it necessary to detach hn from the encoder network?", "isAccepted": false, "likes": null, "poster": "RangoHU" }, { "contents": "It depends what you want to do. If you detach it, the encoder won’t get any gradients from that backward.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Then I suppose detaching the hidden variable from the graph during the evaluation stage is not necessary, or is it? 
<SCODE>def evaluate(data_source):\n # Turn on evaluation mode which disables dropout.\n model.eval()\n total_loss = 0\n ntokens = len(corpus.dictionary)\n hidden = model.init_hidden(eval_batch_size)\n for i in range(0, data_source.size(0) - 1, args.bptt):\n data, targets = get_batch(data_source, i, evaluation=True)\n output, hidden = model(data, hidden)\n output_flat = output.view(-1, ntokens)\n total_loss += len(data) * criterion(output_flat, targets).data\n hidden = repackage_hidden(hidden)\n return total_loss[0] / len(data_source)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ctyuang" }, { "contents": "During evaluation detaching is not necessary. When you evaluate there is no need to compute the gradients nor backpropagate anything. So, afaik just put your input variable as volatile and Pytorch won’t hesitate to create the backpropagation graph, it will just do a forward pass.", "isAccepted": false, "likes": null, "poster": "adrianjav" }, { "contents": "hi Adam, how do we detach the states from the output of GRU layer: <SCODE>def detach_states(self, states):\n\tif states is None:\n\t\treturn states\n\treturn [state.detach() for state in states]\n<ECODE> are output from the GRU layer. When I pass them to It gives me following error: … What I am doing wrong. I can put the entire code if this is not clear.", "isAccepted": false, "likes": null, "poster": "pp18" }, { "contents": "In fact, if you call at the beginning of each batch during training, then there will be no need to call .", "isAccepted": false, "likes": null, "poster": "entslscheia" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tastyminerals" }, { "contents": "backpropagating through all seen data (i.e. in PyTorch just don’t detach the hidden state) use only the last input batch?", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "andrei-rusu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Messiah" } ]
false
Error about hooks in implementing customized modules
null
[ { "contents": "Hi, I just implemented an customized modules as follows: <SCODE>def forward(self, x, para):\n\t''' \n\tImplement eq 26 of ref paper for each batch.\n\tInput:\n\t\tpara: dim(seq_len, batch, 121)\n\t\tx: dim(seq_len, batch, 3)\n\t'''\n\tif x.size()[0] == para.size()[0]:\n\t\tself.seq_length = x.size()[0] - 1\n\t\ttotal_loss = 0\n\t\tfor i in range(self.seq_length):\n\t\t\t# prepare parameters\n\t\t\tself.__get_para(i, x, para)\n\t\t\tnormalpdf = self.__para2normal(self.x1, self.x2, self.m1, self.m2, self.s1, self.s2, self.rho) #dim (n_batch, 20)\n\t\t\tsingle_loss = self.__singleLoss(normalpdf)\n\t\t\ttotal_loss += single_loss\n\t\treturn total_loss\n\telse:\n\t\traise Exception(\"x and para don't match\")\n\n\ndef __get_para(self, i, x, para):\n\t'''\n\tSlice and process parameters to the right form.\n\tImplementing eq 18-23 of ref paper.\n\t'''\n\tself.batch = x.size()[1]\n\tself.e = torch.sigmoid(-para[i,:,0]) # eq 18\n\tself.parameters = para\n\n\t# slice remaining parameters and training inputs\n\tself.pi, self.m1, self.m2, self.s1, self.s2, self.rho = torch.split(self.parameters[i,:,1:], 20, dim = 1) # dim(batch, 20)\n\tself.x1 = x[i+1,:,0].resize(self.batch, 1) # dim(batch, 1)\n\tself.x2 = x[i+1,:,1].resize(self.batch, 1)\n\tself.et = x[i+1,:,2].resize(self.batch, 1)\n\t\n\t## process parameters\n\t# pi\n\tmax_pi = torch.max(self.pi, dim = 1)[0]\n\tmax_pi = max_pi.expand_as(self.pi)\n\tdiff = self.pi - max_pi\n\tred_sum = torch.sum(diff, dim = 1).expand_as(self.pi)\n\tself.pi = diff.div(red_sum)\n\n\t# sd\n\tself.s1 = self.s1.exp()\n\tself.s2 = self.s2.exp()\n\n\t# rho\n\tself.rho = self.rho.tanh()\n\n\n\t# reshape ground truth x1, x2 to match m1, m2 because broadcasting is currently not supported by pytorch\n\tself.x1 = self.x1.expand_as(self.m1)\n\tself.x2 = self.x2.expand_as(self.m2)\n\n\ndef __para2normal(self, x1, x2, m1, m2, s1, s2, rho):\n\t'''\n\tImplement eq 24, 25 of ref paper.\n\tAll input with dim(1, batch, 20)\n\t'''\n\tnorm1 = x1.sub(m1)\n\tnorm2 = x2.sub(m2)\n\ts1s2 = torch.mul(s1, s2)\n\tz = torch.pow(torch.div(norm1, s1), 2) + torch.pow(torch.div(norm2, s2), 2) - \\\n\t\t2*torch.div(torch.mul(rho, torch.mul(norm1, norm2)), s1s2)\n\tnegRho = 1 - torch.pow(rho, 2)\n\texpPart = torch.exp(torch.div(-z, torch.mul(negRho, 2)))\n\tcoef = 2*np.pi*torch.mul(s1s2, torch.sqrt(negRho))\n\tresult = torch.div(expPart, coef)\n\treturn result\n\n\t\ndef __singleLoss(self, normalpdf):\n\t'''\n\tCalculate loss for single time stamp. eq 26\n\tInput: normalpdf (1,n_batch, 20).\n\t'''\n\tepsilon = 1e-20 # floor of loss from mixture density component since initial loss could be zero\n\tmix_den_loss = torch.mul(self.pi, normalpdf)\n\tred_sum_loss = torch.sum(torch.log(mix_den_loss)) # sum for all batch\n\tend_loss = torch.sum(torch.log(torch.mul(self.e, self.et) + torch.mul(1-self.e, 1 - self.et)))\n\ttotal_loss = -red_sum_loss - end_loss\n\t\n\treturn total_loss/self.batch\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Shihan_su" }, { "contents": "Hi, <SCODE>class Loss(torch.nn.Module):\n def __init__(self):\n super(torch.nn.Module, self).__init__()\n\n ... # rest of the __init__ here\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "elanmart" } ]
false
How to do Convolution Backward manually
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "Maybe there were something I missed. Why do you want to do backward propagation without forwarding first? The computational graph is built in the forwarding pass, without which it is not possible to do a backward propagation.", "isAccepted": false, "likes": null, "poster": "chenzhekl" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "chenzhekl" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you for your reply.", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "magz" } ]
false
Test the finetune resnet18 model
null
[ { "contents": "<SCODE># create model\nif args.pretrained:\n print(\"=> using pre-trained model '{}'\".format(args.arch))\n model = models.__dict__[args.arch](pretrained=True)\n #modify the fc layer\n model.fc=nn.Linear(512,100)\nelse:\n print(\"=> creating model '{}'\".format(args.arch))\n model = models.__dict__[args.arch]()\n\nif args.arch.startswith('alexnet') or args.arch.startswith('vgg'):\n model.features = torch.nn.DataParallel(model.features)\n model.cuda()\nelse:\n model = torch.nn.DataParallel(model).cuda()\n\n# optionally resume from a checkpoint\nif args.resume:\n if os.path.isfile(args.resume):\n print(\"=> loading checkpoint '{}'\".format(args.resume))\n checkpoint = torch.load(args.resume)\n args.start_epoch = checkpoint['epoch']\n best_prec1 = checkpoint['best_prec1']\n model.load_state_dict(checkpoint['state_dict'])\n print(\"=> loaded checkpoint '{}' (epoch {})\"\n .format(args.resume, checkpoint['epoch']))\n else:\n print(\"=> no checkpoint found at '{}'\".format(args.resume))\n\ncudnn.benchmark = True\n<ECODE> <SCODE>(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n(relu): ReLU (inplace)\n(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n(downsample): Sequential (\n (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n)\n<ECODE> here is my testing code: <SCODE>import torch\nimport torch.nn as nn\n#from __future__ import print_function\nimport argparse\nfrom PIL import Image\nimport torchvision.models as models\nimport skimage.io\nfrom torch.autograd import Variable as V\nfrom torch.nn import functional as f\nfrom torchvision import transforms as trn\n\n# define image transformation\ncentre_crop = trn.Compose([\n trn.ToPILImage(),\n trn.Scale(256),\n trn.CenterCrop(224),\n trn.ToTensor(),\n trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])\nfilename=r'2780-0-20161221_0001.jpg'\nimg = skimage.io.imread(filename)\nx = V(centre_crop(img).unsqueeze(0), volatile=True)\nmodel = models.__dict__['resnet18']()\nmodel = torch.nn.DataParallel(model).cuda()\ncheckpoint = torch.load('model_best1.pth.tar')\nmodel.load_state_dict(checkpoint['state_dict'])\nbest_prec1 = checkpoint['best_prec1']\nlogit = model(x)\nprint(logit)\nprint(len(logit))\nh_x = f.softmax(logit).data.squeeze()\n<ECODE> anyone can tell me where do i go wrong and how to extrac the last averarge pooling layer features ,thank you so much!", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "<SCODE># define image transformation\ncentre_crop = trn.Compose([\n trn.ToPILImage(),\n trn.Scale(256),\n trn.CenterCrop(224),\n trn.ToTensor(),\n trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])\nfilename=r'2780-0-20161221_0001.jpg'\nimg = skimage.io.imread(filename)\nx = V(centre_crop(img).unsqueeze(0), volatile=True)\n\nmodel = models.__dict__['resnet18']()\nmodel.fc=nn.Linear(512,100)\ncheckpoint = torch.load('model_best1.pth.tar')\nbest_prec1 = checkpoint['best_prec1']\nmodel.load_state_dict(checkpoint['state_dict'])\nmodel = torch.nn.DataParallel(model).cuda()\nlogit = model(x)\nprint(logit)\nprint(len(logit))\nh_x = f.softmax(logit).data.squeeze()\n<ECODE> an error occured ,i have no idea why", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "help wanted,many thanks,do i have add freeze code like this in training <SCODE>for param in model.parameters():\n 
param.requires_grad = False\n<ECODE> and update the optimize sgd", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "my version is torch-0.1.10.post2-cp27-none-linux_x86_64.whl which i download from 14 days ago ,is there any problem with this version,can i update the whl by update command,i belive there is something wrong with my code, but i cant figure it out", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "Can you try this minimal example in your interpreter and see if it changes the layer? In my PyTorch installation it works without problems. <SCODE>import torch.nn as nn\n\nclass M(nn.Module):\n def __init__(self):\n super(M, self).__init__()\n self.m = nn.Linear(2,2)\n def forward(self, x):\n return self.m(x)\n\nm = M()\nprint(m)\n# should be \n# M (\n# (m): Linear (2 -> 2)\n# )\n\nm.m = nn.Linear(3, 3)\nprint(m)\n# should be \n# M (\n# (m): Linear (3 -> 3)\n# )\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "it is same as u did", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "Ok, now try doing the same thing but with resnet <SCODE>from torchvision import models\nimport torch.nn as nn\n\nm = models.resnet18()\nm.fc = nn.Linear(512, 10)\nprint(m) # see if the last layer was modified\n<ECODE> If the last layer is correctly modified, then there is an inconsistency with what you have written in the first message, and we might be missing information to help you further debug your problem", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "<SCODE> # create model\n if args.pretrained:\n print(\"=> using pre-trained model '{}'\".format(args.arch))\n model = models.__dict__[args.arch](pretrained=True)\n #xxxxxxxxxxxxxx to modify resnet 18 the fc layer xxxxxxxxxxxxxx\n model.fc=nn.Linear(512,100)\n else:\n print(\"=> creating model '{}'\".format(args.arch))\n model = models.__dict__[args.arch]()\n\n #for param in model.parameters():\n #param.requires_grad = False\n if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):\n model.features = torch.nn.DataParallel(model.features)\n model.cuda()\n else:\n model = torch.nn.DataParallel(model).cuda()\n\n # optionally resume from a checkpoint\n if args.resume:\n if os.path.isfile(args.resume):\n print(\"=> loading checkpoint '{}'\".format(args.resume))\n checkpoint = torch.load(args.resume)\n args.start_epoch = checkpoint['epoch']\n best_prec1 = checkpoint['best_prec1']\n model.load_state_dict(checkpoint['state_dict'])\n print(\"=> loaded checkpoint '{}' (epoch {})\"\n .format(args.resume, checkpoint['epoch']))\n else:\n print(\"=> no checkpoint found at '{}'\".format(args.resume))\n#xxxxxxxxxxxxxx freeze update xxxxxxxxxxxxxx\n for param in model.parameters():\n param.requires_grad = False\n \n # Replace the last fully-connected layer\n # Parameters of newly constructed modules have requires_grad=True by default\n #model.fc = torch.nn.Linear(512, 3)\n print(model)\n\n cudnn.benchmark = True\n\n # Data loading code\n traindir = os.path.join(args.data, 'train')\n valdir = os.path.join(args.data, 'val')\n normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225])\n\n train_loader = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir, transforms.Compose([\n transforms.RandomSizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,\n ])),\n batch_size=args.batch_size, 
shuffle=True,\n num_workers=args.workers, pin_memory=True)\n\n val_loader = torch.utils.data.DataLoader(\n datasets.ImageFolder(valdir, transforms.Compose([\n transforms.Scale(256),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n normalize,\n ])),\n batch_size=args.batch_size, shuffle=False,\n num_workers=args.workers, pin_memory=True)\n\n # define loss function (criterion) and pptimizer\n criterion = nn.CrossEntropyLoss().cuda()\n #xxxxxxxxxxxxxx try to make sgd only changing fc layer xxxxxxxxxxxxxx\n ignored_params = list(map(id, model.module.fc.parameters()))\n base_params = filter(lambda p: id(p) not in ignored_params,\n model.module.parameters())\n\n\n optimizer = torch.optim.SGD([\n {'params': base_params},\n {'params': model.module.fc.parameters()\n }], args.lr,momentum=args.momentum, weight_decay=args.weight_decay)\n # optimizer = torch.optim.SGD(model.module.fc.parameters(), args.lr,momentum=args.momentum, weight_decay=args.weight_decay)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "<SCODE>for param in model.parameters():\n param.requires_grad = False\nfor param in model.fc.parameters():\n param.requires_grad = True\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "<SCODE>Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/home/public/.local/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 311, in load_state_dict\n .format(name))\nKeyError: 'unexpected key \"module.conv1.weight\" in state_dict'<ECODE>", "isAccepted": false, "likes": null, "poster": "andyhx" }, { "contents": "thank u sir ,i retrain with freeze method ,and get a new model ,loading model is ok now,the classes is right,thank u!", "isAccepted": false, "likes": null, "poster": "andyhx" } ]
false
Gradient w.r.t each sample
null
[ { "contents": "Using model.[layer_name].weight.grad.data I am able to access the gradients for weights of a particular layer. However, I assume that these gradients are averaged across samples in a mini-batch. Is there a way for me to access the gradients of weights for each sample? I was able to obtain per-sample gradients for activations or neurons using register_hook, so not sure what to do about weights. Thanks!", "isAccepted": false, "likes": 2, "poster": "Suraj_Srinivas" }, { "contents": "there is no way to access the gradients wrt weight for each individual sample. Gradients wrt weights in pytorch are always accumulated (even over the mini-batch). If you want gradients wrt each sample, you will have to run each sample individually through the network. This is because both the THNN backend and CuDNN dont support individual sample gradients wrt weight.", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Max of a tensor and a scalar
null
[ { "contents": "There are workarounds, but an easy direct way of doing it would be more concise and probably efficient. Thanks!", "isAccepted": false, "likes": 2, "poster": "vadimkantorov" }, { "contents": "<SCODE>X = th.FloatTensor([\n [-1, 2],\n [2, -1]\n])\nscalar = th.FloatTensor([1])\n\nth.max(X, scalar.expand_as(X))\n<ECODE> gives you <SCODE> 1 2\n 2 1\n[torch.FloatTensor of size 2x2]\n<ECODE> Please keep in mind that I’m playing with PyTorch for few days only, so that may not be the optimal solution.", "isAccepted": false, "likes": 1, "poster": "elanmart" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vadimkantorov" }, { "contents": "<SCODE>torch.clamp(tensor, max=value) # cmin\ntorch.clamp(tensor, min=value) # cmax\n<ECODE> Edited to reflect what vadim said below", "isAccepted": false, "likes": 10, "poster": "smth" }, { "contents": "Seems it’s the other way around: It works but is a little confusing at first.", "isAccepted": false, "likes": null, "poster": "vadimkantorov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Maybe someone is looking for this. Is there a way to do this without that much code? <SCODE>def min_(value1, value2):\n if is_number(value1) and isinstance(value2, torch.Tensor):\n return torch.clamp(value2, max=value1)\n elif is_number(value2) and isinstance(value1, torch.Tensor):\n return torch.clamp(value1, max=value2)\n elif isinstance(value1, torch.Tensor) and isinstance(value2, torch.Tensor):\n return torch.min(value1, value2)\n\n\ndef max_(value1, value2):\n if is_number(value1) and isinstance(value2, torch.Tensor):\n return torch.clamp(value2, min=value1)\n elif is_number(value2) and isinstance(value1, torch.Tensor):\n return torch.clamp(value1, min=value2)\n elif isinstance(value1, torch.Tensor) and isinstance(value2, torch.Tensor):\n return torch.max(value1, value2)\n\nmin_(2, torch.tensor(3))\nmin_(torch.tensor(3), 2)\nmin_(torch.tensor(3), torch.tensor(2))<ECODE>", "isAccepted": false, "likes": null, "poster": "HectorAnadon" } ]
false
Segmentation Example
vision
[ { "contents": "I am trying to find a segmentation example in PyTorch. Do we have any officially blessed implementations that I could review?", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "I don’t think that there are any official implementations available yet, but you can take a look at some of these repos for semantic segmentation implementations using pytorch:", "isAccepted": false, "likes": 3, "poster": "aditya" } ]
false
Wired behavior of resize() function
null
[ { "contents": "Hi, when I call resize of Variable, the underlying data changes as follows:", "isAccepted": false, "likes": null, "poster": "Shihan_su" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "corect . but why resize change values in tensor ? It 's really weired.", "isAccepted": false, "likes": null, "poster": "pengalg" }, { "contents": "For example: <SCODE>>>> m = torch.arange(0, 25).view(5, 5)\n 0 1 2 3 4\n 5 6 7 8 9\n 10 11 12 13 14\n 15 16 17 18 19\n 20 21 22 23 24\n[torch.FloatTensor of size 5x5]\n>>> x = m[:,0]\n 0\n 5\n 10\n 15\n 20\n [torch.FloatTensor of size 5])\n>>> x.resize_(5, 1)\n 0\n 1\n 2\n 3\n 4\n[torch.FloatTensor of size 5x1]\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "colesbury" } ]
false
Is pytorch for beginners in Deep Learning?
null
[ { "contents": "I have background in machine learning but I am a beginner in deep learning. A question for which I have not seen a direct answer anywhere is if the developers want it to be something that beginners get into DL with or this is geared towards deep learning researchers. Also, at the moment there seems to be not as much tutorials available for beginners as other frameworks such as Keras. Though, for me at least, I am finding pytorch to be not as high level as Keras, which is why I am really liking it. This helps me get a sense of what is happening under the hood when I am learning a new concept.", "isAccepted": false, "likes": null, "poster": "nafizh1" }, { "contents": "It’s definitely meant to be accessible to beginners as well as researchers; part of the reason PyTorch is built the way it is so that it doesn’t have as much need for separate abstraction layers aimed at different use cases/user populations like tflearn or Keras. That should also make it easier to figure out how things work (at least until you get to the C backend) or move up/down the abstraction stack.", "isAccepted": false, "likes": 4, "poster": "jekbradbury" } ]
false
There are no graph nodes that require computing gradients
null
[ { "contents": "https://github.com/ShihanSu/sequence-pytorch", "isAccepted": false, "likes": null, "poster": "Shihan_su" }, { "contents": "what is datahp? what does the iterator return? I’m guessing that your problem is that you’re not casting the input and target data Variable type and therefore autograd can not compute the gradients… But again, I’m guessing! Didn’t spend too much time looking at your code.", "isAccepted": false, "likes": null, "poster": "miguelvr" } ]
false
How do I feed higher res images to model zoo models?
vision
[ { "contents": "Can someone please explain what needs to be changed in following model definition to feed it 512x512 or 1024x1024 dimensions? I am new from Keras and a bit lost here. Thank you!", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "The size mismatch is coming from the output of the last avgpool layer feeding into the input of the resnet.fc module. To feed it larger images you will need to strip the existing fully connected layer and replace it with one with a larger input dimension. something like: <SCODE>your_net = models.resnet18(pretrained=True)`\n\nnew_num_features = *something bigger than 512*\n\nyour_net.fc = nn.Linear(new_num_features, 1000)\n<ECODE> This will create untrained parameters, though. To use the pretrained models, I think you are restricted to image sizes below what you are using. this will show you the dimensions of the tensor that is created by whatever images you are using", "isAccepted": false, "likes": 1, "poster": "tymokvo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "FuriouslyCurious" }, { "contents": "Hello, The CNN models like resnet, vgg and alexnet in pytorch are pretrained on which dataset?", "isAccepted": false, "likes": 1, "poster": "raaj043" }, { "contents": "Would you be willing to share your code to do that? Thanks", "isAccepted": false, "likes": null, "poster": "achaiah" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "macaodha" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "tymokvo" }, { "contents": "I’d like to train the Inceptionv3 model with high-resolution images like 512x512. Do you know how to do it?", "isAccepted": false, "likes": null, "poster": "hyy123" } ]
false
Question on batch normalization
null
[ { "contents": "I have three convolution layer: I want to apply batch normalization after the conv layer, do I need three separate batch normalization layers or just a single one in this case? Is this the same as?", "isAccepted": false, "likes": null, "poster": "ZeweiChu" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
Installation error: ‘memcpy’ was not declared in this scope
null
[ { "contents": "When I install the Pytorch package from source, it reports the following problem. My platform is: Ubuntu 16.06 + CUDA 7.5 Note that in Caffe installation, similar problem can be solved in here: By changing the CMakeLists.txt, I wonder whether we have some similar solutions in Pytorch.", "isAccepted": false, "likes": null, "poster": "ywu36" }, { "contents": "Further into this problem I found that it is caused by the version of gcc is too new: A usual way to solve it is to add a flag as: flags=-D_FORCE_INLINES before compiling. Is there any place I could insert this command installing Pytorch ?", "isAccepted": false, "likes": null, "poster": "ywu36" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for reply but the error is still there. I’m trying ‘CFLAGS’ instead of ‘CCFLAGS’.", "isAccepted": false, "likes": null, "poster": "ywu36" }, { "contents": "But still not working. Might because some other problems…", "isAccepted": false, "likes": null, "poster": "ywu36" }, { "contents": "Any update? Even I’m getting the same error.", "isAccepted": false, "likes": null, "poster": "shiningsurya" }, { "contents": "update your CUDA to 8.0", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ken_zhang" } ]
false
AttributeError: cannot assign module before Module.__init__() call
null
[ { "contents": "I am getting the following error. AttributeError: cannot assign module before Module.__init__() call I have a class as follows. <SCODE> args.nlayers, args.dropout)\n<ECODE> So, when I run the following line: question_classifier = QuestionClassifier(corpus.dictionary, embeddings_index, corpus.max_sent_length, args) What I am doing wrong here?", "isAccepted": false, "likes": 3, "poster": "wasiahmad" }, { "contents": "The first thing you should always do when you create a module is call its super constructor… So, your class should look like this: <SCODE>class QuestionClassifier(nn.Module):\n\n def __init__(self, dictionary, embeddings_index, max_seq_length, args):\n \"\"\"\"Constructor of the class\"\"\"\n super(QuestionClassifier, self).__init__()\n\n self.embedding = EmbeddingLayer(len(dictionary), args.emsize, args.dropout)\n self.encoder = EncoderRNN(args.emsize, args.nhid, args.model, args.bidirection, args.nlayers, args.dropout)\n \n self.drop = nn.Dropout(args.dropout)<ECODE>", "isAccepted": false, "likes": 13, "poster": "miguelvr" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "AbdulsalamBande" } ]
false
Unable to load SVHN dataset
vision
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Arash" }, { "contents": "update your torchvision to master via: <SCODE>pip install https://github.com/pytorch/vision/archive/master.zip\n<ECODE> That should fix it.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "tried the method mentioned, still module ‘torchvision.datasets’ has no attribute ‘FashionMNIST’", "isAccepted": false, "likes": 1, "poster": "noeagles" } ]
false
Tensor slice in pytorch?
null
[ { "contents": "Is there any function that behaves similarly to the function of tf.slice in pytorch?", "isAccepted": false, "likes": 1, "poster": "gaoking132" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jekbradbury" }, { "contents": "I just found a similar post regarding using indexing to slice a tensor. So my question is “Is index slicing same as using narrow function?” Thanks", "isAccepted": false, "likes": null, "poster": "gaoking132" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "fmassa" }, { "contents": "Thanks. This is what I mean.", "isAccepted": false, "likes": null, "poster": "gaoking132" }, { "contents": "Please let me know if I am doing something wrong", "isAccepted": false, "likes": null, "poster": "Rohit_Keshari" }, { "contents": "so, how do we slice the tensor become patches using narrow?", "isAccepted": false, "likes": null, "poster": "nicozhou" } ]
false
Tensors are on different GPUS
null
[ { "contents": "<SCODE>img = Variable(img).cuda()\nlabel = Variable(label).cuda()\n<ECODE> <SCODE>Traceback (most recent call last):\n File \"trainer.py\", line 53, in <module>\n outputs = resnet(rmap)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py\", line 92, in forward\n outputs = self.parallel_apply(replicas, scattered, gpu_dicts)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py\", line 102, in parallel_apply\n return parallel_apply(replicas, inputs, kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/parallel_apply.py\", line 50, in parallel_apply\n raise output\nRuntimeError: tensors are on different GPUs\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ycszen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ycszen" }, { "contents": "I met the same problem in my program yesterday. It seems that this error only happens when device_ids[0] is not 0. <SCODE>(Pdb) l\n 29 for param in module.parameters():\n 30 if param in seen_params:\n 31 continue\n 32 seen_params.add(param)\n 33 B param_copies = Broadcast(device_ids)(param)\n 34 -> for param_copy, remap in zip(param_copies, param_remap):\n 35 remap[param] = param_copy\n(Pdb) p param_copies[0].get_device()\n0\n(Pdb) p param_copies[1].get_device()\n3\n(Pdb) p device_ids\n[2, 3]\n<ECODE> I am still working on this and have not got any solutions so far.", "isAccepted": false, "likes": null, "poster": "nicklhy" }, { "contents": "I got the same problem. Is it a bug of PyTorch?", "isAccepted": false, "likes": null, "poster": "melody-rain" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "nicklhy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mehdi-shiba" }, { "contents": "Please change your code to xx.cuda(device=gpus[0]) instead of xx.cuda(device=0).", "isAccepted": false, "likes": 3, "poster": "nicklhy" }, { "contents": "By moving all variable tensors and models into gpus[0](not gpu 0), the first model replica will be on the right device you selected(gpus=[xx, xx, xx]) even though the broadcast function will still ignore it.", "isAccepted": false, "likes": null, "poster": "nicklhy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mehdi-shiba" }, { "contents": "has this been fixed in PyTorch?", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "What’s wrong??", "isAccepted": false, "likes": null, "poster": "hyqneuron" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "stacked.twix" }, { "contents": "In case if I have, what is the solution?", "isAccepted": false, "likes": 1, "poster": "botcs" }, { "contents": "My 2 cents worth. I’m used to creating lists of Conv objects as I did in Keras and placing them into a class attribute. Turns out this doesn’t work if you’re using PyTorch and Data parallel as it has to be added using add_module.", "isAccepted": false, "likes": 3, "poster": "jarrelscy" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "smth" }, { "contents": "Thanks! 
That is a great tip.", "isAccepted": false, "likes": null, "poster": "jarrelscy" }, { "contents": "I am also curious what to do if my model has tensor and variables used in self?", "isAccepted": false, "likes": 1, "poster": "kits" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "furlat" } ]
false
PyTorch example with CNN-based object detection
null
[ { "contents": "Hi guys, I was wondering is there any example or at least pull request in progress regarding a PyTorch example with CNN-based object detection? For instance a short enough code on the COCO detection dataset? I looked in the examples on GitHub but at least I couldn’t find anything similar.", "isAccepted": false, "likes": 3, "poster": "pevogam" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "fmassa" }, { "contents": "switch between meta-architectures (SSD/Faster-RCNN) and base architectures (Resnet/Inception/…) test training tricks such as hard-negatives mining many other tweaks relevant for object detection", "isAccepted": false, "likes": null, "poster": "YossiB" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "citing few more (prolly some better too)", "isAccepted": false, "likes": 4, "poster": "skrish13" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Amir_Rosenfeld" }, { "contents": "@Amir_Rosenfeld", "isAccepted": false, "likes": 1, "poster": "ignacio-rocco" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Amir_Rosenfeld" } ]
false
Resnet101 performs differently at different batch_size
null
[ { "contents": "I use a resnet pretrained model to fine-tune on my own dataset. Everything seems fine, but I found that at the testing phase the result differs depending on the batch_size setting of test_loader. When I change the pretrained model to alexnet, everything goes well and the result is consistent. Why?", "isAccepted": false, "likes": null, "poster": "Rigel_Hsu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Sorry, I forgot it. Thanks for your help, it fixes my problem.", "isAccepted": false, "likes": null, "poster": "Rigel_Hsu" } ]
false
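A note on the thread above: the detail the poster "forgot" is most likely model.eval(). ResNet-101 contains BatchNorm layers, which in training mode normalize with per-batch statistics, so a given image's output depends on what else is in the batch; AlexNet has no BatchNorm, which is why it was unaffected. A sketch of the fix:
<SCODE>import torchvision.models as models

resnet = models.resnet101(pretrained=True)
resnet.eval()  # BatchNorm (and Dropout) now use running statistics,
               # so the result no longer depends on batch_size
<ECODE>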
Monte Carlo dropout and independent dropout masks within mini batches?
null
[ { "contents": "One could parallelize MCD inference by having multiple instances of a given item in a mini-batch. However, in order for that to work the dropout masks have to be independent for all the members of a given mini-batch. Normally it would be faster to re-use a mask across members so I’m curious how it’s done in torch. Thanks.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "In pytorch dropout masks are independent for all the samples in the minibatch (a single mask the size of the whole input is generated).", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "Good to hear. Thanks.", "isAccepted": false, "likes": null, "poster": "mattmacy" } ]
false
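Since masks are sampled independently per batch element, the parallelization the poster describes works directly: tile one input T times along the batch dimension and keep dropout active. A minimal sketch on current PyTorch (the model here is a stand-in, not from the thread):
<SCODE>import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(50, 1))

def mc_dropout_predict(model, x, T=20):
    model.train()                       # keep dropout sampling active at test time
    xs = x.expand(T, x.size(1))         # T copies of the single input row
    preds = model(xs)                   # each row draws its own dropout mask
    return preds.mean(0), preds.std(0)  # predictive mean and spread

mean, spread = mc_dropout_predict(model, torch.randn(1, 10))
<ECODE>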
What does next(self.parameters()).data mean?
null
[ { "contents": "<SCODE>def init_weights(self, bsz):\n \"\"\"Initialize weight parameters for the encoder.\"\"\"\n weight = next(self.parameters()).data\n num_directions = 2 if self.bidirectional else 1\n if self.rnn_type == 'LSTM':\n return (Variable(weight.new(self.n_layers * num_directions, bsz, self.hidden_size).zero_()),\n Variable(weight.new(self.n_layers * num_directions, bsz, self.hidden_size).zero_()))\n else:\n return Variable(weight.new(self.n_layers * num_directions, bsz, self.hidden_size).zero_())\n<ECODE> I have two questions.", "isAccepted": false, "likes": 6, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "Response777" }, { "contents": "<SCODE>class Model(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc = nn.Linear(10,1)\n \nmodel = Model()\nweight = next(model.parameters()).data\n\nprint(weight)\n# Output : tensor([[ 0.0398, 0.0729, -0.2676, 0.2354, -0.0853, 0.1141, 0.0297, 0.0257, \\ \n -0.1303, 0.2208]])\n\nprint(type(weight), weight.device, weight.dtype, weight.requires_grad)\n# Output : (torch.Tensor, device(type='cpu'), torch.float32, False)\n\nvar = weight.new(4,5)\n\nprint(type(var), var.device, var.dtype, var.requires_grad)\n# Output : (torch.Tensor, device(type='cpu'), torch.float32, False)\n\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "manal_elaidouni" } ]
false
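For context, the weight.new(...) idiom exists to create the hidden-state tensors with the same dtype and device as the model's parameters without hard-coding either. On PyTorch 0.4 and later, the same intent is usually written with new_zeros; a sketch, not from the thread:
<SCODE>import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)
weight = next(rnn.parameters())

# same dtype/device as the parameters, zero-filled --
# equivalent to Variable(weight.data.new(...).zero_()) in the old idiom
h0 = weight.new_zeros(2, 4, 16)  # (num_layers * num_directions, batch, hidden)
c0 = weight.new_zeros(2, 4, 16)
<ECODE>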
What is the following code doing?
null
[ { "contents": "The following code is from the SNLI example in Pytorch.", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "xwgeng" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "xwgeng" } ]
false
How to initialize parameters of layers
null
[ { "contents": "", "isAccepted": false, "likes": 3, "poster": "alan_ayu" }, { "contents": "<SCODE>import torch as t\n\nm = t.nn.Conv2d(16, 33, 3, stride=2)\nm.weight.data.normal_(0, 0.01)  # Gaussian-initialize the weights in-place\nm.bias.data.fill_(0)  # zero the biases\n<ECODE> <SCODE>import torch as t\nfrom torch.nn.init import xavier_uniform\n\nm = t.nn.Conv2d(16, 33, 3, stride=2)\nxavier_uniform(m.weight.data)  # Xavier/Glorot uniform initialization\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alan_ayu" } ]
false
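To apply an initialization scheme to every layer of a model at once, nn.Module.apply walks all submodules. A sketch using the current torch.nn.init names (the trailing underscore marks the in-place variants introduced around 0.4):
<SCODE>import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)  # Xavier/Glorot on conv weights
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
model.apply(init_weights)  # runs init_weights on every submodule
<ECODE>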
RNNs: are sorting operations autograd-safe?
null
[ { "contents": "Hi all, I’m coding an application where I want to implement something like this: Both sentences have different lengths. When using variable length inputs, LSTMs in pytorch require the batch to be sorted by length of the sequence. Due to the fact that sentence1 and sentence2 have different sequence lengths, I need to reorder the batch first by sentence1 length order and after that by sentence2 length order, feed it to the RNN, and after that reorder again with the initial batch ordering to feed to the classifier the correct combination of sentence1 and sentence2. My concern is whether those reordering operations can affect the calculation of the autograd gradients, or whether they are safe. Below I put the code of the forward function that I’m using: <SCODE>def sort_batch(data, seq_len): \n \"\"\" Sort the data (B, T, D) and sequence lengths \n \"\"\" \n sorted_seq_len, sorted_idx = seq_len.sort(0, descending=True) \n sorted_data = data[sorted_idx] \n return sorted_data, sorted_seq_len, sorted_idx \n \n \nclass QuestCompNet(nn.Module): \n def __init__(self, feature_sz, hidden_sz, num_layers): \n super(QuestCompNet, self).__init__() \n self.rnn = nn.LSTM(input_size=feature_sz, \n hidden_size=hidden_sz, \n num_layers=num_layers) \n self.l1 = nn.Linear(2*hidden_sz, hidden_sz) \n self.l2 = nn.Linear(hidden_sz, 1) \n self.hidden_size = hidden_sz \n self.num_layers = num_layers \n def forward(self, x1, x2, seq_len1, seq_len2): \n batch_size = x1.size(0) \n assert (x1.size(0) == x2.size(0)) \n # init states of LSTMs \n h1, c1 = self.init_LSTM(batch_size) \n h2, c2 = self.init_LSTM(batch_size) \n \n # sort the batch \n x1_s, seq_len1_s, initial_idx1 = sort_batch(x1, seq_len1) \n x2_s, seq_len2_s, initial_idx2 = sort_batch(x2, seq_len2) \n \n # pack the batch \n x1_s = pack_padded_sequence(x1_s, list(seq_len1_s), batch_first=True) \n x2_s = pack_padded_sequence(x2_s, list(seq_len2_s), batch_first=True) \n # calculate forward pass of the two questions \n out1, h1 = self.rnn(x1_s, (h1, c1)) \n out2, h2 = self.rnn(x2_s, (h2, c2)) \n \n # unpack output \n out1, _ = pad_packed_sequence(out1, batch_first=True) \n out2, _ = pad_packed_sequence(out2, batch_first=True) \n \n # Index of the last output for each sequence. 
\n idx1 = (seq_len1_s-1).view(-1,1).expand(out1.size(0), out1.size(2)).unsqueeze(1) \n idx2 = (seq_len2_s-1).view(-1,1).expand(out2.size(0), out2.size(2)).unsqueeze(1) \n \n # last output of every sequence \n last1 = out1.gather(1, Variable(idx1)).squeeze() \n last2 = out2.gather(1, Variable(idx2)).squeeze() \n \n # restore initial ordering \n last1 = last1[initial_idx1] \n last2 = last2[initial_idx2] \n \n # prepare input for the classification layer \n class_layer = torch.cat((last1, last2), 1) \n class_layer2 = F.relu(self.l1(class_layer)) \n \n return F.sigmoid(self.l2(class_layer2)).squeeze() \n \n def init_LSTM(self, batch_size=16): \n # we get a pointer to parameters to get the type afterwards \n weight = next(self.parameters()).data \n # weight.new construct a new Tensor of the same data type and with \n # the dimensions indicated \n hidden_state = Variable(weight.new(self.num_layers, batch_size, self.hidden_size).zero_()).cuda() \n cell_state = Variable(weight.new(self.num_layers, batch_size, self.hidden_size).zero_()).cuda() \n \n return hidden_state, cell_state \n<ECODE> I call the forward function in this way: <SCODE> X1_batch, X2_batch, seq_len1, seq_len2, y_batch, _ = ds.sample(batch_size) \n X1_batch = Variable(X1_batch).cuda() \n X2_batch = Variable(X2_batch).cuda() \n y_batch = Variable(y_batch).cuda() \n seq_len1 = seq_len1.cuda() \n seq_len2 = seq_len2.cuda() \n<ECODE> Thanks in advance! Jordi", "isAccepted": false, "likes": 5, "poster": "Jordi_de_la_Torre" }, { "contents": "I’m having basically the same problem here. Did you figure out what to do?", "isAccepted": false, "likes": null, "poster": "simono" }, { "contents": "<SCODE>def forward(self, input, lengths, hidden):\n # Sort the input and lengths as the descending order\n lengths, perm_index = lengths.sort(0, descending=True)\n input = input[perm_index]\n\n packed_input = pack(input, list(lengths.data), batch_first=True)\n output, hidden = self.rnn(packed_input, hidden)\n output = unpack(output, batch_first=True)[0]\n\n # restore the sorting\n odx = perm_index.view(-1, 1).unsqueeze(1).expand(output.size(0), output.size(1), output.size(2))\n decoded = output.gather(0, odx)\n return decoded, hidden<ECODE>", "isAccepted": false, "likes": null, "poster": "jpzhou" }, { "contents": "Hey ! Please let me know if someone has figured out this issue ?", "isAccepted": false, "likes": null, "poster": "luffy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "abhishek0318" }, { "contents": "<SCODE> sorted_lengths, sorted_id = lengths.sort(0, descending=True)\n print sorted_lengths\n\n sorted_sorted_id, initial_id = sorted_id.sort(0, descending=False)\n\n sorted_input = input[sorted_id]\n\n print sorted_input\n print sorted_input[initial_id]\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jxzhanggg" } ]
false
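Two points that settle the question above. First, indexing a Variable with an index tensor is an ordinary differentiable autograd operation, so sorting and un-sorting are gradient-safe. Second, the inverse permutation is just the argsort of the sort indices. A sketch of the unsort trick (on recent versions, pack_padded_sequence(..., enforce_sorted=False) does this bookkeeping for you):
<SCODE>import torch

lengths = torch.tensor([3, 5, 4])
sorted_len, sorted_idx = lengths.sort(0, descending=True)
_, unsort_idx = sorted_idx.sort(0)   # inverse permutation

x = torch.randn(3, 5, 7, requires_grad=True)
x_sorted = x[sorted_idx]             # differentiable gather
x_restored = x_sorted[unsort_idx]    # original batch order again
assert torch.equal(x, x_restored)
<ECODE>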
A question about the parameters()
null
[ { "contents": "I read two lines of code as follows", "isAccepted": false, "likes": null, "poster": "blessxu" }, { "contents": "Those 2 lines of code freeze the whole model. If you want to finetune a few layers, you need to get the parameters of those layers and set their ‘requires_grad’ back to True, so that you can finetune those layers.", "isAccepted": false, "likes": null, "poster": "skrish13" } ]
false
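A sketch of the freeze-then-finetune pattern discussed above. Passing only the trainable parameters to the optimizer matters on older versions, which rejected parameters with requires_grad=False:
<SCODE>import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                 # freeze the whole model

model.fc = nn.Linear(model.fc.in_features, 10)  # fresh head: requires_grad=True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable, lr=0.01)       # only the new head is updated
<ECODE>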
How to create a new criterion?
null
[ { "contents": "Can anyone give a simple code example showing how to create a new criterion?", "isAccepted": false, "likes": null, "poster": "Jun_Deng" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "skrish13" } ]
false
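Since the reply above lost its content: a criterion is just a differentiable function of the network output and the target, typically wrapped in an nn.Module. A minimal sketch of a hypothetical weighted MSE (illustrative only, not a built-in loss):
<SCODE>import torch
import torch.nn as nn

class WeightedMSELoss(nn.Module):
    """Mean squared error scaled by a fixed weight (hypothetical example)."""
    def __init__(self, weight):
        super(WeightedMSELoss, self).__init__()
        self.weight = weight

    def forward(self, output, target):
        return (self.weight * (output - target) ** 2).mean()

criterion = WeightedMSELoss(weight=2.0)
out = torch.randn(4, 3, requires_grad=True)
loss = criterion(out, torch.randn(4, 3))
loss.backward()  # autograd differentiates the custom loss like any built-in
<ECODE>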
Tensors as arguments
null
[ { "contents": "Hello, I am new to Pytorch, so sorry for any naiveté or ignorance. I am doing a little bit of experimenting, replicating the functionality of the distributions in tensorflow.contrib. While working on a sample generator for a random normal, I first erroneously tried to pass a torch.Size type to randn, and then, after converting it to a tensor, still received an error. So the args supplied to randn need to be integers. I notice that a lot of tensorflow arguments are tensors. Let’s say I store, or derive through a function call, the shape or size of my normal distribution parameters as regular integers. Is there possibly a performance hit when things get more complex, from having to derive and pass in integers instead of working with tensors as arguments?", "isAccepted": false, "likes": null, "poster": "lbollar" }, { "contents": "<SCODE>x = torch.randn(10, 20)\ny = torch.randn(*x.size()) # should work\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Deep/Sharp Mask or Mask R-CNN
vision
[ { "contents": "Greetings and apologies if this is just a lazy question, but has anyone implemented either of the FAIR deep/sharp mask or the more recent r-cnn papers? Thanks, really enjoying learning about pytorch.", "isAccepted": false, "likes": 3, "poster": "ceberly" }, { "contents": "From what I know, there are no public implementations of these papers in pytorch yet.", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ceberly" }, { "contents": "i’m also thinking of hacking up MaskRCNN (but not deepmask / sharpmask). Maybe we could work off a common repo.", "isAccepted": false, "likes": 6, "poster": "smth" }, { "contents": "I would love that. Not sure if I’m qualified to help but it would be a good learning experience.", "isAccepted": false, "likes": 1, "poster": "ceberly" }, { "contents": "I would be interested, too. Although from the looks of it, it won’t be a trivial project. It will be great to implement the instance segmentation component. I haven’t found that piece in any of the existing implementations of fast/faster RCNN. Everyone seems to stop at object detection. Also the pytorch ecosystem for segmentation is a bit small, so this could be a big win for pytorch getting a state of the art segmentation model into the model zoo.", "isAccepted": false, "likes": 2, "poster": "bfortuner" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Akshay_Deep_Lamba" }, { "contents": "did anyone start this? I tried getting either of these 2 to work, but they don’t work. I would love to be able to do this in pytorch.", "isAccepted": false, "likes": null, "poster": "deepcode" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Tgaaly" }, { "contents": "Any update on a Pytorch implementation of Mask RCNN?", "isAccepted": false, "likes": null, "poster": "tshrjn" }, { "contents": "Commenting for updates", "isAccepted": false, "likes": null, "poster": "Cat" }, { "contents": "Maybe someone can give me a hint on what this modification is for and how it works, or a paper which describes it. Thanks in advance", "isAccepted": false, "likes": null, "poster": "pfis" }, { "contents": "Commenting for updates. Did anyone try this?", "isAccepted": false, "likes": null, "poster": "rohun" }, { "contents": "I’m trying to implement Deepmask with Pytorch; so far I have defined the Joint Loss Function and the model’s learnable parameters. I was working on the training phase, and as the paper says that training must be done in an alternating back-propagation fashion across the two branches, I have written the code for the same. But there is some problem with training: I tried to train the model with a fake dataset, and for minibatches other than the first mini-batch the loss of the model turns out to be NaN. Can somebody help me with this?", "isAccepted": false, "likes": null, "poster": "jvj_iit" } ]
false
Why does torch.cat() turn a cuda Variable into a non-cuda Variable?
null
[ { "contents": "I am trying to run some code and I am getting an error in the following code snippet. <SCODE>combined_representation = torch.cat([self.encoder_hidden_states1[last_time_step_sent1][0],\n self.encoder_hidden_states2[last_time_step_sent2][0]], 1)\nif self.config.cuda:\n combined_representation = combined_representation.cuda()\n\nprint(combined_representation.size()) # prints torch.Size([16, 600])\nscores = self.linear(combined_representation)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
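For the record: on current PyTorch, torch.cat never moves data between devices — if every input is on the GPU, so is the result, and mixing devices raises an error rather than silently falling back to the CPU. In the 0.1.x era of this thread, the usual culprit for a "non-cuda" result was one of the stored hidden states never having been moved to the GPU. A quick sanity check:
<SCODE>import torch

a = torch.randn(16, 300).cuda()
b = torch.randn(16, 300).cuda()
c = torch.cat([a, b], 1)
print(c.is_cuda)  # True -- cat preserves the inputs' device
<ECODE>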
Support for bidirectional_dynamic_rnn?
null
[ { "contents": "Has any work been done on supporting dynamic unrolling of inputs as in TF’s {bidirectional_}dynamic_rnn?", "isAccepted": false, "likes": 1, "poster": "mattmacy" }, { "contents": "<SCODE>#TensorFlow (should be run once, during `__init__`)\ncond = lambda i, h: i < tf.shape(words)[0]\ncell = lambda i, h: rnn_unit(words[i], h)\ni = 0\n_, h = tf.while_loop(cond, cell, (i, h0))\n\n\n#PyTorch (should be run for every batch, during `forward`)\nh = h0\nfor word in words:\n h = rnn_unit(word, h)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "Thanks. The python/ops/rnn.py code is so involved / convoluted I thought that there must be something more going on than that.", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "Also, note that you can use pack_padded_sequence to allow sequences of different lengths inside a minibatch; it’s perfectly fine to use minibatches with different sequence lengths in every call to the LSTM. If your application allows it, using nn.LSTM instead of a manually unrolled nn.LSTMCell can easily give you a 10x speedup.", "isAccepted": false, "likes": null, "poster": "elanmart" }, { "contents": "Can I just pack/pad the sentences in the minibatch and feed that to a BiLSTM?", "isAccepted": false, "likes": null, "poster": "mattmacy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ngimel" } ]
false
What does torch.cuda.set_device() do?
null
[ { "contents": "If it sets the GPU for execution, how can I set multiple GPUs to run my experiment? For example, I want to tell pytorch that it should use two GPUs (if available) to run my experiment. How can I achieve that?", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "xwgeng" }, { "contents": "Can you give an example of how to use multiple GPUs?", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "xwgeng" } ]
false
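To make the distinction concrete: torch.cuda.set_device only changes which GPU new CUDA allocations default to; it does not spread work across GPUs. Running one experiment on two GPUs is typically done with nn.DataParallel, which splits each input batch across the listed devices. A minimal sketch:
<SCODE>import torch
import torch.nn as nn

model = nn.Linear(20, 30).cuda()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1])  # split batches over 2 GPUs

x = torch.randn(64, 20).cuda()
out = model(x)  # 32 samples per GPU; outputs gathered on device_ids[0]
<ECODE>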
torch.nn.DataParallel does not accept None for device_ids in 0.1.11
null
[ { "contents": "I just updated to .11 and I think the api for calling torch.nn.DataParallel has changed. In the previous version if I only had one GPU I would call the function with None passed as the device_ids. Now if I pass None I get the following error: <SCODE> File \"/home/jtremblay/anaconda2/envs/py3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py\", line 96, in data_parallel\n output_device = device_ids[0]\nTypeError: 'NoneType' object is not subscriptable\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "jtremblay" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "meijieru" }, { "contents": "Here is the script I used, along with it’s output: <SCODE>import torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport platform\n\nprint('Python version: ' + platform.python_version())\nprint(torch.__version__)\n\n\nprint('Trying out device_ids=None')\n\nmodel = nn.Linear(20, 30).cuda()\nnet = torch.nn.DataParallel(model, device_ids=None)\n\n\ninp = Variable(torch.randn(128,20).cuda(), requires_grad=True)\nout = net(inp)\nout.backward(torch.ones(out.size()).cuda())\n\nprint('Passed')\n<ECODE> Output: <SCODE>Python version: 3.6.0\n0.1.11+b13b701\nTrying out device_ids=None\nPassed\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "For now, you can do: device_ids = list(range(torch.cuda.device_count()))", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Sorry for the late reply I am travelling. I should have provided an example or do a PR. I have been using the ids from now. It was an easy fix for upgrading my scripts. But thank you so much for your time.", "isAccepted": false, "likes": null, "poster": "jtremblay" }, { "contents": "Hello, I am getting Torch: unable to mmap memory: you tried to mmap 0GB error. I have 12 GB RAM, 1 GPU core and the datasize is 7GB. Ideally it should not give this error. I think i am making mistake in cuda and dataparallel, but unable to figure it out. Attached image contains the details. Please help!!", "isAccepted": false, "likes": null, "poster": "raaj043" } ]
false
[resolved] PySlice_AdjustIndices Error
null
[ { "contents": "I am using python 3.6 on Ubuntu 16.04. I have just updated pytorch via conda. But I got the following error when torch was imported. Please help me figure out how to solve this problem.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "this makes no sense yet, because python 3.6 should have that. Let’s try to figure this out. Can you please paste your output of the following command: <SCODE>ldd /home/sypark/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "He had the same problem. The output is below.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "it would help me if you gave the output of the command: <SCODE>python --version\n<ECODE> Thanks.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "My python version is below. <SCODE>Python 3.6.0 :: Anaconda 4.3.1 (64-bit)\n<ECODE> Also, I updated pytorch with the command <SCODE>conda update pytorch torchvision -c soumith\n<ECODE> Thanks.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "this is helpful, thank you. I’ll try to fix this error by tomorrow in the new binaries.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks for your help!!", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I will check it. Thanks for your help.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "Thanks.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "what is your application? is your code available somewhere to look?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "My application is an Inception-resnetv2 based autoencoder. I would like to share my code to solve this problem, but I am not allowed to do this by the company for a while. I will post the github address if I am allowed. Sorry.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "So what should I do then? How can I solve this?", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "leegang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "conda install pytorch torchvision cuda80 -c soumith @smth", "isAccepted": false, "likes": null, "poster": "leegang" }, { "contents": "that is weird. what is your output of: <SCODE>python --version\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "leegang" }, { "contents": "<SCODE>conda uninstall pytorch\nconda install pytorch -c soumith\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "<SCODE>[ec2-user@ip-172-31-40-200 Notebooks]$ conda install pytorch -c soumith\nFetching package metadata .....\n......\nSolving package specifications: .\n\nPackage plan for installation in environment /home/ec2-user/anaconda3:\n\nThe following NEW packages will be INSTALLED:\n\n pytorch: 0.1.11-py360_4cu80 soumith [cuda80]\n\nProceed ([y]/n)?\ny\n\nec2-user@ip-172-31-40-200 Notebooks]$ conda list | grep pytorch\n pytorch 0.1.11 py360_4cu80 [cuda80] soumith<ECODE>", "isAccepted": false, "likes": null, "poster": "leegang" } ]
false
Train OpenNMT and TypeError: NoneType object is not callable
null
[ { "contents": "Env: python3.5, Pytorch 0.1.10.1 <SCODE> shrink = True\n if shrink:\n src = src[0:100]\n tgt = tgt[0:100]\n<ECODE> then, I ran python preprocess.py -train_src data/src-train.txt -train_tgt data/tgt-train.txt -valid_src data/src-val.txt -valid_tgt data/tgt-val.txt -save_data data/demo Then it rans ok for a while before an error appeared: <SCODE>(dlnd-tf-lab) ->python train.py -data data/demo.train.pt -save_model demo_model\nNamespace(batch_size=64, brnn=False, brnn_merge='concat', curriculum=False, data='data/demo.train.pt', dropout=0.3, epochs=13, extra_shuffle=False, gpus=[], input_feed=1, layers=2, learning_rate=1.0, learning_rate_decay=0.5, log_interval=50, max_generator_batches=32, max_grad_norm=5, optim='sgd', param_init=0.1, pre_word_vecs_dec=None, pre_word_vecs_enc=None, rnn_size=500, save_model='demo_model', start_decay_at=8, start_epoch=1, train_from='', train_from_state_dict='', word_vec_size=500)\nLoading data from 'data/demo.train.pt'\n * vocabulary size. source = 24999; target = 35820\n * number of training sentences. 100\n * maximum batch size. 64\nBuilding model...\n* number of parameters: 58121320\nNMTModel (\n (encoder): Encoder (\n (word_lut): Embedding(24999, 500, padding_idx=0)\n (rnn): LSTM(500, 500, num_layers=2, dropout=0.3)\n )\n (decoder): Decoder (\n (word_lut): Embedding(35820, 500, padding_idx=0)\n (rnn): StackedLSTM (\n (dropout): Dropout (p = 0.3)\n (layers): ModuleList (\n (0): LSTMCell(1000, 500)\n (1): LSTMCell(500, 500)\n )\n )\n (attn): GlobalAttention (\n (linear_in): Linear (500 -> 500)\n (sm): Softmax ()\n (linear_out): Linear (1000 -> 500)\n (tanh): Tanh ()\n )\n (dropout): Dropout (p = 0.3)\n )\n (generator): Sequential (\n (0): Linear (500 -> 35820)\n (1): LogSoftmax ()\n )\n)\n\nTrain perplexity: 29508.9\nTrain accuracy: 0.0216306\nValidation perplexity: 4.50917e+08\nValidation accuracy: 3.57853\n\nTrain perplexity: 1.07012e+07\nTrain accuracy: 0.06198\nValidation perplexity: 103639\nValidation accuracy: 0.944334\n\nTrain perplexity: 458795\nTrain accuracy: 0.031198\nValidation perplexity: 43578.2\nValidation accuracy: 3.42942\n\nTrain perplexity: 144931\nTrain accuracy: 0.0432612\nValidation perplexity: 78366.8\nValidation accuracy: 2.33598\nDecaying learning rate to 0.5\n\nTrain perplexity: 58696.8\nTrain accuracy: 0.0278702\nValidation perplexity: 14045.8\nValidation accuracy: 3.67793\nDecaying learning rate to 0.25\n\nTrain perplexity: 10045.1\nTrain accuracy: 0.0457571\nValidation perplexity: 26435.6\nValidation accuracy: 4.87078\nDecaying learning rate to 0.125\n\nTrain perplexity: 10301.5\nTrain accuracy: 0.0490849\nValidation perplexity: 24243.5\nValidation accuracy: 3.62823\nDecaying learning rate to 0.0625\n\nTrain perplexity: 7927.77\nTrain accuracy: 0.062812\nValidation perplexity: 7180.49\nValidation accuracy: 5.31809\nDecaying learning rate to 0.03125\n\nTrain perplexity: 4573.5\nTrain accuracy: 0.047421\nValidation perplexity: 6545.51\nValidation accuracy: 5.6163\nDecaying learning rate to 0.015625\n\nTrain perplexity: 3995.7\nTrain accuracy: 0.0549085\nValidation perplexity: 6316.25\nValidation accuracy: 5.4175\nDecaying learning rate to 0.0078125\n\nTrain perplexity: 3715.81\nTrain accuracy: 0.0540765\nValidation perplexity: 6197.91\nValidation accuracy: 5.86481\nDecaying learning rate to 0.00390625\n\nTrain perplexity: 3672.46\nTrain accuracy: 0.0540765\nValidation perplexity: 6144.18\nValidation accuracy: 6.01392\nDecaying learning rate to 0.00195312\n\nTrain perplexity: 3689.7\nTrain accuracy: 
0.0528286\nValidation perplexity: 6113.55\nValidation accuracy: 6.31213\nDecaying learning rate to 0.000976562\nException ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x118b19b70>\nTraceback (most recent call last):\n File \"/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/weakref.py\", line 117, in remove\nTypeError: 'NoneType' object is not callable\n<ECODE> Could you tell me how to fix it? Thanks!", "isAccepted": false, "likes": null, "poster": "dl4daniel" }, { "contents": "Hello, Best regards Thomas", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "Thanks a lot, Tom! Thanks again!", "isAccepted": false, "likes": null, "poster": "dl4daniel" } ]
false
Difference between nn.Container and nn.Module?
null
[ { "contents": "Recently, when starting to construct a network module, I got confused: some people inherit from nn.Container while others choose nn.Module. Could someone clear this question up?", "isAccepted": false, "likes": 3, "poster": "liygcheng" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" } ]
false
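For readers landing here later: nn.Container was an early construct whose functionality was merged into nn.Module, and it was subsequently removed, so new code should always subclass nn.Module. A sketch:
<SCODE>import torch.nn as nn

class Net(nn.Module):  # nn.Module now does everything nn.Container did
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)
<ECODE>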
Extract sub region of conv feature map
null
[ { "contents": "Is there any module in pytorch that can extract a sub-region of a conv feature map, such as a spatial transformer (given the affine matrix) or an roi pooling layer (given the window’s 4-tuple coordinates)?", "isAccepted": false, "likes": 2, "poster": "Rigel_Hsu" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "blackyang" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "blackyang" }, { "contents": "the rois are defined as [roi_id, left_coord, top_coord, right_coord, bot_coord] = [roi_id, x1, y1, x2, y2]. I noticed that this is not mentioned in the code, so maybe it can help someone.", "isAccepted": false, "likes": 7, "poster": "simo23" } ]
false
What should I do if the output of my model can’t be the input of the loss function?
null
[ { "contents": "I’m trying to implement a policy gradient method in RL and the output of my model needs some more calculations before computing the loss. What should I do with my output and the loss function in such a case?", "isAccepted": false, "likes": null, "poster": "pointW" }, { "contents": "Maybe posting some code of the calculations you want to do might be helpful for understanding what you want to do?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Here is part of my code: <SCODE> self.steps_done = 0\n self.matches_done = 0\n self.win_count = 0\n\ndef forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.relu(self.conv2(x))\n x = F.relu(self.conv3(x))\n x = F.relu(self.conv4(x))\n x = F.relu(self.conv5(x))\n x = F.relu(self.conv6(x))\n x = F.relu(self.conv7(x))\n x = x.view(x.size(0), -1)\n x = F.softmax(x)\n return x\n<ECODE> <SCODE>output = model(Variable(epstate.type(dtype)))\n\ndiscounted_epr = discount_rewards(epreward)\ndiscounted_epr -= torch.mean(discounted_epr)\ndiscounted_epr /= torch.std(discounted_epr)\n\ndiscounted_epr.resize_(discounted_epr.size()[0], 1)\ndiscounted_epr = discounted_epr.expand(discounted_epr.size()[0], 81)\n\nepy = Variable(epy, requires_grad=False)\ndiscounted_epr = Variable(discounted_epr, requires_grad=False)\n\nloss = (epy - output).mul(discounted_epr).pow(2).mean()\n\noptimizer.zero_grad()\nloss.backward()\noptimizer.step()<ECODE>", "isAccepted": false, "likes": null, "poster": "pointW" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pointW" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "AjayTalati" } ]
false
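The short answer to the thread: nothing special is required. Any differentiable tensor computation placed between the model output and the final scalar loss is tracked by autograd; the only care point is that quantities treated as constants (rewards/advantages) must not require gradients. A stripped-down sketch of the pattern above, with stand-in shapes and a stand-in network:
<SCODE>import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Linear(4, 2)        # stand-in policy network
states = torch.randn(8, 4)
targets = torch.rand(8, 2)      # fake action targets
advantages = torch.randn(8, 1)  # rewards: plain data, no grad required

probs = F.softmax(policy(states), dim=1)
loss = (targets - probs).mul(advantages).pow(2).mean()
loss.backward()                 # gradients flow through the extra math
<ECODE>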
How to repeat a vector batch wise?
null
[ { "contents": "<SCODE>>>> images_vec.size() \ntorch.Size([100, 784])\n<ECODE> <SCODE>torch.Size([100, 784, 28])\n<ECODE> Here’s the batch loading code, <SCODE>import torch \nimport torchvision.datasets as dsets\nimport torchvision.transforms as transforms\n\n# Hyper Parameters\nsequence_length = 28\ninput_size = 28*28\nbatch_size = 100\n\n# MNIST Dataset\ntrain_dataset = dsets.MNIST(root='../data_tmp/',\n train=True, \n transform=transforms.ToTensor(),\n download=True)\n\n# Data Loader (Input Pipeline)\ntrain_loader = torch.utils.data.DataLoader(dataset=train_dataset,\n batch_size=batch_size, \n shuffle=True)\n\nfor i, (images, labels) in enumerate(train_loader):\n #images_repeated = images.view(-1, sequence_length, input_size) \n images_vec = images.view(-1, input_size)\n #images_repeated = torch.randn(100,784,28) # pre-allocate\n images_repeated = torch.randn(batch_size, input_size, sequence_length) # pre-allocate\n #for j in range(sequence_length):\n #images_repeated [,,j] = images_vec\n y_true = labels\n if i == 1:\n break\n\nimages.size()\nimages_vec.size() \n<ECODE> Cheers, Aj", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "OK, to answer my own question, this seems to work: <SCODE>>>> images_repeated = images_vec.repeat(1,sequence_length) \n>>> images_repeated = images_repeated.view(-1,input_size, sequence_length)\n>>> images_repeated.size()\ntorch.Size([100, 784, 28])\n<ECODE> I guess it could be checked by something like this? <SCODE>images_repeated[99,783,0] == images_vec\n<ECODE> It should be all true? I expect there must be a more robust or cleaner way to do all this though?", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "<SCODE>X = torch.randn(100, 700)\nX = X.unsqueeze(2).repeat(1, 1, 28)\n<ECODE> Or you can use expand: this will only create a view, without copying any data. <SCODE>X = torch.randn(100, 700)\nX = X.unsqueeze(2).expand(100, 700, 28)\n<ECODE>", "isAccepted": false, "likes": 14, "poster": "elanmart" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "What I mean is, <SCODE>X = torch.randn(100, 700)\nX_tmp = X.unsqueeze(2).repeat(1, 1, 28)\n\nX[1] == X_tmp[1,:,0] # should be all true - want to check it vectorially\n<ECODE>", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "elanmart" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "AjayTalati" }, { "contents": "If you want to stay in pytorch (e.g. for GPU arrays), you can do (X[1] != X_tmp[1,:,0]).sum()==0 . The caveat is that NaN != NaN (X != X seemed to be easiest way to check for NaN in pytorch a while back). Best regards Thomas", "isAccepted": false, "likes": 2, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "feipan664" }, { "contents": "<SCODE>X = torch.randn(100, 700)\nX = X.unsqueeze(2).expand(-1, -1, 28)  # -1 keeps the existing size of a dimension\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Honghe" } ]
false
Predicting output with a model does not require calling .forward
null
[ { "contents": "I am confused. Thank you.", "isAccepted": false, "likes": 6, "poster": "karlTUM" }, { "contents": "Hi, Best regards Thomas", "isAccepted": false, "likes": 10, "poster": "tom" }, { "contents": "This looks like an idiom that needs to be explicitly mentioned in the docs; it took me some time to find out, and then I searched to find this answer and be sure about what I had understood by guessing.", "isAccepted": false, "likes": null, "poster": "dashesy" } ]
false
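What the (now-empty) replies explained: model(input) invokes nn.Module.__call__, which runs any registered hooks and then dispatches to forward; calling forward directly skips the hooks, which is why the model(input) idiom is preferred. A sketch that makes the dispatch visible:
<SCODE>import torch
import torch.nn as nn

model = nn.Linear(3, 1)
model.register_forward_hook(lambda mod, inp, out: print("hook fired"))

x = torch.randn(2, 3)
y1 = model(x)          # prints "hook fired": __call__ runs hooks, then forward
y2 = model.forward(x)  # silent: bypasses the hook machinery
<ECODE>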
Do any PyTorch AMIs exist on AWS?
null
[ { "contents": "I searched pytorch and didn’t see any.", "isAccepted": false, "likes": null, "poster": "ethancaballero" }, { "contents": "Edit: ofc. if you don’t need gpu, anaconda alone will suffice.", "isAccepted": false, "likes": 1, "poster": "elanmart" }, { "contents": "Ubuntu 16.04 + Python3 + PyTorch ami just appeared:", "isAccepted": false, "likes": null, "poster": "ethancaballero" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mjdietzx" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "i dont know why you have to download CUDA (or especially CuDNN). We already ship both with our binaries. The only thing required is an NVIDIA driver on the machine.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "b0noi" } ]
false
Copying weights from one net to another
null
[ { "contents": "Is there a canonical method to copy weights from one network to another of identical structure?", "isAccepted": false, "likes": 6, "poster": "rbrigden" }, { "contents": "<SCODE>net1.load_state_dict(net2.state_dict())\n<ECODE>", "isAccepted": false, "likes": 17, "poster": "fmassa" }, { "contents": "How does deep copy / canonical copy differ from normal weights loading?", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" }, { "contents": "it saves you from having to do <SCODE>model1 = Model()\nmodel2 = Model()\n\nmodel2.load_state_dict(model1.state_dict())\n<ECODE> and instead you only do <SCODE>model1 = Model()\nmodel2 = copy.deepcopy(model1)\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "fmassa" }, { "contents": "As far as I have seen in the code, “load_state_dict copies only parameters and buffers”. Does deepcopy also copy only _parameters and _buffers, or the hooks as well?", "isAccepted": false, "likes": null, "poster": "shubhamjain0594" }, { "contents": "deep copy will recursively copy every member of an object, so it copies everything", "isAccepted": false, "likes": 2, "poster": "qq456cvb" }, { "contents": "In my experience, if using deepcopy, the optimizer does not work …", "isAccepted": false, "likes": 2, "poster": "spnova12" }, { "contents": "Example: weights_new = k*weights_old + (1-k)*weights_new How can I do this?", "isAccepted": false, "likes": 3, "poster": "Navneet_M_Kumar" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bhushans23" }, { "contents": "Hi, have you found an effective way to do this?", "isAccepted": false, "likes": null, "poster": "D-X-Y" }, { "contents": "I am doing this for each layer. I believe there must be a better method but this works for now.", "isAccepted": false, "likes": 1, "poster": "Navneet_M_Kumar" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Navneet_M_Kumar" }, { "contents": "I think you need to reinitialize the optimizer using the new copied model and then you can copy the optimizer inner values from one to the other, a bit of a mess. I would probably stop at reinitializing the optimizer.", "isAccepted": false, "likes": null, "poster": "roee" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "bhushans23" }, { "contents": "Less messy than what I assumed, good to know! thx!", "isAccepted": false, "likes": null, "poster": "roee" }, { "contents": "Does anyone know how to properly solve the Polyak averaging issue mentioned? The solution mentioned doesn’t work, I believe:", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "<SCODE>def polyak_update(polyak_factor, target_network, network):\n    for target_param, param in zip(target_network.parameters(), network.parameters()):\n        target_param.data.copy_(polyak_factor*param.data + target_param.data*(1.0 - polyak_factor))\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "Navneet_M_Kumar" }, { "contents": "load_state_dict worked for me. copy.deepcopy failed for me with run time error “Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment”", "isAccepted": false, "likes": null, "poster": "Krishna_Garg" } ]
false
RuntimeError: Gradients are not CUDA tensors
null
[ { "contents": "I am getting the following error while experimenting with a simple lstm layer. The forward propagation is fine but while computing loss.backward() I am getting the following error. any suggestions about why I am doing wrong?", "isAccepted": false, "likes": null, "poster": "Rizwan_Parvez" }, { "contents": "Can you post your simple script to see what you are doing wrong?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I’m encountering the same error too. Followed is my train() function. def train(input_batch, input_batch_length, target_batch,target_batch_length, batch_size): <SCODE># Zero gradients of both optimizers\nencoderchar_optimizer.zero_grad()\nencoder_optimizer.zero_grad()\ndecoder_optimizer.zero_grad()\n\nencoder_input = Variable(torch.FloatTensor(len(input_batch),batch_size,500))\n\nfor ix , w in enumerate(input_batch): \n w = w.contiguous().view(15,batch_size)\n reshaped_input_length = [ x[ix] for x in input_batch_length] # [15 ,.. 30 times] * 128\n if USE_CUDA: \n w = w.cuda()\n #reshaped_input_length = Variable(torch.LongTensor(reshaped_input_length )).cuda()\n hidden_all , output = encoderchar(w,reshaped_input_length)\n encoder_input[ix] = output.transpose(0,1).contiguous().view(batch_size,-1)\n if USE_CUDA: \n encoder_input = encoder_input.cuda()\n \ntemporary_target_batch_length = [15] * batch_size\n\n#if USE_CUDA: \n #target_batch_length = Variable(torch.LongTensor(target_batch_length )).cuda()\n\nencoder_hidden_all, encoder_output = encoder(encoder_input, target_batch_length)\ndecoder_input = Variable(torch.LongTensor([SOS_token] * batch_size))\ndecoder_hidden = encoder_output\n\nmax_target_length = max(temporary_target_batch_length)\nall_decoder_outputs = Variable(torch.zeros(max_target_length, batch_size, decoder.output_size))\n\n# Move new Variables to CUDA\nif USE_CUDA:\n decoder_input = decoder_input.cuda()\n all_decoder_outputs = all_decoder_outputs.cuda()\n target_batch = target_batch.cuda()\n ##Added by Satish \n encoder_hidden_all = encoder_hidden_all.cuda()\n encoder_output = encoder_output.cuda()\n decoder_hidden = decoder_hidden.cuda()\n# Run through decoder one time step at a time\nfor t in range(max_target_length):\n decoder_output, decoder_hidden, decoder_attn = decoder(\n decoder_input, decoder_hidden, encoder_hidden_all\n )\n\n all_decoder_outputs[t] = decoder_output\n decoder_input = target_batch[t] # Next input is current target\n if USE_CUDA:\n decoder_input = decoder_input.cuda()\n\nif USE_CUDA:\n all_decoder_outputs = all_decoder_outputs.cuda()\n<ECODE> <SCODE>loss = masked_cross_entropy(\n all_decoder_outputs.transpose(0, 1).contiguous(), # -> batch x seq\n target_batch.transpose(0, 1).contiguous(), # -> batch x seq\n target_batch_length\n)\nloss.backward()\n\n# Clip gradient norms\necc = torch.nn.utils.clip_grad_norm(encoderchar.parameters(), clip)\nec = torch.nn.utils.clip_grad_norm(encoder.parameters(), clip)\ndc = torch.nn.utils.clip_grad_norm(decoder.parameters(), clip)\n\n# Update parameters with optimizers\nencoderchar_optimizer.step()\nencoder_optimizer.step()\ndecoder_optimizer.step()\n\nreturn loss.data[0], ec, dc\n<ECODE> any inputs what I’m doing wrong?", "isAccepted": false, "likes": null, "poster": "cheekala" } ]
false
PackedSequence with DataLoader
null
[ { "contents": "Hey, I’m trying to reproduce some previous work I did with Theano in PyTorch, with RNNs. I want to be able to mask the sequences I pass as input to an RNN based model. This should be easy enough… However, there are a couple of annoying issues that are bugging me: Cheers!", "isAccepted": false, "likes": 2, "poster": "miguelvr" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "What’s the best way to sort / rearrange tensors in PyTorch?", "isAccepted": false, "likes": 4, "poster": "miguelvr" }, { "contents": "is this correct usage? <SCODE>import torch\nx = torch.autograd.Variable(torch.randn(5, 3, 2))\nl = torch.nn.Linear(2, 2)\nr = torch.nn.RNN(2, 2)\n\npx = torch.nn.utils.rnn.pack_padded_sequence(x, [5, 3, 1])\nph = torch.nn.utils.rnn.PackedSequence(l(px.data), px.batch_sizes)\npy, h = r(ph)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ShigekiKarita" }, { "contents": "Yes, that looks right to me.", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "Is this true? It is not working for me since nn.Linear requires attribute ‘dim’.", "isAccepted": false, "likes": 2, "poster": "stefanonardo" }, { "contents": "I’m observing the same behavior. I don’t think linear layers can take packed sequences as inputs.", "isAccepted": false, "likes": null, "poster": "eplu" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "nlgranger" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Moore" }, { "contents": "Is there any better way now to apply a linear layer to a packed sequence?", "isAccepted": false, "likes": null, "poster": "Moore" } ]
false
How to store all hidden representations for each word in a sentence using an LSTM?
null
[ { "contents": "I am facing a problem storing the hidden variables (h and c) while looping through the words in a sentence. I am trying to store the hidden state representation for each word in the sentence and use it for later computation. For example, if we have a sentence with 10 words, I want to store all the hidden states for each word in different Variables. I am using a simple LSTM-based encoder as follows. <SCODE>encoder_output, encoder_hidden = self.encoder(embedded, encoder_hidden)\n<ECODE> RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation. I have spent a significant amount of time tracking down the problem and found the reason, but I am unable to solve it. I have tried using a list of Variables, which should work, but I am still getting the same error. <SCODE>embedded = self.embedding(input_variable)\nencoder_output, encoder_hidden = self.encoder(embedded, (hidden_states[idx], cell_states[idx]))\nhidden_states.append(encoder_hidden[0])\ncell_states.append(encoder_hidden[1])\n<ECODE> I even tried cloning a Variable, but the problem is not resolved. Any help would be much appreciated. Thanks.", "isAccepted": false, "likes": null, "poster": "wasiahmad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wasiahmad" } ]
false
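A pattern that usually sidesteps the in-place error above: rather than feeding the LSTM word by word and stashing (h, c) at every step, run nn.LSTM over the whole sequence once; its first return value already holds the top-layer hidden state for every time step. A sketch with made-up sizes:
<SCODE>import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20)
embedded = torch.randn(7, 1, 10)     # (seq_len, batch, input_size)

outputs, (h_n, c_n) = rnn(embedded)  # outputs: (7, 1, 20) = h_t for every word
per_word = [outputs[t] for t in range(outputs.size(0))]  # one state per word
<ECODE>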
[Solved] Pytorch-example opennmt, ImportError: No module named utils.rnn
null
[ { "contents": "What causes this problem, and how can it be solved? The whl links depend on what you’ve installed before. <SCODE>pip uninstall http://download.pytorch.org/whl/cu80/torch-0.1.9.post2-cp27-none-linux_x86_64.whl \n<ECODE> <SCODE>pip install http://download.pytorch.org/whl/cu80/torch-0.1.10.post2-cp27-none-linux_x86_64.whl \n<ECODE>", "isAccepted": false, "likes": null, "poster": "zeng" } ]
false
CudaTensor Variable
null
[ { "contents": "I tried to wrap autograd.Variable so that the data is sent to the GPU every time I construct a Variable <SCODE>class Variable(autograd.Variable):\n def __init___(self, data, *args, **kwargs):\n data = data.cuda()\n super(Variable, self).__init___(data, *args, **kwargs)\n \n a = torch.randn(1,1) \n print(a)\n print(Variable(a))\n print(Variable(a.cuda()))\n<ECODE> However, I got the output as follows: <SCODE>-0.2344\n[torch.FloatTensor of size 1x1]\n\nVariable containing:\n-0.2344\n[torch.FloatTensor of size 1x1]\n\nVariable containing:\n-0.2344\n[torch.cuda.FloatTensor of size 1x1 (GPU 0)]\n<ECODE> Does anyone get the same problem? Thank you!", "isAccepted": false, "likes": null, "poster": "IouJenLiu" }, { "contents": "Here’s a properly working version of such a wrapper. <SCODE>def Variable(data, *args, **kwargs):\n    return autograd.Variable(data.cuda(), *args, **kwargs)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "It works. Thanks a lot!", "isAccepted": false, "likes": null, "poster": "IouJenLiu" } ]
false
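Worth noting about the failed subclass above: the method name is spelled __init___ with three trailing underscores, so Python never calls it as a constructor override — the base Variable constructor runs unchanged, which is exactly the behavior observed. The plain function wrapper in the accepted answer sidesteps the issue entirely. On modern PyTorch the whole pattern collapses to .to(device):
<SCODE>import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1, 1).to(device)  # Variable is merged into Tensor; just move it
<ECODE>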
ERROR: optimizer got an empty parameter list
null
[ { "contents": "I am trying to build network with a new structure,The model is built with two separated class.When I run it,the error says that :optimizer got an empty parameter list.I am new to PyTorch and I don’t know what causes the error.Can you give me some suggestions?Thank you!", "isAccepted": false, "likes": 3, "poster": "alan_ayu" }, { "contents": "<SCODE>self.myparameters = [Parameter1, Parameter2, ...]\n<ECODE> <SCODE>self.myparameters = nn.ParameterList(Parameter1, Parameter2, ...)\n<ECODE>", "isAccepted": false, "likes": 10, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alan_ayu" }, { "contents": "Hi. I am also facing a similar kind of scenario. I am trying to implement a simple GAN with generator and discriminator being respective classes. <SCODE># Generator Neural Network Model (1 hidden layer)\nclass GNet(nn.Module):\n\n def __init__(self, input_size, hidden_size, image_size):\n super(GNet, self).__init__()\n\n self.model = nn.Sequential(\n nn.Linear(input_size, hidden_size),\n nn.ReLU(),\n nn.Linear(hidden_size, image_size),\n )\n def forward(self, x):\n\n return self.model(x)\n\n# Discriminator Neural Network Model (2 hidden layer)\n\nclass DNet(nn.Module):\n def __init__(self, image_size, hidden_size):\n\n super(DNet, self).__init__()\n self.model = nn.Sequential(\n nn.Linear(image_size, hidden_size),\n nn.ReLU(),\n nn.Linear(hidden_size, hidden_size),\n nn.ReLU(),\n nn.Linear(hidden_size, num_classes),\n )\n\n def forward(self, x):\n return self.model(x)\n\n\nG = GNet(input_size, hidden_size, image_size)\nD = DNet(image_size, hidden_size)\nG_params = G.parameters()\nD_params = D.parameters()\nG_solver = optim.Adam(G_params, lr=1e-3)\nD_solver = optim.Adam(D_params, lr=1e-3)\n<ECODE> I am getting the error message in the line of G_solver: <SCODE>ValueError: optimizer got an empty parameter list\n<ECODE> Why I am I not getting the parameter list even if I am calling D.parameters()", "isAccepted": false, "likes": 1, "poster": "avisek_lahiri" }, { "contents": "Do: <SCODE>G_params = list(G.parameters())\nD_params = list(D.parameters())\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "<SCODE>class Netz ( nn.Module ):\n def _init_(self):\n super ( Netz, self ).__init__ ()\n self.conv1 = nn.Conv2d ( 1, 10, kernel_size=5 )\n self.conv2 = nn.Conv2d ( 10, 20, kernel_size=5 )\n self.conv_dropout = nn.Dropout2d ()\n self.fc1 = nn.Linear ( 320, 60 )\n self.fc2 = nn.Linear ( 60, 10 )\n\n def forward(self, x):\n x = self.conv1 ( x )\n x = F.max_pool2D ( x, 2 )\n x = F.relu ( x )\n x = self.conv2 ( x )\n x = self.conv_dropout ( x )\n x = F.max_pool2D ( x, 2 )\n x = F.relu ( x )\n x = x.view ( -1, 320 )\n x = F.relu ( self.fc1 ( x ) )\n x = self.fc2 ( x )\n return F.log_softmax ( x )\n\nmodel = Netz()\nmodel.cuda()\n\noptimizer = optim.SGD( model.parameters (), lr=0.1, momentum=0.8 )\ndef train(epoch):\n model.train ()\n for batch_id, (data, target) in enumerate ( train_data ):\n data = data.cuda ()\n target = target.cuda ()\n data = Variable ( data )\n target: Variable = Variable ( target )\n optimizer.zero_grad ()\n out = model ( data )\n criterion = F.nll_loss\n loss = criterion ( out, target )\n loss.backward ()\n optimizer.step ()\n print ( 'Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format ( epoch, batch_id * len ( data ),\n len ( train_data.dataset ),\n 100. 
* batch_id / len ( train_data ),\n loss.data[0] ) )\n for epoch in range ( 1, 30 ):\n train ( epoch )\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Insane" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Insane" }, { "contents": "I am having the same issue with a different architecture. Specifically, I am trying to build a stacked LSTM, where each layer may have a different number of units. It would be very convenient to pass a list of layer sizes as an argument and have the same function build the network as appropriate. Every attempt has failed: \nThe most obvious way is to build lists of ‘torch.nn.LSTM’ and hidden layers, and manage them directly. This fails, I think, because they are “hidden” in a list, even though defined in the init routine.\n \nSimilarly, but for completeness, building a dict of layers with integer keys failed, for what I believe is the same reason.\n \nA rather horrible attempt to use ‘vars(self)’ to force-construct meaningful variable names failed in a more complicated way, and honestly, it’s a terrible idea anyway.\n What is the accepted way to do what I’m trying to do, without re-implementing the whole LSTM architecture myself? Is there a way to force the registration of the LSTM objects with the parameters list?", "isAccepted": false, "likes": null, "poster": "Novak" }, { "contents": "I made the same error with you, so embarrassed…", "isAccepted": false, "likes": null, "poster": "Shikai_Qiu" }, { "contents": "Hi, <SCODE>self.myparameters = nn.ParameterList(self.w, self.out_w1, self.out_b1, self.out_w2, self.out_b2)\n<ECODE> <SCODE>self.w = torch.randn((NUM_INPUT, NUM_HIDDEN), requires_grad=True)\n<ECODE> Error Message is as follows; <SCODE>TypeError Traceback (most recent call last)\n<ipython-input-75-31d214f55eb8> in <module>()\n 33 return out2\n 34 \n---> 35 model = Model().to(device)\n 36 optimizer = optim.SGD(model.parameters(), lr=lr)\n 37 criterion = nn.CrossEntropyLoss()\n\n<ipython-input-75-31d214f55eb8> in __init__(self)\n 24 self.out_b2= torch .randn((NUM_CLASSES), requires_grad=True)\n 25 \n---> 26 self.myparameters = nn.ParameterList(self.w, self.out_w1, self.out_b1, self.out_w2, self.out_b2)\n 27 \n 28 def forward(self, x):\n\nTypeError: __init__() takes from 1 to 2 positional arguments but 6 were given\n<ECODE> What does the message mean? 
and how to solve?", "isAccepted": false, "likes": null, "poster": "111137" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "liyz15" }, { "contents": "<SCODE>class Model(nn.Module):\n def __init__(self):\n super(Model, self).__init__()\n \n # Gate-Weight\n self.w = torch.randn((NUM_INPUT, NUM_HIDDEN), requires_grad=True)\n \n # Gate-Selector\n self.sel = torch.zeros(NUM_INPUT, NUM_HIDDEN)\n \n # Input Vector\n self.fw_x = torch.zeros(NUM_INPUT, NUM_HIDDEN)\n \n # Output Vector\n self.fw_h = torch.zeros(NUM_INPUT, NUM_HIDDEN)\n \n # Output Layer\n self.out_w1= torch.randn((NUM_HIDDEN), requires_grad=True)\n self.out_b1= torch.randn((NUM_INPUT), requires_grad=True)\n self.out_w2= torch.randn((NUM_CLASSES, NUM_INPUT), requires_grad=True)\n self.out_b2= torch.randn((NUM_CLASSES), requires_grad=True)\n \n #self.myparameters = nn.ParameterList([self.w, self.out_w1, self.out_b1, self.out_w2, self.out_b2])\n\n def forward(self, x):\n fw_prop(self, x)\n out1 = torch.matmul(self.fw_h, self.out_w1) + self.out_b1\n out2 = torch.matmul(self.out_w2, out1) + self.out_b2\n\n return out2\n \nmodel = Model().to(device)\noptimizer = optim.SGD(model.parameters(), lr=lr)\ncriterion = nn.CrossEntropyLoss()\n<ECODE> Then; <SCODE>---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n<ipython-input-12-c94710f76e27> in <module>()\n 36 model = Model().to(device)\n 37 #model.myparameters = torch.nn.Parameter(model.w, model.out_w1, model.out_b1, model.out_w2, model.out_b2)\n---> 38 optimizer = optim.SGD(model.parameters(), lr=lr)\n 39 criterion = nn.CrossEntropyLoss()\n\n1 frames\n/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in __init__(self, params, defaults)\n 43 param_groups = list(params)\n 44 if len(param_groups) == 0:\n---> 45 raise ValueError(\"optimizer got an empty parameter list\")\n 46 if not isinstance(param_groups[0], dict):\n 47 param_groups = [{'params': param_groups}]\n\nValueError: optimizer got an empty parameter list\n<ECODE> So I did both of <SCODE>ParameterList(...)\n<ECODE> and <SCODE>ParameterList([...])\n<ECODE> Last one has error of; <SCODE>---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n<ipython-input-13-87872cf3c699> in <module>()\n 34 return out2\n 35 \n---> 36 model = Model().to(device)\n 37 optimizer = optim.SGD(model.parameters(), lr=lr)\n 38 criterion = nn.CrossEntropyLoss()\n\n4 frames\n<ipython-input-13-87872cf3c699> in __init__(self)\n 25 self.out_b2= torch.randn((NUM_CLASSES), requires_grad=True)\n 26 \n---> 27 self.myparameters = nn.ParameterList([self.w, self.out_w1, self.out_b1, self.out_w2, self.out_b2])\n 28 \n 29 def forward(self, x):\n\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in __init__(self, parameters)\n 360 super(ParameterList, self).__init__()\n 361 if parameters is not None:\n--> 362 self += parameters\n 363 \n 364 def _get_abs_string_index(self, idx):\n\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in __iadd__(self, parameters)\n 389 \n 390 def __iadd__(self, parameters):\n--> 391 return self.extend(parameters)\n 392 \n 393 def __dir__(self):\n\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in extend(self, parameters)\n 416 offset = len(self)\n 417 for i, param in enumerate(parameters):\n--> 418 self.register_parameter(str(offset + i), param)\n 419 return self\n 420 
\n\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in register_parameter(self, name, param)\n 155 raise TypeError(\"cannot assign '{}' object to parameter '{}' \"\n 156 \"(torch.nn.Parameter or None required)\"\n--> 157 .format(torch.typename(param), name))\n 158 elif param.grad_fn:\n 159 raise ValueError(\n\nTypeError: cannot assign 'torch.FloatTensor' object to parameter '0' (torch.nn.Parameter or None required)\n<ECODE> The relevant iteration is: <SCODE> 417 for i, param in enumerate(parameters):\n--> 418 self.register_parameter(str(offset + i), param)\n<ECODE> Is my understanding correct?", "isAccepted": false, "likes": null, "poster": "111137" }, { "contents": "<SCODE> self.fc1 = nn.Linear(NUM_HIDDEN, 1)\n self.fc2 = nn.Linear(NUM_INPUT, NUM_CLASSES)\n<ECODE> with a transpose, then there is no error, so probably the error is about a shape.", "isAccepted": false, "likes": null, "poster": "111137" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "David_Whyte" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "David_Whyte" }, { "contents": "How can I solve it? Thank you.", "isAccepted": false, "likes": null, "poster": "Erun" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moaz_taha" }, { "contents": "Easy mistake to make, but it took me a while to figure out", "isAccepted": false, "likes": null, "poster": "mikey_t" }, { "contents": "Hello, I am also facing the same issue. Can anyone please help me? <SCODE> self.relu = nn.ReLU()\n    \ndef forward(self, x):\n    out = self.l1(x)\n    out = self.relu(out)\n    out = self.l2(out)\n    out = self.relu(out)\n    out = self.l3(out)\n    #no activation and no softmax\n    return out\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Rafay" } ]
false
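The recurring theme in this thread: Module.parameters() only discovers attributes that are nn.Parameter instances or registered submodules. Raw tensors must be wrapped in nn.Parameter, lists of layers must go through nn.ModuleList (which also answers the stacked-LSTM question above), and nn.ParameterList takes a single Python list, not several positional arguments. A sketch covering all three, with made-up sizes:
<SCODE>import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # raw tensor -> wrap it, so parameters() can find it
        self.w = nn.Parameter(torch.randn(10, 20))
        # list of layers -> ModuleList, so each layer is registered
        self.layers = nn.ModuleList([nn.Linear(20, h) for h in (32, 64)])
        # list of extra tensors -> ParameterList takes ONE list argument
        self.extras = nn.ParameterList([nn.Parameter(torch.zeros(5))])

net = Net()
assert len(list(net.parameters())) > 0  # the optimizer will no longer complain
<ECODE>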
Illegal memory access in encoder-decoder network
null
[ { "contents": "I am implementing an encoder-decoder network in which the encoder takes a 5d input and compresses it to a 4d output, while the decoder takes a 4d input and up-samples to a 4d output. There are shortcut connections which pass 4d slices of encoder feature maps to the decoder. There are also 4d slices of MaxPool indices passed from encoder to decoder. I am getting “illegal memory access” errors. Here is a minimal example: <SCODE>import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.autograd import Variable\nimport torch.utils.data\n\n#---------------------------------------------------------------------------\n# define encoder-decoder network and optimizer\n\nclass encoder_decoder(nn.Module):\n def __init__(self):\n super().__init__()\n self.enc_conv1 = nn.Conv3d(1,8, kernel_size=(1,3,3), stride=1, padding=(0,1,1))\n self.pool = nn.MaxPool3d(kernel_size=(1,2,2), stride=(1,2,2), return_indices=True)\n self.enc_conv2 = nn.Conv3d(8,8, kernel_size=(3,1,1), stride=1, padding=0)\n \n self.dec_conv1 = nn.Conv2d(8,8, kernel_size=3, stride=1, padding=1)\n self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)\n self.dec_conv2 = nn.Conv2d(16,1, kernel_size=3, stride=1, padding=1)\n\n def forward(self, net_input):\n out = net_input\n out = self.enc_conv1(out) # 1x3x64x64 -> 8x3x64x64\n shortcut = out[:,:,1] # reference to center slice of out\n out, indices = self.pool(out) # 8x3x64x64 -> 8x3x32x32\n indices = indices[:,:,1] # reference to center slice of indices\n out = self.enc_conv2(out) # 8x3x32x32 -> 8x1x32x32\n \n out = out.squeeze(2) # 8x1x32x32 -> 8x32x32\n out = self.dec_conv1(out) # 8x32x32 -> 8x32x32\n out = self.unpool(out, indices) # 8x32x32 -> 8x64x64\n out = torch.cat((shortcut,out),1) # 8x64x64 -> 16x64x64\n out = self.dec_conv2(out) # 16x64x64 -> 1x64x64\n \n return out\n\nnet = encoder_decoder()\nnet.cuda()\n\ncriterion = nn.MSELoss()\ncriterion.cuda()\n\noptimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.5)\n\n#----------------------------------------------------------------------------\n# define dataset and dataloader\n\nclass create_dataset(torch.utils.data.Dataset):\n def __init__(self):\n self.data = [ (torch.rand(1,3,64,64),\n torch.rand(1,64,64)) for i in range(100) ]\n\n def __getitem__(self, idx):\n return self.data[idx]\n\n def __len__(self):\n return len(self.data)\n\ndataset = create_dataset()\nprint('Loaded ' + str(len(dataset)) + ' training examples')\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True, num_workers=8)\n\n#----------------------------------------------------------------------------\n# training loop\n\nprint('Start training loop')\nfor epoch in range(4):\n\t\n print('Epoch: ' + str(epoch))\n net.train()\n\n for training_idx, (input_batch,target_batch) in enumerate(dataloader):\n print('Training batch: ' + str(training_idx))\n input_batch = Variable(input_batch.cuda())\n target_batch = Variable(target_batch.cuda())\n\n optimizer.zero_grad()\n output_batch = net(input_batch)\n err = criterion(output_batch, target_batch)\n err.backward()\n optimizer.step()\n<ECODE> Here is the output: <SCODE>$ CUDA_LAUNCH_BLOCKING=1 python error.py \nLoaded 100 training examples\nStart training loop\nEpoch: 0\nTraining batch: 0\nTraining batch: 1\nTHCudaCheck FAIL file=/py/conda-bld/pytorch_1490895093647/work/torch/lib/THCUNN/generic/SpatialMaxUnpooling.cu line=43 error=77 : an illegal memory access was encountered\nTraceback (most recent call last):\n File \"error.py\", line 78, in <module>\n 
output_batch = net(input_batch)\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 206, in __call__\n result = self.forward(*input, **kwargs)\n File \"error.py\", line 31, in forward\n out = self.unpool(out, indices) # 8x32x32 -> 8x64x64\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 206, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/modules/pooling.py\", line 304, in forward\n self.padding, output_size)\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py\", line 277, in max_unpool2d\n return f(input, indices)\n File \"/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/nn/_functions/thnn/pooling.py\", line 177, in forward\n self.output_size[1], self.output_size[0])\nRuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /py/conda-bld/pytorch_1490895093647/work/torch/lib/THCUNN/generic/SpatialMaxUnpooling.cu:43\n<ECODE>", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "Hi Alex, Now if that does not happen, I am thinking it might be a CUDA Unpooling bug, and I want to spend a few hours tracking this down. If you can help me by giving me a script that I can run (with dummy data and stuff) that reproduces this issue, I’m happy to run it and fix the issue within a week or less.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "<SCODE>pool3d = nn.MaxPool3d(kernel_size=2,stride=2,return_indices=True)\nimg3d = Variable(torch.rand(1,1,4,4,4))\nout, indices = pool3d(img3d)\nprint(indices)\n<ECODE> <SCODE>Variable containing:\n(0 ,0 ,0 ,.,.) = \n 4.6117e+18 -9.2234e+18\n 3.0065e+10 4.2950e+09\n\n(0 ,0 ,1 ,.,.) = \n 6.5536e+04 8.5213e+14\n -3.4588e+18 4.6117e+18\n[torch.LongTensor of size 1x1x2x2x2]\n<ECODE> Second, and this is the really “weird behavior” stuff, my minimal example stops producing an error when I change these lines of code: <SCODE>input_batch = Variable(input_batch.cuda())\ntarget_batch = Variable(target_batch.cuda())\n<ECODE> to this: <SCODE>input_batch = Variable(input_batch)\ntarget_batch = Variable(target_batch)\ninput_batch = input_batch.cuda()\ntarget_batch = target_batch.cuda()\n<ECODE> I’m not sure what to make of that. When I run the code on the CPU, I get an error complaining about the pooling indices, which makes sense. However, when running on the GPU, I get no error at all, as long as I move the inputs and targets to the GPU in a very particular way (otherwise I get an illegal memory access error). That’s dangerous behavior, because I wouldn’t have even noticed that the pooling indices are all messed up if I had written my code in a slightly different way. (Another seemingly random thing is that removing the shortcut connection also eliminates the error, so this is all very weird.) Finally, for future reference, regarding the correctness of my minimal example, I need to be more careful when slicing the pooling indices. The pooling indices are supposed to refer to coordinates in the input Tensor given in flattened spatial coordinates. 
So, what I really should have written is: <SCODE>pre_pool_shape = out.size()[2:]\nout, indices = self.pool(out)\nindices = indices[:,:,pre_pool_shape[0]//2] # only want the center slice\nindices %= pre_pool_shape[1]*pre_pool_shape[2] # flattened coords for the 2d slice\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "abweiss" }, { "contents": "Moving the call to cuda() out of the declaration of the Variable solved it for me, too. And I’m using the latest version of pytorch.", "isAccepted": false, "likes": null, "poster": "Amir_Rosenfeld" } ]
false
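A minimal, self-contained sketch of the index-slicing fix described at the end of the thread, using current PyTorch idioms (plain tensors rather than Variables); the module setup and shapes mirror the post, the rest is illustrative:
<SCODE>
import torch
import torch.nn as nn

pool = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(8, 8, 3, 64, 64)                     # N x C x D x H x W
pre_pool_shape = x.size()[2:]                        # (3, 64, 64)
out, indices = pool(x)                               # N x C x 3 x 32 x 32

center = out[:, :, pre_pool_shape[0] // 2]           # N x C x 32 x 32
idx = indices[:, :, pre_pool_shape[0] // 2]          # flat indices into D*H*W
idx = idx % (pre_pool_shape[1] * pre_pool_shape[2])  # re-base to flat H*W coords

restored = unpool(center, idx)                       # N x C x 64 x 64
print(restored.shape)
<ECODE>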
Skip layer in pyTorch
null
[ { "contents": "Thank you all for the help in this fabulous forum. I was a Torch user, and new to pytorch. Right now I want to do something like “skip connection”. Below is the code I used in torch. How can I make a “skip connection” in pytorch <SCODE>main = nn.Sequential()\n...\nlocal conc = nn.ConcatTable()\nlocal conv = nn.Sequential()\nconv:add(SpatialConvolution(...))\nconc:add(nn.Identity())\nconc:add(conv)\nmain:add(conc)\nmain:add(nn.CAddTable())<ECODE>", "isAccepted": false, "likes": 3, "poster": "wzhang35" }, { "contents": "Assuming proper padding for compatible sizes - <SCODE>input = Variable(torch.Tensor(...))\nconv_out =self.conv(input)\nout = conv_out + input\n\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "DiffEverything" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "mattmacy" } ]
false
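For readers translating the ConcatTable/CAddTable idiom above, a hedged sketch of one way to package an additive skip connection as a module (SkipBlock is an illustrative name, not an existing API):
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipBlock(nn.Module):
    """output = input + conv(input); padding keeps spatial size so the add is valid."""
    def __init__(self, channels):
        super(SkipBlock, self).__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + F.relu(self.conv(x))

block = SkipBlock(16)
y = block(torch.randn(2, 16, 32, 32))  # same shape as the input
print(y.size())
<ECODE>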
Torch.transpose is too slow on GPU, slower than CPU
null
[ { "contents": "<SCODE>import torch\nimport time\nfrom torch.autograd import Variable\n\nx = Variable(torch.randn(100,500))\ncputimes = []\nfor sampl in (1000, 10000, 100000, 1000000):\n start = time.time()\n for i in range(sampl):\n y = torch.transpose(x,0,1)\n end = time.time()\n cputimes.append(end-start)\nprint(cputimes)\n\nx = x.cuda(device_id=2)\ngputimes = []\nfor sampl in (1000, 10000, 100000, 1000000):\n start = time.time()\n for i in range(sampl):\n y = torch.transpose(x,0,1)\n end = time.time()\n gputimes.append(end-start)\nprint(gputimes)\n<ECODE> test2: <SCODE>In [16]: torch.cuda.set_device(2)\n\nIn [17]: %timeit torch.transpose(torch.FloatTensor(20,100),1,0)\nThe slowest run took 26.26 times longer than the fastest. This could mean that an intermediate result is being cached.\n100000 loops, best of 3: 1.72 µs per loop\n\nIn [18]: %timeit torch.transpose(torch.cuda.FloatTensor(20,100),1,0)\nThe slowest run took 21.21 times longer than the fastest. This could mean that an intermediate result is being cached.\n100000 loops, best of 3: 3.05 µs per loop<ECODE>", "isAccepted": false, "likes": null, "poster": "Kyle" }, { "contents": "It case my model run very slowly,how can I solve it", "isAccepted": false, "likes": null, "poster": "Kyle" }, { "contents": "Hi, This code sample is slow only because of the python loop which calls c functions. To make it faster, you need to find a way to remove this loop. If in your case, you want to transpose a bunch of matrices, you could for example stack them in a single tensor and then call transpose on this tensor.", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "<SCODE>class NoOp(torch.autograd.Function):\n def __init__(self):\n super().__init__()\n \n def forward(self, x):\n return x\n\ndef print_times(x, func, msg):\n start = time.time()\n for i in range(1000000):\n _ = func(x)\n t = time.time() - start\n print(\"{}: {:.5f}\".format(msg, t))\n\ntensor = torch.randn(100, 500)\nndarray = tensor.numpy()\nvariable = Variable(tensor)\n \nprint_times(tensor, lambda x: x, \"Python noop\")\nprint_times(ndarray, lambda x: x.transpose(), \"Numpy transpose\")\nprint_times(tensor, lambda x: x.t(), \"Torch transpose\")\nprint_times(variable, lambda x: NoOp()(x), \"Autograd noop\")\nprint_times(variable, lambda x: x.t(), \"Autograd transpose\")\n\n# output:\n#\n# Python noop: 0.07554\n# Numpy transpose: 0.23783\n# Torch transpose: 0.49813\n# Autograd noop: 1.95098\n# Autograd transpose: 3.72835\n<ECODE>", "isAccepted": false, "likes": null, "poster": "elanmart" }, { "contents": "Actually I changed my code and remove the transpose op, but the model still run slower on GPU than CPU, here is some of my model code . <SCODE>x = x.unsqueeze(1)\nx0 = F.relu(self.conv0_0(x)).squeeze(3)\nx1 = F.relu(self.conv0_1(x)).squeeze(3)\nx = torch.cat((x0,x1),1)\n\nx = x.unsqueeze(1)\nx0 = F.relu(self.conv1_0(x)).squeeze(2)\nx1 = F.relu(self.conv1_1(x)).squeeze(2)\n\nx0 = F.max_pool1d(x0, x0.size(2)).squeeze(2)\nx1 = F.max_pool1d(x1, x1.size(2)).squeeze(2)\nx = torch.cat((x0,x1),1)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Kyle" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" } ]
false
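To illustrate albanD's point: transpose only swaps strides, so no data is moved, and the benchmarks above largely measure per-call Python and dispatch overhead, which is higher for CUDA tensors. A sketch of the suggested batching alternative (sizes are illustrative):
<SCODE>
import torch

mats = [torch.randn(100, 500) for _ in range(1000)]

# One stack and one transpose instead of a million Python-level calls.
stacked = torch.stack(mats)           # 1000 x 100 x 500
transposed = stacked.transpose(1, 2)  # 1000 x 500 x 100, still a view
print(transposed.is_contiguous())     # False; call .contiguous() only if a kernel needs it
<ECODE>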
Problem using softmax
null
[ { "contents": "here is the Variable before softmax after softmax, the biggest one of them became 1 and others became 0 my forward function <SCODE>def forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.relu(self.conv2(x))\n x = F.relu(self.conv3(x))\n x = F.relu(self.conv4(x))\n x = F.relu(self.conv5(x))\n x = F.relu(self.conv6(x))\n x = F.relu(self.conv7(x))\n x = x.view(x.size(0), -1)\n x = F.softmax(x)\n return x\n<ECODE> is there something wrong in my usage of softmax?", "isAccepted": false, "likes": null, "poster": "pointW" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": 4, "poster": "tom" }, { "contents": "the number is too large,its exp overflow. I I think you should add batchnorm layer before convolution layer.", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pointW" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pointW" } ]
false
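A small sketch of the saturation described in the thread and the usual remedy of working in log space (current PyTorch API; values chosen only for illustration):
<SCODE>
import torch
import torch.nn.functional as F

logits = torch.tensor([[20.0, 1.0, -3.0]])

# With large pre-softmax activations, softmax saturates: ~1 for the max, ~0 elsewhere.
print(F.softmax(logits, dim=1))

# log_softmax is numerically stable and pairs with nn.NLLLoss for training.
print(F.log_softmax(logits, dim=1))
<ECODE>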
Strange output of torch.max()
null
[ { "contents": "<SCODE>_, max_probs = torch.max(output_var, 2)\nprint output_var.size()\nprint max_probs.size()\nprint torch.max(max_probs)\n<ECODE> the outputs: <SCODE>(10L, 26L. 37L)\n(10L. 26L, 1L)\n37\n<ECODE> So size of output_var is (10L, 26L. 37L) and with <SCODE>_, max_probs = torch.max(output_var, 2)\n<ECODE> It is a bug of pytorch?", "isAccepted": false, "likes": null, "poster": "melody-rain" }, { "contents": "Hi Melody, <SCODE>import torch\nfrom torch.autograd import Variable\n\noutput_var = Variable(torch.randn(10, 26, 37).cuda())\n\n_, max_probs = torch.max(output_var, 2)\nprint output_var.size()\nprint max_probs.size()\n\nfor i in range(1000):\n assert torch.max(max_probs).data[0] == 36\n<ECODE> <SCODE>nvidia-smi\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
How do I know whether pytorch is built with magma-cuda80?
null
[ { "contents": "<SCODE>---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-6-9c860559505b> in <module>()\n----> 1 torch.svd(torch.rand(3,3).cuda())\n\nRuntimeError: No CUDA implementation of 'gesvd'. Install MAGMA and rebuild cutorch (http://icl.cs.utk.edu/magma/) at /data/wanggu/software/pytorch/torch/lib/THC/generic/THCTensorMathMagma.cu:280\n<ECODE>", "isAccepted": true, "likes": null, "poster": "wangg12" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "smth" }, { "contents": "I set the cmake prefix to which conda,do I need to set it again when I rebuild pytorch?", "isAccepted": true, "likes": null, "poster": "wangg12" }, { "contents": "export CMAKE_PREFIX_PATH=\"$(dirname $(which conda))/../\"", "isAccepted": true, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "wangg12" } ]
true
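For a quick check without rebuilding, recent binaries expose a flag reporting MAGMA support, and the CUDA SVD call itself can be probed (a sketch; assumes a CUDA-enabled build):
<SCODE>
import torch

print(torch.cuda.has_magma)  # True only if the binary was built against MAGMA

try:
    torch.svd(torch.rand(3, 3).cuda())
    print('CUDA SVD works')
except RuntimeError as err:
    print('MAGMA missing:', err)
<ECODE>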
The best way to concatenate final hidden of two networks in the bidirectional case
null
[ { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "VladislavPrh" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" } ]
false
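For the problem in the title, a common pattern is to concatenate the last layer's forward and backward final hidden states of a bidirectional RNN — a sketch with illustrative sizes, since the thread itself gives no code:
<SCODE>
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=10, hidden_size=20, bidirectional=True, batch_first=True)
x = torch.randn(4, 7, 10)                  # batch x seq x features
_, h_n = rnn(x)                            # (num_layers * 2) x batch x hidden

h_fwd, h_bwd = h_n[-2], h_n[-1]            # last layer, forward and backward directions
h_cat = torch.cat([h_fwd, h_bwd], dim=1)   # batch x 2*hidden
print(h_cat.size())                        # torch.Size([4, 40])
<ECODE>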
Neural Style Transfer demo in Pytorch
vision
[ { "contents": "Hi, I am happy for any feedback!", "isAccepted": false, "likes": 5, "poster": "leongatys" }, { "contents": "Thanks for sharing Leon! and thanks for creating Neural Style.", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
Slicing input Variables and backpropagation
null
[ { "contents": "Thank you in advance!", "isAccepted": false, "likes": 2, "poster": "ckanbak" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "Yes, but in this case I have to pass the new Variable from the network once more, right?", "isAccepted": false, "likes": 1, "poster": "ckanbak" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vitchyr" }, { "contents": "Hi, is this problem solved in v0.2?", "isAccepted": false, "likes": null, "poster": "daksunt" } ]
false
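Since much of the discussion text is missing, for reference a sketch of how gradients flow through slicing in the current API (requires_grad on plain tensors rather than Variables):
<SCODE>
import torch

x = torch.randn(4, 5, requires_grad=True)
loss = (x[:, :2] ** 2).sum()  # only the first two columns contribute
loss.backward()
print(x.grad)                 # nonzero in columns 0-1, zero elsewhere
<ECODE>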
[resolved] Runtime error assigning hidden state
null
[ { "contents": "I am using a GRUCell at each time-step to pass input and hidden state and get back the new hidden state in the following way: <SCODE>hidden_states[i, j, :] = GRUCell(input, hidden_states[i, j, :])\n<ECODE> where <SCODE>hidden_states = Variable(torch.zeros(N, N, 1, hidden_state_size)).cuda()\n<ECODE> As you can notice that instead of sending a batch of inputs, I am sending one input at a time (hence, the 1 in the tensor size). When I run this, I get a runtime error as follows: <SCODE>RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it\n<ECODE> Can anyone explain how I have created another variable with the same storage as hidden_states?", "isAccepted": false, "likes": null, "poster": "vvanirudh" }, { "contents": "Resolved. I was using hidden_states elsewhere in the code and creating a new variable that shared the storage.", "isAccepted": false, "likes": null, "poster": "vvanirudh" } ]
false
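A sketch of the standard way to sidestep such in-place writes when unrolling a cell — accumulate per-step states in a list and stack once at the end (sizes here are illustrative):
<SCODE>
import torch
import torch.nn as nn

cell = nn.GRUCell(input_size=8, hidden_size=16)
h = torch.zeros(1, 16)

states = []
for t in range(5):
    inp = torch.randn(1, 8)
    h = cell(inp, h)     # rebind instead of writing into a shared buffer
    states.append(h)

hidden_states = torch.stack(states)  # 5 x 1 x 16
print(hidden_states.size())
<ECODE>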
Multiply feature map by a learnable scalar
null
[ { "contents": "", "isAccepted": false, "likes": 4, "poster": "Alexey_Chernyavskiy" }, { "contents": "<SCODE>matrix = Variable(torch.rand(3,3))\nscalar = Variable(torch.rand(1), requires_grad=True)\noutput = matrix * scalar.expand_as(matrix)\n<ECODE>", "isAccepted": false, "likes": 7, "poster": "fmassa" }, { "contents": "in my Model definition: <SCODE> self.multip = torch.autograd.Variable(torch.rand(1).cuda(), requires_grad=True)\n self.multip = self.multip.cuda()\n<ECODE> in my Model forward: <SCODE>def forward(self, x):\n x1 = self.relu(self.conv1(x))\n x2 = self.relu(self.conv2(x))\n x1 = x1 * self.multip.expand_as(x1) # multiply x1 output by learnable parameter \"multip\"\n x = torch.add(x1, x2)\n return x\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Alexey_Chernyavskiy" }, { "contents": "Quick example: <SCODE>class Model1(nn.Module):\n def __init__(self):\n super(Model1, self).__init__()\n self.multp = Variable(torch.rand(1), requires_grad=True)\n\nclass Model2(nn.Module):\n def __init__(self):\n super(Model2, self).__init__()\n self.multp = nn.Parameter(torch.rand(1)) # requires_grad is True by default for Parameter\n\nm1 = Model1()\nm2 = Model2()\n\nprint('m1', list(m1.parameters()))\nprint('m2', list(m2.parameters()))\n<ECODE>", "isAccepted": false, "likes": 10, "poster": "fmassa" }, { "contents": "Now, how can I specify a special learning rate for this new param? I used to do something like this for my convolutional layers to make a specific smaller learning rate for layer ‘conv2’. UPD: I replaced my ‘custom’ Adam (see above) that had different learning rates by a regular Adam which has the same learning rate for all the layers, and it worked - my multp started to become updated during the course of training (it also worked with SGD). However my question with setting a special learning rate for the nn.Parameter remains.", "isAccepted": false, "likes": 1, "poster": "Alexey_Chernyavskiy" }, { "contents": "You can simply do something like <SCODE>optimizer = optim.Adam([{'params':[model.multp], 'lr':opt.lr*0.1}], lr=opt.lr)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "fmassa" }, { "contents": "Great, it worked! Thank you.", "isAccepted": false, "likes": null, "poster": "Alexey_Chernyavskiy" }, { "contents": "Hey, I was wondering if is possible to mutiply each feature map by a different scalar?", "isAccepted": false, "likes": null, "poster": "Andrefmds" }, { "contents": "Maybe using a 1x1 conv with group=n_channels?", "isAccepted": false, "likes": 2, "poster": "Chenhongyi_Yang" }, { "contents": "Hi did you work it out? I also need to do something similar.", "isAccepted": false, "likes": null, "poster": "mbutt" } ]
false
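For the per-feature-map question at the end of the thread, a broadcasted parameter of shape (1, C, 1, 1) is a common alternative to the grouped 1x1 conv — a sketch (ChannelScale is an illustrative name):
<SCODE>
import torch
import torch.nn as nn

class ChannelScale(nn.Module):
    """One learnable scalar per channel, broadcast over N, H, W."""
    def __init__(self, channels):
        super(ChannelScale, self).__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return x * self.scale

x = torch.randn(2, 8, 16, 16)
print(ChannelScale(8)(x).size())  # torch.Size([2, 8, 16, 16])
<ECODE>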
Efficient way to make datasets?
null
[ { "contents": "So I’m using a script to turn a directory of images in 5 subdirectories into a single tensor of size (730, 3, 256, 256) and a label tensor of size (730, 5) for 5 classes and then torch.utils.data to turn that into a TensorDataset and make/shuffle batches. The batches are then moved to the GPU individually at each iteration through the dataset during training. However, this isn’t a tenable practice for a very large dataset. Is there a better way to do this that I’m not seeing in the docs? It seems like there should be a simpler way to read images from disk into shuffled batches rather than having to put the whole thing into two tensors in system memory.", "isAccepted": false, "likes": null, "poster": "tymokvo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Thanks! That’s pretty handy", "isAccepted": false, "likes": null, "poster": "tymokvo" } ]
false
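A common approach for a directory of images split into class subfolders is torchvision's ImageFolder plus a DataLoader, which reads and decodes images lazily per batch instead of materializing one giant tensor. A sketch (the path is a placeholder; assumes a torchvision version with transforms.Resize):
<SCODE>
import torch
from torchvision import datasets, transforms

# Expects root/class_a/xxx.png, root/class_b/yyy.png, ...
dataset = datasets.ImageFolder(
    'path/to/root',
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(256),
        transforms.ToTensor(),
    ]))

loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)
images, labels = next(iter(loader))  # 32 x 3 x 256 x 256 images, 32 class labels
<ECODE>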