paper_id: string (length 10–19)
venue: string (14 classes)
focused_review: string (length 249–8.29k)
point: string (length 59–672)
NIPS_2019_1364
NIPS_2019
weakness of the paper. Questions: * In which cases do the assumptions of Theorems 3 and 4 hold? In addition to SLC, they have some matroid-related assumptions. Since these results intend to demonstrate the power of the SLC class, these should be discussed in more detail. * How does the diversity-related \alpha enter the mixing bounds? It seems that the bound depends very weakly on \alpha, only through \nu(S_0). Edit following the authors' response: I'm inclined to keep my score of 6. This is due to the following reasons: 1) I still find the theoretical contribution ok but not particularly strong, given existing results. As mentioned in the review, it is a weak, impractical bound, and the proof, in itself, does not provide particular mathematical novelty. 2) The claims that "in practice the mixing time is even better" are not nearly sufficiently supported by the experiments, and therefore the evidence provided to practitioners is very limited. 3) My question regarding the dependence on $\alpha$ was not answered in a satisfactory manner. I would expect a more explicit dependence on $\alpha$, since with higher diversity the problem should be more complicated. If this is not reflected in the bounds, it means the bounds are very loose.
1) I still find the theoretical contribution ok but not particularly strong, given existing results. As mentioned in the review, it is a weak, impractical bound, and the proof, in itself, does not provide particular mathematical novelty.
NIPS_2019_1225
NIPS_2019
1. Determining hyperparameters and reporting complexity 1.1. The paper requires setting “accuracy goals” when encountering a new task. However, it might be unclear which accuracy can be reached, and the paper is opaque about how these accuracy goals are determined, e.g. when comparing to prior work. To reach optimal performance, Algorithm 1 might need significant manual intervention. 1.1.1. How are the “accuracy goals” determined (especially for Table 6,7)? 1.1.2. What happens if growing the network does not lead to achieving the accuracy goal? E.g. increasing the network capacity might lead to stronger overfitting and reduced accuracy? 1.2. The approach may need many iterations to retrain the model to meet the “accuracy goal” (both w.r.t. growing and compressing) 1.3. How much is the model grown, how much is picked, how much is compressed? It would be interesting to see this for the different models in Table 6, as well as the accuracy targets. 1.4. It would be good to report the memory overhead from the binary masks and relate this to memory-based approaches such as GEM, A-GEM, and generative replay. 2. Experimental Evaluation 2.1. Ablations 2.1.1. The paper claims that “Another distinction of our approach is the “picking” step “. However, this aspect is not ablated. 2.2. Experiments on CIFAR. The comparison on CIFAR is not convincing 2.2.1. The continual learning literature has extensive experiments on this dataset and the paper only compares to one approach (DEN). 2.2.2. It is unclear if DEN is correctly used/evaluated. It would have been more convincing if the authors used the same setup as in the DEN paper to make sure the comparison is fair/correct. 3. Motivation 3.1. The paper claims forgetting is fully avoided due to the usage of a mask. While it is true that *after* model compression no further forgetting happens, there is an accuracy drop during pruning, in contrast to e.g. regularization-based methods. Specifically, the original value (before pruning) is not recoverable and hence should be reported as forgetting. 4. The checklist is not fully accurate. The paper does not provide error bars and standard deviations for experiments. 5. Minor: 5.1. Grammar issue in word “determining” in the 4th paragraph on page 3. 5.2. On page 3, in “Method overview” it says “An overview of our method is depicted below” whereas it should directly refer to Figure 1 because Figure 1 is on page 2. 5.3. On page 6, right below Figure 2, it says “in all experiments, but realize DEN”. Word “realize” does not fit into the context. 5.4. In the future, please use the submission template (not the camera-ready version) so that line numbers on the margins can be used to easily refer to the text. I lean more towards accept: the overall convincing results (especially Table 6) and overall novel model outweigh the limitations discussed above.
2. Experimental Evaluation 2.1. Ablations 2.1.1. The paper claims that “Another distinction of our approach is the “picking” step “. However, this aspect is not ablated. 2.2. Experiments on CIFAR. The comparison on CIFAR is not convincing 2.2.1. The continual learning literature has extensive experiments on this dataset and the paper only compares to one approach (DEN). 2.2.2. It is unclear if DEN is correctly used/evaluated. It would have been more convincing if the authors used the same setup as in the DEN paper to make sure the comparison is fair/correct.
NIPS_2018_567
NIPS_2018
(bias against subgroups, uncertainty on certain subgroups), in applications for fair decision making. The paper is clearly structured, well written, and very well motivated. Except for minor confusions about some of the math, I could easily follow and enjoyed reading the paper. As far as I know, the framework and particularly the application to fairness are novel. I believe the general idea of incorporating and adjusting to human decision makers as first-class citizens of the pipeline is important for the advancement of fairness in machine learning. However, the framework still seems to encompass a rather minimal technical contribution in the sense that both a strong theoretical analysis and exhaustive empirical evaluation are lacking. Moreover, I am concerned about the real-world applicability of the approach, as it mostly seems to concern situations with a rather specific (but unknown) behavior of the decision maker, which typically does not transfer across DMs and needs to be known during training. I have trouble thinking of situations where sufficient training data, both ground truth and the DM's predictions, are available simultaneously. While the authors do a good job evaluating various aspects of their method (one question about this in the detailed comments), those are only two rather simplistic synthetic scenarios. Because of the limited technical and experimental contribution, I heavy-heartedly tend to vote for rejection of the submission, even though I am a big fan of the motivation and approach. Detailed Comments - I like the setup description in Section 2.1. It is easy to follow and clearly describes the technical idea of the paper. - I have trouble understanding (the proof of) the Theorem (following line 104). You show that eq (6) and eq (7) are equal for appropriately chosen $\gamma_{defer}$. However, (7) is not the original deferring loss from eq (3). Shouldn't the result be that learning to defer and rejection learning are equivalent if for the (assumed to be) constant DM loss, $\alpha$ happens to be equal to $\gamma_{reject}$? In the theorem it sounds as if they were equivalent independent of the parameter choices for $\gamma_{reject}$ and $\alpha$. The main takeaway, namely that there is a one-to-one correspondence between rejection learning with cost $\gamma_{reject}$ and learning to defer with a DM with constant loss $\alpha$, is still true. Is there a specific reason why the authors decided to present the theorem and proof in this way? - The authors highlight various practical scenarios in which learning to defer is preferable and detail how it is expected to behave. However, this practicability seems to be heavily impaired by the strong assumptions necessary to train such a model, i.e., availability of ground truth and DM's decisions for each DM of interest, where each is expected to have their own specific biases/uncertainties/behaviors during training. - What does it mean for the predictions \hat{Y} to follow an (independent?) Bernoulli in equation (12) and line 197? How is p chosen, and where does it enter? Could you improve clarity by explicitly stating w.r.t. what the expectations in the first line in (12) are taken (i.e., where does p enter explicitly)? Shouldn't the expectation be over the distribution of \hat{Y} induced by the (training) distribution over X? - In line 210: The impossibility results only hold for (arguably) non-trivial scenarios. - When predicting the Charlson Index, why does it make sense to treat age as a sensitive attribute? 
Isn't age a strong and "fair" indicator in this scenario? Or is this merely for illustration of the method? - In scenario 2 (line 252), does $\alpha_{fair}$ refer to the one in eq (11)? Eq. (11) is the joint objective for learning the model (prediction and deferral) given a fixed DM? That would mean that the automated model is encouraged to provide unfair predictions. However, my intuition for this scenario is that the (blackbox) DM provides unfair decisions and the model's task is to correct for it. I understand that the (later fixed) DM is first also trained (semi-synthetic approach). Supposedly, unfairness is encouraged only when training the DM as a pre-stage to learning the model? I encourage the authors to draw the distinction between first training/simulating the DM (and the corresponding assumptions/parameters) and then training the model (and the corresponding assumptions/parameters) more clearly. - The comparison between the deferring and the rejecting model is not quite fair. The rejecting model receives a fixed cost for rejecting and thus does not need access to the DM during training. This already highlights that it cannot exploit specific aspects (e.g., additional information) of the DM. On the other hand, while the deferring model can adaptively pass on to the DM those examples on which the DM performs better, this requires access to the DM's predictions during training. Since DMs typically have unique/special characteristics that could vary greatly from one DM to the next, this seems to be a strong impairment for training a deferring model (for each DM individually) in practice? While the adaptivity of learning to defer unsurprisingly constitutes an advantage over rejection learning, it comes at the (potentially large) cost of relying on more data. Hence, instead of simply showing its superiority over rejection learning, one should perhaps evaluate this tradeoff? - Nitpicking: I find "above/below diagonal" (add a thin gray diagonal to the plot) easier to interpret than "above/below 45 degree", which sounds like a local property (e.g., not the case where the red line saturates and has "0 degrees"). - Is the slight trend of the rejecting model on the COMPAS dataset in Figure 4 to defer less on the reliable group a property of the dataset? Since rejection learning is non-adaptive, it is blind to the properties of the DM, i.e., one would expect it to defer equally on both groups if there is no bias in the data (greater variance in outcomes for different groups, or class imbalance resulting in higher uncertainty for one group). - In lines 306-307 the authors argue that deferring classifiers have higher overall accuracy at a given minimum subgroup accuracy (MSA). Does that mean that at the same error rate for the subgroup with the largest error rate (minimum accuracy), the error rate on the other subgroups is on average smaller (higher overall accuracy)? This would mean that the differences in error rates between subgroups are larger for the deferring classifier, i.e., less evenly distributed, which would mean that the deferring classifier is less fair? - Please update the references to point to the conference/journal versions of the papers (instead of arxiv versions) where applicable. Typos: line 10: learning to defer ca*n* make systems... line 97: first "the" should be removed. End of line 5 of the caption of Figure 3: Fig. 3a (instead of Figs. 3a). line 356: This reference seems incomplete?
- Nitpicking: I find "above/below diagonal" (add a thin gray diagonal to the plot) easier to interpret than "above/below 45 degree", which sounds like a local property (e.g., not the case where the red line saturates and has "0 degrees").
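To make the correspondence questioned in this review concrete, here is a minimal sketch in assumed notation (not the paper's exact Eqs. 3, 6, and 7): with a binary deferral indicator $s$, the two objectives differ only in what the pass-on term costs.

```latex
\[
L_{\mathrm{defer}} = (1-s)\,\ell\bigl(\hat{Y},Y\bigr) + s\,\ell_{\mathrm{DM}},
\qquad
L_{\mathrm{reject}} = (1-s)\,\ell\bigl(\hat{Y},Y\bigr) + s\,\gamma_{\mathrm{reject}} .
\]
```

If the DM loss is assumed constant, $\ell_{\mathrm{DM}} \equiv \alpha$, the two objectives coincide exactly when $\alpha = \gamma_{\mathrm{reject}}$, which is the one-to-one correspondence the reviewer describes rather than an equivalence that holds for arbitrary choices of $\gamma_{\mathrm{reject}}$ and $\alpha$.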
NIPS_2016_386
NIPS_2016
, however. First of all, there is a lot of sloppy writing, typos, and undefined notation. See the long list of minor comments below. A larger concern is that I could not understand some parts of the proof, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it is just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is the (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here? Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give. 
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm.
* L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards.
ICLR_2021_1208
ICLR_2021
- It is unclear to me what scientific insight we get from this model and formalism over the prior task-optimized approaches. For instance, this model (as formulated in Section 2.3) is not shown to be a prototype approximation to these non-linear RNN models that exhibit emergent behavior. So it is not clear that your work provides any further “explanation” as to how these nonlinear models attain such solutions purely through optimization on a task. - Furthermore, I am not really sure how “emergent” the hexagonal grid patterns really are in this model. Given partitioning of the generator matrices into blocks in Section 2.5, it almost seems by construction we would get hexagonal grid patterns and it would be very hard for the model to learn anything different. While the ideas of this paper are mathematically elegant, I do not see the added utility these models provide over prior approaches nor how they provide a deeper explanation of the surprising emergent grid firing patterns observed in task-optimized nonlinear RNNs. For these reasons, I recommend rejection.
- It is unclear to me what scientific insight we get from this model and formalism over the prior task-optimized approaches. For instance, this model (as formulated in Section 2.3) is not shown to be a prototype approximation to these non-linear RNN models that exhibit emergent behavior. So it is not clear that your work provides any further “explanation” as to how these nonlinear models attain such solutions purely through optimization on a task.
NIPS_2021_2191
NIPS_2021
of the paper: [Strengths] The problem is relevant. Good ablation study. [Weaknesses] - The statement in the intro about bottom-up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Several parts of the methodology are not clear. - PPG outputs a complete pose relative to every part’s center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq. 2 of the supplementary material, it seems that O_{up} is trained to output the offset only for the keypoints that are not farther than a distance r from the center of the corresponding part. How are the ground truths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose? - Line 179, what did the authors mean by saying that the fully connected layers predict the ground-truth in addition to the offsets? - Is \delta P_{j} a single offset for the center of that part or does it contain distinct offsets for every keypoint? - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G, X, and W to better understand what DGCN is doing. - The experiments can be improved: - For instance, the bottom-up method [9] has reported results on the CrowdPose dataset outperforming all methods in Table 4 with a ResNet-50 (including the paper's). It would be nice to include it in the tables. - It would be nice to evaluate the performance of their method on the standard MS COCO dataset to see if there is a drop in performance in easy (non-occluded) settings. - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. - Can we visualize G, the dynamic graph, as it changes through DGCN? It might give insight into what the network uses to predict keypoints, especially the invisible ones. [Minor comments] In Algorithm 1 line 8 in Suppl Material, did the authors mean Eq 11 instead of Eq.4? Fig. 1 and Fig. 2 in the supplementary are the same. Spelling mistake in line 93: It it requires… What does ‘… updated as model parameters’ mean in line 176? Do the authors mean Equation 7 in line 212? The authors have talked about limitations in Section 5 and have mentioned that there are no negative societal impacts.
- PPG outputs a complete pose relative to every part’s center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq. 2 of the supplementary material, it seems that O_{up} is trained to output the offset only for the keypoints that are not farther than a distance r from the center of the corresponding part. How are the ground truths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose?
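To illustrate the two readings of the ground-truth construction contrasted above, here is a small, hypothetical sketch (function and variable names are placeholders, not the paper's code):

```python
import numpy as np

def part_offset_targets(keypoints, part_center, r, complete_pose=True):
    """Illustrative ground-truth construction for part-relative offsets.

    keypoints:   (K, 2) array of (x, y) keypoint locations.
    part_center: (2,) array, e.g. the upper-body part center.
    r:           radius appearing in the supplementary Eq. 2 reading.
    complete_pose=True  -> reading (a): offsets for *all* keypoints w.r.t. the center.
    complete_pose=False -> reading (b): offsets only for keypoints within distance r.
    """
    offsets = keypoints - part_center[None, :]        # (K, 2) offsets to the part center
    if complete_pose:
        return offsets, np.ones(len(keypoints), dtype=bool)
    mask = np.linalg.norm(offsets, axis=1) <= r       # keep only nearby keypoints
    return offsets, mask

# Toy example: 3 keypoints, part center at the origin, radius 1.5.
kps = np.array([[0.5, 0.2], [2.0, 0.0], [-1.0, 1.0]])
off_a, m_a = part_offset_targets(kps, np.array([0.0, 0.0]), r=1.5, complete_pose=True)
off_b, m_b = part_offset_targets(kps, np.array([0.0, 0.0]), r=1.5, complete_pose=False)
print(m_a, m_b)   # reading (a) supervises all keypoints; (b) drops the far one
```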
NIPS_2017_262
NIPS_2017
that are addressed below: * Most of the theoretical work presented here is built upon prior work; it is not clear what the novelty and research contribution of the paper is. * The figures are small and almost unreadable. * It doesn't clearly state how equation 5 follows from equation 4. * It is not clear how \theta^{t+1/2} comes into the picture; please explain. * S^{*} and S^{~} are very important parameters in the paper, yet they were not properly defined. In line 163 it is claimed that S^{~} is defined in Assumption 1, but one can see the definition is not proper; it is rather circular. * Since the comparison metric here is wall-clock time, it is imperative that the implementation of the algorithms be the same. It is not clear that this is guaranteed. Also, the size of the experimental data is quite small. * If we look into the run-times of DCPN for sim_1k, sim_5k, and sim_10k and compare with DC+ACD, we see that DCPN is performing better, which is good. But the trend line tells us a different story; between 1k and 10k the DCPN run-time grows by about 8x while the competitor's grows by only 2x. From this trend it looks like the proposed algorithm will perform worse than the competitor when the data size is larger, e.g., 100k. * Typo in line 106
* The figures are small and almost unreadable. * It doesn't clearly state how equation 5 follows from equation 4. * It is not clear how \theta^{t+1/2} comes into the picture; please explain. * S^{*} and S^{~} are very important parameters in the paper, yet they were not properly defined. In line 163 it is claimed that S^{~} is defined in Assumption 1, but one can see the definition is not proper; it is rather circular.
ICLR_2022_1887
ICLR_2022
W1: The proposed method is a combination of existing losses. The novelty of the technical contribution is not very strong. W2: The proposed hybrid loss is argued to be beneficial because the Proxy-NCA loss promotes learning new knowledge better (first paragraph in Sec. 3.4), rather than reducing catastrophic forgetting. But the empirical results show that the proposed method exhibits much less forgetting than the prior arts (Table 2). The argument and the empirical results are not well aligned. Also, as the proposed method seems to promote learning new knowledge, it is suggested to empirically validate the benefit of the proposed approach by a measure to evaluate the ability to learn new knowledge (e.g., intransigence (Chaudhry et al., 2018)). W3: Missing important comparison to Ahn et al.'s method in Table 3 (and corresponding section, titled "comparison with Logits Bias Solutions for Conventional CIL setting"). W4: Missing analyses of ablated models (Table 4). The proposed hybrid loss exhibits meaningful empirical gains only in CIFAR100 (and marginal gain in CIFAR10), comparing "MS loss with NCM (Gen)" and "Hybrid loss with NCM (Gen)". But there is no descriptive analysis for it. W5: Lack of details of Smooth datasets in Sec. 4.3. W6: Missing some citations (or comparisons) using logit bias correction in addition to Wu et al., 2019 and Ahn et al., 2020: Kang et al., 2020: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9133417 Mittal et al., 2021: https://openaccess.thecvf.com/content/CVPR2021W/CLVision/papers/Mittal_Essentials_for_Class_Incremental_Learning_CVPRW_2021_paper.pdf W7: Unclear arguments or arguments lacking supporting facts. 4th para in Sec.1 'It should be noticed that in online CIL setting the data is seen only once, not fully trained, so it is analogous to the low data regime in which the generative classifier is preferable.' Why? 5th line of 2nd para in Sec. 3.1.3 'This problem becomes more severe as the the number of classes increases.' Lack of supporting facts. W8: Some mistakes in text (see details in notes below) and unclear presentation. Note: Mistakes in text: End of 1st para in Sec.1: intelligence agents -> intelligent agents. 3rd line of 1st para in Sec. 3.2: we can inference with -> we can conduct inference with. 1st line of 1st para in Sec. 3.3: we choose MS loss as training... -> we choose the MS loss as a training... MS loss is a state-of... -> MS loss is the state-of... 6th line of Proposition 1: 'up to an additive constant and L' -> 'up to an additive constant c and L'. Improvement ideas for presentation: Fig. 1: texts in legends and axis labels should be larger. At the beginning of page 6: Proposition (1) -> Proposition 1, as (1) is confused with Equation 1. Captions and legends' font should be larger (similar to text size) in Fig. 2 and 3.
Fig. 1: texts in legends and axis labels should be larger. At the beginning of page 6: Proposition (1) -> Proposition 1, as (1) is confused with Equation 1. Captions and legends' font should be larger (similar to text size) in Fig. 2 and 3.
vKViCoKGcB
ICLR_2024
- Out of the listed baselines, to the best of my knowledge only Journey TRAK [1] has been explicitly used for diffusion models in previous work. As the authors note, Journey TRAK is not meant to be used to attribute the *final* image $x$ (i.e., the entire sampling trajectory). Rather, it is meant to attribute noisy images $x_t$ (i.e., specific denoising steps along the sampling trajectory). Thus, the direct comparison with Journey TRAK in the evaluation section is not on equal grounds. - For the counterfactual experiments, I would have liked to see a comparison against Journey TRAK [1] used at a particular step of the sampling trajectory. In particular, [1, Figure 2] shows a much larger effect of removing high-scoring images according to Journey TRAK, in comparison with CLIP cosine similarity. - Given that the proposed method is only a minor modification of existing methods [1, 2], I would have appreciated a more thorough attempt at explaining/justifying the changes proposed by the authors. [1] Kristian Georgiev, Joshua Vendrow, Hadi Salman, Sung Min Park, and Aleksander Madry. The journey, not the destination: How data guides diffusion models. In Workshop on Challenges in Deployable Generative AI at International Conference on Machine Learning (ICML), 2023. [2] Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. Trak: Attributing model behavior at scale. In International Conference on Machine Learning (ICML), 2023.
- For the counterfactual experiments, I would have liked to see a comparison against Journey TRAK [1] used at a particular step of the sampling trajectory. In particular, [1, Figure 2] shows a much larger effect of removing high-scoring images according to Journey TRAK, in comparison with CLIP cosine similarity.
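For reference, the counterfactual protocol being asked for is roughly the following; this is a generic sketch with hypothetical helpers (`retrain`, `generate`), not the authors' pipeline:

```python
import numpy as np

def counterfactual_eval(scores, train_set, retrain, generate, ref_image, k=1000):
    """Generic top-k removal test for a data-attribution method.

    scores:   per-training-example attribution scores for `ref_image`
              (e.g. from the proposed method, Journey TRAK at a chosen step t,
              or CLIP cosine similarity).
    retrain:  hypothetical callable training a fresh model on a subset of indices.
    generate: hypothetical callable re-sampling with the same seed/noise as `ref_image`.
    """
    keep = np.argsort(scores)[:-k]          # drop the k highest-scoring training examples
    model = retrain(train_set, keep)
    new_image = generate(model)
    # Larger change in the regenerated image = stronger counterfactual effect of the
    # removed examples; any perceptual distance could replace the pixel-space norm.
    return float(np.linalg.norm(np.asarray(new_image) - np.asarray(ref_image)))
```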
NIPS_2017_370
NIPS_2017
- There is almost no discussion or analysis of the 'filter manifold network' (FMN), which forms the main part of the technique. Did the authors experiment with any other architectures for FMN? How do the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of input and output channels is small (around 32). Can FMN scale reasonably well when the number of filter parameters is huge (say, 128 to 512 input and output channels, which is common in many CNN architectures)? - From the experimental results, it seems that replacing normal convolutions with adaptive convolutions is not always a good idea. In Table-3, ACNN-v3 (all adaptive convolutions) performed worse than ACNN-v2 (adaptive convolutions only in the last layer). So, it seems that the placement of adaptive convolutions is important, but there is no analysis or comment on this aspect of the technique. - The improvements on image deconvolution are minimal, with CNN-X working better than ACNN when the whole dataset is considered. This shows that the adaptive convolutions are not universally applicable when the side information is available. Also, there are no comparisons with state-of-the-art network architectures for digit recognition and image deconvolution. Suggestions: - It would be good to move some visual results from supplementary to the main paper. In the main paper, there are almost no visual results on crowd density estimation, which forms the main experiment of the paper. At present, there are 3 different figures for illustrating the proposed network architecture. The authors could probably condense these to two and make use of that space for some visual results. - It would be great if the authors can address some of the above weaknesses in the revision to make this a good paper. Review Summary: - Despite some drawbacks in terms of experimental analysis and the general applicability of the proposed technique, the paper has several experiments and insights that would be interesting to the community. ------------------ After the Rebuttal: ------------------ My concern with this paper is insufficient analysis of the 'filter manifold network' architecture and the placement of adaptive convolutions in a given CNN. The authors partially addressed these points in their rebuttal while promising to add the discussion to a revised version and deferring some other parts to future work. With the expectation that the authors will revise the paper, and also since other reviewers are fairly positive about this work, I recommend this paper for acceptance.
- From the experimental results, it seems that replacing normal convolutions with adaptive convolutions is not always a good idea. In Table-3, ACNN-v3 (all adaptive convolutions) performed worse than ACNN-v2 (adaptive convolutions only in the last layer). So, it seems that the placement of adaptive convolutions is important, but there is no analysis or comment on this aspect of the technique.
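As a rough illustration of the scaling concern raised above (not the paper's actual FMN architecture), a filter-generating network must output on the order of c_out * c_in * k * k weights, which grows quickly with channel count:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterManifoldConv(nn.Module):
    """Sketch of an adaptive convolution: a small MLP maps side information
    (e.g. camera height/angle) to the weights of a conv layer. The MLP's output
    size is c_out * c_in * k * k (+ c_out for the bias), which is what makes
    scaling to 128-512 channels a concern."""
    def __init__(self, c_in, c_out, k, side_dim=1, hidden=64):
        super().__init__()
        self.c_in, self.c_out, self.k = c_in, c_out, k
        n_params = c_out * c_in * k * k + c_out
        self.fmn = nn.Sequential(
            nn.Linear(side_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, x, side):              # x: (1, c_in, H, W); side: (1, side_dim)
        params = self.fmn(side).squeeze(0)   # per-sample weights, so batch size 1 here
        n_w = self.c_out * self.c_in * self.k * self.k
        w = params[:n_w].view(self.c_out, self.c_in, self.k, self.k)
        b = params[n_w:]
        return F.conv2d(x, w, b, padding=self.k // 2)

layer = FilterManifoldConv(c_in=32, c_out=32, k=3)
out = layer(torch.randn(1, 32, 16, 16), torch.randn(1, 1))
print(out.shape)  # torch.Size([1, 32, 16, 16])
```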
NIPS_2016_499
NIPS_2016
- The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring causal predictions and the interventional data. If not, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to [10] but this is achieved by reducing the search space to the ancestral graphs. This means that the output of ACI has less information compared to the output of [10] that has a richer search space, i.e., DAGs. This is the price that has been paid to gain a better performance. How much information of a DAG is encoded in its corresponding ancestral graph? - The second rule in Lemma 2, i.e., Eq. (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z’ in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise.
- The proposed method reduces the computation time drastically compared to [10] but this is achieved by reducing the search space to the ancestral graphs. This means that the output of ACI has less information compared to the output of [10] that has a richer search space, i.e., DAGs. This is the price that has been paid to gain a better performance. How much information of a DAG is encoded in its corresponding ancestral graph?
fkvdewFFN6
ICLR_2024
1. The overall framework seems quite similar to the Seldonian algorithm design of Thomas et al. (2019), e.g., see Fig. 1 of Thomas et al. (2019). Although it is true that Thomas et al. (2019) only considered fair classification experiments, as mentioned in this paper's related work, the proposed FRG also has an objective function related to the expressiveness of the representation, and some of the details even match; for instance, the discussions on "$1 - \delta$ confidence upper bound" on pg. 4 are quite similar to the caption of Fig. 1 of Thomas et al. (2019). Then, the question boils down to what the novel contribution of this work is, and my current understanding is that this is a simple application of Thomas et al. (2019) to fair representation learning. Of course, there are theoretical analyses, practical considerations, and good experimental results that are specific to fair representation learning, but I believe that (as I will elaborate below) there are some problems that need to be addressed. Lastly, I believe that Thomas et al. (2019) should be given much more credit than in the current draft. 2. Although the paper focuses on the high-probability fair representation construction (which should be backed theoretically in a solid way, IMHO), there are too many components (for "practicality") that are theoretically unjustified. There are three such main components: doubling the width of the confidence interval to "avoid" overfitting, introducing the hyperparameters $\gamma$ and $v$ for upper bounding $\Delta_{DP}$, and approximating the candidate optimization. 3. Also, the theoretical discussions can use some improvement. Although directly related to fair representation, the current theorems follow directly from the algorithm design itself and the well-known relationship between mutual information and $\Delta_{DP}$. For instance, I was expecting some sample complexity-type results for not returning NSF, e.g., given confidence levels, what is a sufficient number of training data points so that NSF is not returned. 4. Lastly, if the authors meant for this paper to be a practical paper, then it should be clearly positioned that way. For instance, the paper should allocate much more space to the experimental verifications and do more experiments. Right now, the experiments are only done for two datasets, both of which consider binary sensitive attributes. In order to show the proposed FRG's versatility, the paper should do a more thorough experimental evaluation of various datasets of different characteristics with multiple-group and/or non-binary sensitive attributes, the trade-off (Pareto front) between fairness and performance (or something of that kind), and maybe even controlled synthetic experiments. **Summary**. Although the framework is simple and has promising experiments, I believe that there is still much to be done. In its current form, the paper's contribution seems to be incremental and not clear.
3. Also, the theoretical discussions can use some improvement. Although directly related to fair representation, the current theorems follow directly from the algorithm design itself and the well-known relationship between mutual information and $\Delta_{DP}$. For instance, I was expecting some sample complexity-type results for not returning NSF, e.g., given confidence levels, what is a sufficient number of training data points so that NSF is not returned.
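For concreteness, the kind of Seldonian-style check referenced in this review could look like the following sketch. Assumptions: a held-out safety set, a one-sided Student-t bound in the spirit of Thomas et al. (2019), and demographic parity measured on binary predictions; the paper's actual bound construction may differ.

```python
import numpy as np
from scipy import stats

def one_sided_t_upper(x, delta):
    """1 - delta upper confidence bound on E[x] via a Student-t bound."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return x.mean() + x.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, df=n - 1)

def dp_gap_upper_bound(y_hat, a, delta=0.05):
    """Conservative 1 - delta upper bound on |P(Yhat=1|A=1) - P(Yhat=1|A=0)|.
    Each group rate gets a one-sided bound with delta/4 (union bound), and the
    worst-case combination of the four bounds is returned."""
    y_hat, a = np.asarray(y_hat, dtype=float), np.asarray(a, dtype=bool)
    hi1 = one_sided_t_upper(y_hat[a], delta / 4)
    hi0 = one_sided_t_upper(y_hat[~a], delta / 4)
    lo1 = -one_sided_t_upper(-y_hat[a], delta / 4)
    lo0 = -one_sided_t_upper(-y_hat[~a], delta / 4)
    return max(hi1 - lo0, hi0 - lo1)

def safety_test(y_hat, a, epsilon=0.1, delta=0.05):
    """Return the candidate only if the bound certifies the constraint; else NSF."""
    return "pass" if dp_gap_upper_bound(y_hat, a, delta) <= epsilon else "NSF"
```

A sample-complexity statement of the kind the reviewer asks for would then amount to bounding how large the safety set must be before the confidence interval shrinks below epsilon for a given true gap.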
ICLR_2021_2896
ICLR_2021
of paper On the plus side: This paper is well written, the experiments are fairly well done, the results are good, and the outlined approach is sensible. On the minus side: It isn't clear what the scientific contribution is. Using a known network structure on a known feature to perform a known process doesn't really provide much insight. I do feel that there is room for insight in this paper, but the authors stick to a very high-level description of their experiments. Recommendation: I would recommend rejection. I think this is good work, but it doesn't convey to me anything that I didn't already know (other than this particular combination of feature and network seems to work well). I expect papers to teach me something, not just report a finding without any analysis or explanation. As is, the paper is really not much more than what is simply contained in table 1. This paper could have in it a lot more information and analysis to make it a much stronger submission, but the current approach is simply a report on a single point estimate. Yes, it seems to work well, but I don't believe that just that is the standard for a strong publication. Questions for clarification: • A hugely influential factor in multi-source localization is how one defines a peak. This is not described adequately in this paper and I think it needs some clarification. I understand that for each time frame we obtain a location posterior. But it isn't clear how to a) find out how many speakers are active, and b) find the peaks that correspond to each speaker. If we have e.g. two speakers and six substantial nonzero posterior values, how does one pick the two that correspond to a speaker? We cannot use the two loudest ones; they might both be from the same speaker. What if the amplitude difference between the speakers is large? This is not a question that can be brushed away so easily and should be directly addressed. • It would be interesting to see something akin to a confusion matrix. Arrays are not equally good at detecting all directions, and understanding how this algorithm behaves with respect to classic array behavior would be an interesting bit of information. Provide additional feedback with the aim to improve the paper: • I wouldn't use the terms "spectral and phase". The distinction should be "magnitude and phase"; "spectral" usually implies the whole complex-valued output of the DFT. But that entire sentence is a little out of place, since you claim that phase is more useful than magnitudes, and then you use the complex data directly. I would simply state which features you use and leave it at that. The extra discussion there is an unnecessary distraction. • "since it is known to encapsulate the spatial fingerprint for each sound source", I assume when you say source you imply a spatial location. I believe "source" tends to be interpreted as the actual sound (hence "source separation"), which is independent of the location. • Your VAD description is puzzling. What is stated in the paper simply discards any TF bins that have a magnitude of less than epsilon. If this is what you do, I wouldn't call it a VAD; you are simply discarding TF bins with near-zero magnitude that would result in a division by zero. A VAD is supposed to look for the presence of speech (not just energy), and is also very unlikely to be defined over frequency; it's usually only over time. • I understand that for the sake of conforming to current APIs, complex numbers are being avoided here. 
But, for something on paper, it strikes me as quite strange to stack real and imaginary parts when a complex-valued representation is clearly much easier to describe and manipulate. I can predict your rebuttal to this point, but papers are supposed to set forth the science from which we will write code, and not directly describe the code.
• Your VAD description is puzzling. What is stated in the paper simply discards any TF bins that have a magnitude of less than epsilon. If this is what you do, I wouldn't call it a VAD; you are simply discarding TF bins with near-zero magnitude that would result in a division by zero. A VAD is supposed to look for the presence of speech (not just energy), and is also very unlikely to be defined over frequency; it's usually only over time.
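The distinctions drawn in this review can be made concrete with a toy example (a synthetic spectrogram standing in for the paper's features, not its actual pipeline): the epsilon test is a per-bin energy gate, and the real/imaginary stacking is simply a reshaping of the complex STFT.

```python
import numpy as np

# Toy multichannel "STFT": (mics, frames, frequency bins), complex-valued.
rng = np.random.default_rng(0)
stft = rng.standard_normal((4, 100, 257)) + 1j * rng.standard_normal((4, 100, 257))

# What the paper's "VAD" appears to do: drop TF bins whose magnitude is below
# epsilon -- an energy gate applied per time-frequency bin, not a speech detector.
eps = 1e-3
gated = stft * (np.abs(stft) >= eps)      # avoids later division by ~zero magnitudes

# Real-valued network input: stack real and imaginary parts as extra channels,
# instead of keeping one complex-valued tensor.
features = np.concatenate([gated.real, gated.imag], axis=0)   # (2 * mics, frames, bins)
print(features.shape)                     # (8, 100, 257)
```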
NIPS_2021_40
NIPS_2021
/Questions: I only have minor suggestions: 1.) In the discussion, it may be worth including a brief discussion on the empirical motivation for a time-varying $\hat{Q}_t$ and $S_t$, as opposed to a fixed one as in Section 4.2. For example, what is the effect on the volatility of $\alpha_t$ and also on the average lengths of the predictive intervals when we let $\hat{Q}_t$ and $S_t$ vary with time? 2.) I found the definition of the quantile a little confusing; an extra pair of brackets around the term $\left(\frac{1}{|D|}\sum_{(X_r, Y_r) \in D} \mathbf{1}\{S(X_r, Y_r) \leq s\}\right)$ might help, or maybe defining the bracketed term separately if space allows. 3.) I think there are typos in Lines 93, 136, 181 (and maybe in the Appendix too): should it be $\hat{Q}_t(1-\alpha_t)$ instead? ##################################################################### Overall: This is a very interesting extension to conformal prediction that no longer relies on exchangeability but is still general, which will hopefully lead to future work that guarantees coverage under weak assumptions. I believe the generality also makes this method useful in practice. The authors have described the limitations of their theory, e.g. having a fixed $\hat{Q}$ over time.
1.) In the discussion, it may be worth including a brief discussion on the empirical motivation for a time-varying $\hat{Q}_t$ and $S_t$, as opposed to a fixed one as in Section 4.2. For example, what is the effect on the volatility of $\alpha_t$ and also on the average lengths of the predictive intervals when we let $\hat{Q}_t$ and $S_t$ vary with time?
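For reference, a minimal sketch of the quantile in question and of the standard adaptive update (notation as in the review; the paper's parameterization may differ):

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Q-hat(1 - alpha): the smallest calibration score s such that
    (1/|D|) * sum_r 1{S(X_r, Y_r) <= s} >= 1 - alpha  (the bracketed term above).
    The finite-sample (n + 1) correction is omitted for brevity."""
    s = np.sort(np.asarray(scores, dtype=float))
    k = int(np.ceil((1 - alpha) * len(s)))
    k = min(max(k, 1), len(s))
    return s[k - 1]

def aci_step(alpha_t, covered, alpha=0.1, gamma=0.01):
    """Adaptive update in the style of adaptive conformal inference:
    alpha_{t+1} = alpha_t + gamma * (alpha - err_t), with err_t = 1{Y_t not covered}."""
    err_t = 0.0 if covered else 1.0
    return alpha_t + gamma * (alpha - err_t)

cal_scores = np.abs(np.random.default_rng(0).standard_normal(500))
q_hat = conformal_quantile(cal_scores, alpha=0.1)   # interval: prediction +/- q_hat
print(q_hat, aci_step(0.1, covered=False))          # miss -> alpha shrinks, wider intervals
```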
NIPS_2020_471
NIPS_2020
1. The description of joint inference is not very clear. I cannot understand how the refinement process works according to Eq. 2. It would be great if the authors can clarify this part during the rebuttal and polish this part in the final version. 2. I have some doubts about the definitions in Table 1. What's the difference between anchor-based regression and the regression in RepPoints? In RetinaNet, there is also only a one-shot regression. And the ATSS paper has shown that the regression method does not influence the results much. The method that directly regresses [w, h] to the center point is good enough, while RepPoints regresses distances to the feature-map locations. I think there is no obvious difference between the two methods. I hope the authors can clarify this problem. If not, the motivation here is not solid enough. 3. It would be great if the authors can analyze the computational costs and inference speeds for the proposed method.
2. I have some doubts about the definitions in Table 1. What's the difference between anchor-based regression and the regression in RepPoints? In RetinaNet, there is also only a one-shot regression. And the ATSS paper has shown that the regression method does not influence the results much. The method that directly regresses [w, h] to the center point is good enough, while RepPoints regresses distances to the feature-map locations. I think there is no obvious difference between the two methods. I hope the authors can clarify this problem. If not, the motivation here is not solid enough.
ICLR_2022_2810
ICLR_2022
The clarity of the writing could be improved substantially. Descriptions are often vague, which makes the technical details harder to understand. I think it's fine to give high-level intuitions separate from low-level details, but the current writing invites confusion. For example, at the start of Section 3, the references to buffers and clusters are vague. The text refers readers to where these concepts are described, but the high-level description doesn't really give a clear picture, making the text that follows harder to understand. Ideas are not always presented clearly. For example: ```may only exploit a small part of it, making most of the goals pointless.``` - Along the same lines, at the start of the Experiments section, when reading ```the ability of DisTop to select skills to learn``` I am left to wonder what this "ability" and "selection" refer to. This is not a criticism of word choice. The issue is that the previous section did not set up these ideas. - Sections of the results do not seem to actually address the experimental question they are motivated by (that is, the question at the paragraph header). In general, this paper tends to draw conclusions that seem only speculatively supported by the results. - Overall, the paper is not particularly easy to follow. The presentation lacks a clear intuition for how the pieces fit together and the experiments have little to hang on to as a result. - The conclusions drawn from the experiments are not particularly convincing. While there is some positive validation, demonstration of the *topology* learning's success is lacking. There are some portions of the appendix that get at this, but the analysis feels incomplete. Personally, I am much more convinced by a demonstration that the underlying pieces of the algorithm are viable than by seeing that, when they are all put together, the training curves look better. ### Questions/Comments: - The second paragraph of 2.1 is hard to follow. If the technical details are important, it may make more sense to work them into a different area of the text. - The same applies to 2.2. The technical details are hard to follow. - You claim "In consequence, we avoid using a hand engineered environment-specific scheduling" on page 4. Does this suggest that the $\beta$ parameter and the $\omega'$ update rate are environment independent? - Why do DisTop and Skew-Fit have such different starting distances for Visual Pusher (Figure 1, left middle)? - It is somewhat strange phrasing to describe Skew-Fit as having "favorite" environments (page 6).
- Overall, the paper is not particularly easy to follow. The presentation lacks a clear intuition for how the pieces fit together and the experiments have little to hang on to as a result.
ICLR_2023_4699
ICLR_2023
1. The guidance over the SIFT feature space is good. However, perceptual losses (such as a VGG feature loss) are also considered effective. The authors should clarify their choice; otherwise this contribution is weakened. 2. Assumption 3.1 says the loss of TKD is assumed to be less than that of IYOR. However, Eqn. 7 tells a different story. 3. Assumption 3.1 may not hold in real cases. One cannot increase the number of parameters of the teacher network when applying a KD algorithm. 4. This paper could become more solid if IYOR were applied to some modern i2i methods, e.g., StyleFlow, EGSDE, etc. 5. The student and refinement networks are trained simultaneously, which may improve the performance of the teacher network. Is the comparison fair? Please provide KID/FID metrics for your teacher network.
5. The student and refinement networks are trained simultaneously, which may improve the performance of the teacher network. Is the comparison fair? Please provide KID/FID metrics for your teacher network.
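For comparison purposes, the VGG-feature perceptual loss mentioned in point 1 is typically implemented along these lines (a sketch, not the paper's code; ImageNet mean/std normalization and the choice of feature layer are design decisions left out here, and the weights download on first use):

```python
import torch
import torch.nn.functional as F
from torchvision import models

class VGGPerceptualLoss(torch.nn.Module):
    """L1 distance between frozen VGG-16 features of two images, the usual
    alternative to SIFT-space guidance that the review asks the authors to discuss."""
    def __init__(self, layer=16):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1").features[:layer].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg

    def forward(self, x, y):                 # x, y: (B, 3, H, W) in [0, 1]
        return F.l1_loss(self.vgg(x), self.vgg(y))

loss = VGGPerceptualLoss()
print(loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)))
```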
NIPS_2018_553
NIPS_2018
- This paper misses a few details in model design and experiments: A major issue is the "GTA" / "DET" feature representation in Table 1. As stated in section 4.1, image regions are extracted from ground-truth / detection methods. But what is the feature extractor used on top of those image regions? Comparing resnet / densenet extracted features with vgg / googlenet features is not fair. - The presentation of this paper can be further improved. E.g. paragraph 2 in the intro section is a bit verbose. Also, breaking down overly long sentences into shorter, concise ones will improve fluency. Some additional comments: - Figure 3: class semantic feature should be labeled as "s" instead of "c"? - equation 1: how is v_G fused from V_I? Please specify. - equation 5: s is coming from textual representations (attribute / word to vec / PCA'ed TFIDF). It might have positive / negative values? However, the first term h(W_{G,S}, v_G) is post-ReLU and can only be non-negative? - line 157: the refined region vector is basically u_i = (1 + attention_weight) * v_i. Since the attention weight is in [0, 1] and sums up to 1 over all image regions, this refined vector would only scale the most important regions by at most a factor of two before global pooling? Would having a scaling variable before the attention weight help? - line 170: class semantic information is [not directly] embedded into the network? - Equation 11: v_s and u_G are both outputs from the trained network, and they are not normalized? So minimizing the L2 loss could simply be reducing the magnitude of both vectors? - Line 201: the dimensionality of each region is 512: using which feature extractor? - Section 4.2.2: comparing the number of attention layers is a good experiment. Another baseline could be not using Loss_G, so attention is only guided by the global feature vector. - Table 4: what are the visual / textual representations used in each method? Otherwise it is unclear whether the end-to-end performance gain is due to the proposed attention model.
- line 157: the refined region vector is basically u_i = (1 + attention_weight) * v_i. Since the attention weight is in [0, 1] and sums up to 1 over all image regions, this refined vector would only scale the most important regions by at most a factor of two before global pooling? Would having a scaling variable before the attention weight help?
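A small numerical sketch of the scaling question raised in this review (the attention weights here are random placeholders, not the model's outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((36, 512))            # 36 region vectors, 512-d each
logits = rng.standard_normal(36)
a = np.exp(logits) / np.exp(logits).sum()     # attention weights: in [0, 1], sum to 1

U = (1.0 + a)[:, None] * V                    # refinement as written in the paper: at most 2x
v_global = U.mean(axis=0)                     # global pooling over refined regions

# Reviewer's suggestion: a learnable scale s on the attention term,
# u_i = (1 + s * a_i) * v_i, so attended regions can be emphasized by more
# (or less) than a factor of two.
s = 5.0                                       # would be a trained parameter in practice
U_scaled = (1.0 + s * a)[:, None] * V
print(U.max() / V.max(), U_scaled.max() / V.max())
```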
qJ0Cfj4Ex9
ICLR_2024
- Goal Misspecification: Failures on the ALFRED benchmark often occurred due to goal misspecification, where the LLM did not accurately recover the formal goal predicate, especially when faced with ambiguities in human language. - Policy Inaccuracy: The learned policies sometimes failed to account for low-level, often geometric details of the environment. - Operator Overspecification: Some learned operators were too specific, e.g., the learned SliceObject operator specified a particular type of knife, leading to planning failures if that knife type was unavailable. - Limitations in Hierarchical Planning: The paper acknowledges that it doesn't address some core problems in general hierarchical planning. For instance, it assumes access to symbolic predicates representing the environment state and doesn't tackle finer-grained motor planning. The paper also only considers one representative pre-trained LLM and not others like GPT-4.
- Goal Misspecification: Failures on the ALFRED benchmark often occurred due to goal misspecification, where the LLM did not accurately recover the formal goal predicate, especially when faced with ambiguities in human language.
lGDmwb12Qq
ICLR_2025
1. I think the innovation of this paper is limited. In this paper, I think the main improvement comes from taking the disparity range below 0 into consideration, eliminating the negative impact that this range has on the distribution-based supervision scheme. But with a fixed extended disparity range, i.e., 16, I think it's hard to fit the distributions of different scenarios. Do I need to set a new extended range to fit the distribution range in a new scenario? I think this offset is highly scene-dependent. 2. I think the improvement of this method over SOTA methods such as IGEV is small. Does this mean that there is no multi-peak distribution problem in iterative optimization schemes similar to IGEV? I suggest that the author analyze the distribution of disparities produced by IGEV compared to other baselines to determine why the effect is not significantly improved on IGEV. And I have another concern. Currently, SOTA schemes are basically iterative frameworks similar to IGEV. Is it difficult for Sampling-Gaussian to significantly improve such frameworks?
2. I think the improvement of this method over SOTA methods such as IGEV is small. Does this mean that there is no multi-peak distribution problem in iterative optimization schemes similar to IGEV? I suggest that the author analyze the distribution of disparities produced by IGEV compared to other baselines to determine why the effect is not significantly improved on IGEV. And I have another concern. Currently, SOTA schemes are basically iterative frameworks similar to IGEV. Is it difficult for Sampling-Gaussian to significantly improve such frameworks?
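A sketch of the mechanism under discussion, with assumed parameters (the paper's exact disparity range, sigma, and extension value are not reproduced here): the Gaussian target is defined over bins extended below zero, and the fixed extension is exactly the scene-dependent offset the review questions.

```python
import numpy as np

def gaussian_target(d_gt, d_max=192, extend=16, sigma=1.0):
    """Gaussian supervision over an extended disparity range [-extend, d_max).
    `extend` is the fixed offset questioned in the review: it has to cover
    however much probability mass a given scene pushes below zero."""
    bins = np.arange(-extend, d_max, dtype=float)
    p = np.exp(-0.5 * ((bins - d_gt) / sigma) ** 2)
    return bins, p / p.sum()

def soft_argmax(bins, p):
    """Expected disparity under the (single-peak) predicted distribution."""
    return float((bins * p).sum())

bins, p = gaussian_target(d_gt=0.7)    # near-zero ground truth: mass spills below 0
print(soft_argmax(bins, p))            # ~0.7 only because the negative bins are kept;
                                       # truncating at 0 would bias the estimate upward
```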
R4h5PXzUuU
ICLR_2025
1. Limited Explanation of Failures: Although the paper provides examples of failure cases, it does not fully delve into the underlying causes or offer detailed solutions for these issues. While the authors acknowledge the problem of overconfidence in models such as GPT-4o with ReGuide, further exploration of these limitations could strengthen the study. There is still no robust method to effectively introduce LVLMs to OoD tasks. 2. Model-Specific Insights: The paper focuses on generic findings across models, but a deeper investigation into how specific models (e.g., GPT-4o vs. InternVL2) behave differently when ReGuide is applied could add nuance to the conclusions. For example, the differences in false positive rates (FPR) between models with and without ReGuide should be presented for a better comparison. 3. Scalability and Practicality: While the ReGuide method shows a promising direction, the computational overhead and API limitations mentioned in the paper could present challenges for practical, large-scale implementation. This issue is touched upon but not sufficiently addressed in terms of how ReGuide might be optimized for deployment at scale. Meanwhile, adding an inference-cost analysis could further improve the paper's quality and inspire further work.
2. Model-Specific Insights: The paper focuses on generic findings across models, but a deeper investigation into how specific models (e.g., GPT-4o vs. InternVL2) behave differently when ReGuide is applied could add nuance to the conclusions. For example, the differences in false positive rates (FPR) between models with and without ReGuide should be presented for a better comparison.
ICLR_2021_2892
ICLR_2021
- Proposition 2 seems to lack an argument for why Eq. 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that’s the case, why Eq. 16? As a concrete example, let \hat{h}^\theta_{lm} = 1 if l=m=1 and 0 otherwise, so constant in \theta. The only constant associated Legendre polynomial is P^0_0, so this h is not expressible in Eq. 16. Instead, it seems that additional assumptions on the family of spherical functions h are necessary for the decomposition in Eq. 16, and thus Proposition 2, to work. Hence, it looks like Proposition 2 doesn't actually characterize all azimuthal correlations. - In its discussion of SO(3) equivariant spherical convolutions, the authors do not mention the lift to SO(3) signals, which allows for more expressive filters than the ones shown in figure 1. - Can the authors clarify figure 2b? I do not understand what is shown. - The architecture used for the experiments is not clearly explained in this paper. Instead, the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained. - The authors appear to not use a fast spherical Fourier transform. Why not? This could greatly help performance. Could the authors comment on the runtime cost of the experiments? - Sampling the Fourier features to a spherical signal and then applying a point-wise non-linearity is not exactly equivariant (as noted by Kondor et al 2018). Still, the authors note at the end of Sec 6 “This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.” Perhaps the authors can comment on that? - The experiments are limited to MNIST and a single real-world dataset. - Out of the many spherical CNNs currently in existence, the authors compare only to a single one. For example, comparisons to SO(3) equivariant methods would be interesting. Furthermore, it would be interesting to compare to SO(3) equivariant methods in which SO(3) equivariance is broken to SO(2) equivariance by adding to the spherical signal a channel that indicates the theta coordinate. - The experimental results are presented in an unclear way. A table would be much clearer. - An obvious approach to the problem of SO(2) equivariance of spherical signals is to project the sphere to a cylinder and apply planar 2D convolutions that are periodic in one direction and not in the other. This suffers from distortion of the kernel around the poles, but perhaps this wouldn’t be too harmful. An experimental comparison to this method would benefit the paper. Recommendation: I recommend rejection of this paper. I am not convinced of the correctness of Proposition 2, and Proposition 1 is similar to equivariance arguments made in prior work. The experiments are limited in their presentation, the number of datasets, and the comparisons to prior work. Suggestions for improvement: - Clarify the issue around Eq. 16 and Proposition 2 - Improve presentation of experimental results and add experimental details - Evaluate the model on more data sets - Compare the model to other spherical convolutions Minor points / suggestions: - When talking about the Fourier modes as numbers, perhaps clarify whether these are real or complex. - In Def 1 in the equation it is confusing to have theta twice on the left-hand side. It would be clearer if h did not have a subscript on the left-hand side.
- When talking about the Fourier modes as numbers, perhaps clarify whether these are real or complex.
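For reference, the standard algebra behind "azimuthal correlations", on which the reviewer's concern about Eq. 16 hinges, is that z-axis rotations act diagonally in the spherical-harmonic basis (stated here in generic notation, not the paper's):

```latex
\[
h(\theta,\phi) = \sum_{l \ge 0} \sum_{|m| \le l} \hat{h}_{lm}\, Y_l^m(\theta,\phi),
\qquad
(R_\alpha h)(\theta,\phi) = h(\theta,\phi-\alpha)
\;\Longleftrightarrow\;
\widehat{(R_\alpha h)}_{lm} = e^{-i m \alpha}\, \hat{h}_{lm}.
\]
```

Consequently, a complex-linear operator commutes with all azimuthal rotations exactly when it acts separately on each order $m$; it may still mix different degrees $l$ within the same $m$.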
NIPS_2016_431
NIPS_2016
1. It seems the asymptotic performance analysis (i.e., big-O notation of the complexity) is missing. How is it improved from O(M^6)? 2. On line 205, it should be Fig. 1 instead of Fig. 5.1. In LaTeX, please put '\label' after the '\caption', and the bug will be fixed.
2. On line 205, it should be Fig. 1 instead of Fig. 5.1. In LaTeX, please put '\label' after the '\caption'; that will fix the numbering bug.
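A minimal LaTeX sketch of the \caption/\label fix suggested above (file and label names are illustrative). Placing \label before \caption makes \ref pick up the most recently stepped counter — often the section — which is exactly how a "Fig. 5.1"-style reference arises; placing it after \caption ties the label to the figure counter:

```latex
% requires \usepackage{graphicx}
\begin{figure}[t]
  \centering
  \includegraphics[width=0.7\linewidth]{runtime-plot} % hypothetical file name
  \caption{Runtime comparison.}
  \label{fig:runtime} % correct placement: \label comes after \caption
\end{figure}
% In the text: "as shown in Fig.~\ref{fig:runtime}" now renders as "Fig. 1", not "Fig. 5.1".
```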
IQ0BBfbYR2
ICLR_2025
1. Writing should be seriously improved. It is really tedious to get through the paper. Often terms are not defined properly and the reader really has to rely on the context to understand used terms. This is not possible always. See a list of writing issues below: * I find parts of Fig. 2 unclear. The caption should describe the key features of the model. Why is there a connection between Decoder and External classifier but no arrow. What is it supposed to mean? * line 224 Should describe $d, \kappa$ clearly and what they denote. Is the feature encoding coming from an external network, the classifier $f$ itself etc. What is the distance metric? Euclidean distance? * Implicit/Explicit classifier should be described clearly when describing CoLa-DCE in Sec. 4. What is their input, output domains, what are their roles etc. I guess one of them is $f$. I assume they originate from prior literature but they also seem central to your method so need clear concise description. * line 249: Should define $\mathcal{N}$ or state its the set of natural numbers (can be confused with normal distribution as you use it in Eq. 1). * line 259 Why is the external classifier modelled as p(x|y) (which should be for diffusion model). Wouldn't the classifier be modelled as p(y|x)? * The notations $h, g$ almost appear out of the blue. What are their input, output domains. It is the same issue with $\delta$ but it at least has some brief description. * line 260. You provide absolutely no explanation of notations about concepts, how they are represented, what the binary constraints denote exactly. It is difficult to understand $\lambda_1, ..., \lambda_k$ and $\theta_1, ..., \theta_k$. Are the lambda's just subset of natural numbers from 1 to K? Your notation for $\theta$ also does not seem consistent. For starters, line 236 has it going from 0 to k while at other places it is 1 to k. More importantly, Eq. 7 makes it seem like $\theta$'s denote subset of indices but they are supposed to be binary masks. I am not sure what are these representations exactly, beyond the basic idea that they control which concepts to condition on. * There are two terms for datasets (line 219, 226) $X', \hat{X}$. What is the difference between the two? Is the reference data classification training/validation dataset and the other test data? * line 222 "As the model perception of the data shall be represented, the class predictions of the model are used to determine class affiliation." Really odd phrasing, please make it more clear. * line 319-320 "Using the intermediate ... high flip ratios" I am not sure how you are drawing this conclusion from Tab. 1. Please elaborate on this how you come to this conclusion. * line 469 What is $attr$ supposed to denote exactly. The attribution map for a particular feature/concept? If yes, why are you computing absolute magnitude for relative alignment? Is it the Frobenius norm of the difference of attributions? It should be described more clearly. * Please explain more clearly what the "confidence" metric is? Is it the difference between classifier's probability for the initially predicted class before and after the modifications? * line 295 mentions l2 norm between original and counterfactual image. Was it supposed to be a metric in Tab. 1? 2. The quantitative metrics of CoLa-DCE seem weak. LDCE seems to clearly outperform CoLA-DCE on "Flip-ratio" and "Confidence" metrics while being close on FID.
* The notations $h, g$ almost appear out of the blue. What are their input, output domains. It is the same issue with $\delta$ but it at least has some brief description.
ICLR_2022_2421
ICLR_2022
Weakness: 1. Some typos, such as “TRAFE-OFFS” in the title of section 4.1. 2. The 24 different structures generated by random permutation in section 4.1 should be explained in more detail. 3. The penultimate sentence of Section 3.3 states that "iterative greedy search can avoid the suboptimality of the resulting scaling strategy on a particular model", which is not a rigorous claim because the results of the iterative greedy search are also suboptimal solutions. 4. The conclusion that "Cost breakdown can indicate the transferability effectiveness" in Figure 7 is not sufficiently supported. We cannot extend a conclusion obtained from a few specific experiments to arbitrary hardware devices or architectures. 5. Why not use the same cost metric for different devices, instead of FLOPs, latency, and 1/FPS for different hardware? 6. The result comparison of "Iteratively greedy Search" versus "random search" on the model structure should be supplemented.
6. The result comparison of "Iteratively greedy Search" versus "random search" on the model structure should be supplemented.
ARR_2022_205_review
ARR_2022
- Missing ablations: It is unclear from the results how much performance gain is due to the task formulation, and how much is because of pre-trained language models. The paper should include results using the GCPG model without pre-trained initializations. - Missing baselines for lexically controlled paraphrasing: The paper does not compare with any lexically constrained decoding methods (see references below). Moreover, the keyword control mechanism proposed in this method has been introduced in CTRLSum paper (He et al, 2020) for keywork-controlled summarization. - Related to the point above, the related work section is severely lacking (see below for missing references). Particularly, the paper completely omits lexically constrained decoding methods, both in related work and as baselines for comparison. - The paper is hard to follow and certain sections (particularly the Experimental Setup) needs to made clearer. It was tough to understand exactly where the exemplar/target syntax was obtained for different settings, and how these differed between training and inference for each of those settings. - The paper should include examples of generated paraphrases using all control options studies (currently only exemplar-controlled examples are included in Figure 5). Also, including generation from baseline systems for the same examples would help illustrate the differences better. Missing References: Syntactically controlled paraphrase generation: Goyal et al., ACL2020, Neural Syntactic Preordering for Controlled Paraphrase Generation Sun et al. EMNLP2021, AESOP: Paraphrase Generation with Adaptive Syntactic Control <-- this is a contemporaneous work, but would be nice to cite in next version. Keyword-controlled decoding strategies: Hokamp et al. ACL2017, Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search Post et al, NAACL 2018, Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation Other summarization work that uses similar technique for keyword control: He et al, CTRLSum, Towards Generic Controllable Text Summarization
- Missing ablations: It is unclear from the results how much performance gain is due to the task formulation, and how much is because of pre-trained language models. The paper should include results using the GCPG model without pre-trained initializations.
ICLR_2021_329
ICLR_2021
Weakness - I do not see how MFN 'largely outperforms' existing baseline methods. It is difficult to identify the quality difference between output from the proposed method and SIREN -- shape representation seems to even prefer SIREN's results (what is the ground truth for Figure 5a). The paper is based on the idea of replacing compositional models with recursive, multiplicative ones, though neither the theory nor the results are convincing to prove this linear approximation is better. I have a hard time getting the intuition of the advantages of the proposed method. - this paper, and like other baselines (e.g. SIREN) do not comment much on the generalization power of these encoding schemes. Apart from image completion, are there other experiments showing the non-overfitting results, for example, on shape representation or 3D tasks? - the proposed model has shown to be more efficient in training, and I assume it is also more compact in size, but there is no analysis or comments on that? Suggestions - Result figures are hard to spot differences against baselines. It's recommended to use a zoom or plot the difference image to show the difference. - typo in Corollary 2 -- do you mean linear combination of Gabor bases? - It's recommended to add reference next to baseline names in tables (e.g. place citation next to 'FF Positional' if that refers a paper method) - In Corollary 1, $\Omega$ is not explicitly defined (though it's not hard to infer what it means).
- It's recommended to add reference next to baseline names in tables (e.g. place citation next to 'FF Positional' if that refers a paper method) - In Corollary 1, $\Omega$ is not explicitly defined (though it's not hard to infer what it means).
NIPS_2022_2286
NIPS_2022
Weakness 1. It is hard to understand what the axes are for Figure 1. 2. It is unclear what the major contributions of the paper are. Analyzing previous work does not constitute a contribution. 3. It is unclear how the proposed method enables better results. For instance, Table 1 reports similar accuracies for this work compared to the previous ones. 4. The authors talk about advantages over the previous work in terms of efficiency; however, the paper does not report any metric that shows it is more efficient to train with the proposed method. 5. Does the proposed method converge faster compared to previous algorithms? 6. How do the proposed methods compare against surrogate gradient techniques? 7. The paper does not discuss how the datasets are converted to the spike domain. There are no potential negative societal impacts. One major limitation of this work is its applicability to neuromorphic hardware and how the work shown on GPUs will translate to neuromorphic cores.
1. It is hard to understand what the axes are for Figure 1.
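For context on point 7 of the review above (conversion of datasets to the spike domain): one common choice — not necessarily what the paper under review used — is Bernoulli/Poisson-style rate coding, where pixel intensity sets a per-timestep firing probability. The sketch below is purely illustrative; the number of timesteps T and the [0, 1] normalization are assumptions.

```python
import numpy as np

def rate_encode(images, T=100, seed=0):
    """Bernoulli rate coding: intensities in [0, 1] become per-timestep spike
    probabilities. images: (N, H, W) -> spikes: (T, N, H, W), values in {0, 1}."""
    rng = np.random.default_rng(seed)
    p = np.clip(images, 0.0, 1.0)  # assume inputs are already scaled to [0, 1]
    return (rng.random((T,) + images.shape) < p).astype(np.float32)
```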
ICLR_2022_2971
ICLR_2022
The experiment part is kind of weak. Only experiments on CIFAR-100 are conducted, and only DeiT-small/-base are compared. 1. For comparison with DeiT, can you add DeiT variants with the same number of layers, heads, and hidden dimension as CMHSA for a fair comparison? That DeiT with 12 layers performs worse than CMHSA with 6 layers on CIFAR-100 is expected, and thus not a convincing comparison. 2. If possible, can you add CNN models to show whether CMHSA would actually make ViT perform better than/on par with CNN models, which should be the ultimate goal of training ViT in the low-data regime; otherwise one would pretrain ViT on a large-scale dataset or just use CNN models. 3. If possible, results on ImageNet would make the proposed method more convincing.
3. If possible, results on ImageNet would make the proposed method more convincing.
NIPS_2022_2138
NIPS_2022
Weakness: 1 The main contribution of this paper is about the software, but the theoretical contribution is overstated. The proof of the theorem is quite standard and I do not get any new insight from it. 2 Direct runtime comparisons with existing methods are missing. The proposed approach is based on implicit differentiation which usually requires additional computational costs. Thus, the direct runtime comparison is necessary to demonstrate the efficiency of the proposed approach. 3 Recently, implicit deep learning has attracted much attention, which is very relevant to the topic of this paper. An implementation example of implicit deep neural networks should be included. Moreover, many Jacobian-free methods, e.g., [1-3], have been proposed to reduce the computational cost. Comparisons (runtime and accuracy) with these methods would be preferable. [1] Fung, Samy Wu, et al. "Fixed point networks: Implicit depth models with Jacobian-free backprop." (2021). [2] Geng, Zhengyang, et al. "On training implicit models." Advances in Neural Information Processing Systems 34 (2021): 24247-24260. [3] Ramzi, Zaccharie, et al. "SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models." arXiv preprint arXiv:2106.00553 (2021).
2 Direct runtime comparisons with existing methods are missing. The proposed approach is based on implicit differentiation which usually requires additional computational costs. Thus, the direct runtime comparison is necessary to demonstrate the efficiency of the proposed approach.
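Background for the runtime concern in point 2 (a standard identity, not taken from the paper under review): for an implicit layer defined by a fixed point $z^\star = f(z^\star, \theta)$, the implicit function theorem gives

```latex
\frac{\partial z^\star}{\partial \theta}
  = \left(I - \frac{\partial f}{\partial z}\Big|_{z^\star}\right)^{-1}
    \frac{\partial f}{\partial \theta}\Big|_{z^\star},
```

so each backward pass involves solving a linear system (or iteratively approximating an inverse-Jacobian–vector product) on top of the forward fixed-point solve — the extra computational cost the review refers to. The Jacobian-free / inexact-gradient methods cited as [1-3] cheapen or skip that solve, which is why a direct runtime comparison against them is a reasonable ask.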
ICLR_2021_1744
ICLR_2021
Weakness: This work simply applies the meta-learning method to the federated learning setting. I can’t see any technical contribution, either from the meta-learning perspective or from the federated learning perspective. The experimental results are not convincing because the data partition is not designed for federated learning. Reusing a data partition from the meta-learning context is unrealistic for a federated learning setting. The title is misleading or over-claimed. Only the adaptation phase costs a few rounds, but the communication cost of the meta-training phase is still high. The non-IID partition is unrealistic. The authors simply reuse the dataset partitions used in the meta-learning context, which is not a real federated setting. In other words, the proposed method can only work on distributions similar to the meta-learning setting. Some meta-learning-related benefits are intertwined with reducing communication costs. For example, the author claimed the proposed method has better generalization ability; however, this comes from the contribution of meta-learning. More importantly, this property can only be evident when the cross-client data distribution meets the assumptions made in the context of meta-learning. The comparison is unfair to FedAvg. At the least, we should let FedAvg use the same clients and dataset resources as those used in Meta-Training and Few-Rounds adaptation. “Episodic training” is a term from meta-learning. I suggest the authors introduce meta-learning and its advantages first in the Introduction. Few-shot FL-related works are not fully covered. Several recently published knowledge-distillation-based few-shot FL methods should be discussed. Overall Rating I tend to clearly reject this paper because: 1) the proposed framework is a simple combination of meta-learning and federated learning. I cannot see any technical contribution. 2) Claiming that the few-round adaptation can reduce communication costs for federated learning is misleading, since the meta-training phase is also expensive. 3) the data partition is directly borrowed from meta-learning, which is unrealistic in federated learning. ---------after rebuttal-------- The rebuttal does not convince me with evidence, so I keep my overall rating. I hope the authors can explicitly compare the total cost of the meta-learning phase plus the FL fine-tuning phase with other baselines.
1) the proposed framework is a simple combination of meta-learning and federated learning. I cannot see any technical contribution.
ICLR_2023_2406
ICLR_2023
1. However, my major concern is that the contribution is insufficient. In general, the authors studied the connection between complementarity and model robustness but without further studies on how to leverage such characteristics to improve model robustness. Even though this paper could be the first work to study this connection, the conclusion could be easily and intuitively obtained, i.e., when multimodal complementarity is higher, robustness is more fragile when one of the modalities is corrupted. Beyond the analysis of the connection between complementarity and robustness, more insightful findings or possible solutions are expected. 2. The proposed metric is calculated on features extracted by pre-trained models. So pre-trained models are necessary for computing the metric, which contradicts the claim that the metric measures the complementarity of the multimodal data itself. In addition, in my opinion, the metric is unreliable since the model participates in the metric calculation and will inevitably affect the results. 3. There are many factors that affect the model's robustness. Multimodal data complementarity is one of them. However, multimodal data complementarity is not solely determined by the data itself. For example, classification on MS-COCO data is obviously less complementary than VQA on MS-COCO data. As mentioned by the authors, the VQA task requires both modalities for question answering; accordingly, complementarity is determined by both the multimodal data and the target task. However, I didn't see much further discussion of these possible factors.
1. However, my major concern is that the contribution is insufficient. In general, the authors studied the connection between complementarity and model robustness but without further studies on how to leverage such characteristics to improve model robustness. Even though this paper could be the first work to study this connection, the conclusion could be easily and intuitively obtained, i.e., when multimodal complementarity is higher, robustness is more fragile when one of the modalities is corrupted. Beyond the analysis of the connection between complementarity and robustness, more insightful findings or possible solutions are expected.
hbon6Jbp9Q
ICLR_2025
- The pruning method does not appear to offer much beyond the method of feature reweighted representational similarity analysis, which is quite popular (see Kaniuth and Hebert, 2022; NeuroImage). In fact, it is essentially a particular limited case of FR-RSA, where the weights of features are either 0 or 1. The authors do not appear well aware of the literature, as only 20 references are made. - I found the technique of using multiple different feature spaces (the 25 feature space of Mitchell et. al to fit voxel encoding models, then the full/pruned glove model to analyze similarities within clusters) to be convoluted and potentially circular. - As the technique is not particularly novel, it is important that the authors deliver some clear novel findings about brain function. The abstract only lists one "From a neurobiological perspective, we find that brain regions encoding social and cognitive aspects of lexical items consistently also represent their sensory-motor features, though the reverse does not hold." I did not find the case for this finding to be particularly strong. I welcome the authors to make the case more strongly. - The figures are poorly made, certainly well below the bar of ICLR, and do not communicate much if anything that will affect how researcher's think about semantic organization in the brain. For example, while an interesting approach of clustering brain regions is used, these regions are never visualized. In the one plot that attempts to explain some differences across brain regions, the authors use arbitrary number indices for brain regions; at minimum, anatomical labels are needed. However, in 2024, it is expected that a strong paper on this topic can make elegant visualizations of the cortical surface. Practitioners in this field understand the importance of such visualizations for relating findings to pre-existing conceptual notions of cortical organization, and for driving further intuition that will affect future research. - The authors waste precious space presenting fits to training data (see Table 1, "complete dataset", which reports the representational similarity after selecting the features that optimize to improve that representational similarity). Only the cross-validated results are worth presenting. - Focusing on which clusters are "best" rather than what the differences in representation are between them, seems an odd choice given the motivation of the paper. - Averaging voxels across subjects is likely to drastically reduce the granularity of the possible findings, since there is no expectation in voxel-level alignment of fine-grained conceptual information, but only larger-scale information. I believe it would be better to construct the clusters using all subject's individual data in a group-aligned space, where the same methods can otherwise be used, but individual voxel's are kept independent and not averaged across subjects.
- Focusing on which clusters are "best" rather than what the differences in representation are between them, seems an odd choice given the motivation of the paper.
vXSCD3ToCS
ICLR_2025
- The paper appears to require daily generation of the dynamic road network topology using the tree-based adjacency matrix generation algorithm. The efficiency of this process remains unclear. Additionally, since the topologies undergo minimal changes between consecutive days, and substantial information is shared across these days, it raises the question of whether specialized algorithms are available to accelerate this topology generation. - The authors present performance results only for two districts, D06 and D11. It is recommended to extend the reporting to include experimental results from the remaining seven districts. - There is an inconsistency in the layout of the document: Figure 5 is referred to on line 215 of Page 4, yet it is located on Page 7. - The caption for Figure 7 is incorrect, and should be corrected to "Edge Dynamics" from "Node Dynamics". - It is recommended that some recent related studies be discussed in the paper, particularly focusing on their performance with this dataset. [1] UniST: A Prompt-Empowered Universal Model for Urban ST Prediction. KDD2024. [2] Fine-Grained Urban Flow Prediction. WWW2021. [3] When Transfer Learning Meets Cross-City Urban Flow Prediction: Spatio-Temporal Adaptation Matters. IJCAI2022. [4] Spatio-Temporal Self-Supervised Learning for Traffic Flow Prediction. AAAI2023.
- The caption for Figure 7 is incorrect, and should be corrected to "Edge Dynamics" from "Node Dynamics".
o6D5yTpK8w
EMNLP_2023
1. An incremental study of this topic based on previous works. - [Bao IJCAI 2022] Aspect-based Sentiment Analysis with Opinion Tree Generation, IJCAI 2022 (OTG method) - [Bao ACL Finding 2023] Opinion Tree Parsing for Aspect-based Sentiment Analysis, Findings of ACL 2023 The techniques involved in the proposed framework have been commonly used for ABSA, including graph pre-training, opinion tree generation and so on, and it seems not surprising enough to combine them together. The experimental results only show the performance can be improved, but lack of the explanations of why the performance can be improved. 2. The experimental results are not exactly convincing, by comparing with the main results in [Bao IJCAI 2022] and [Bao ACL Finding 2023]. For example, - For the scores of OTG method, [Bao ACL Finding 2023] < this paper < [Bao IJCAI 2022]. Note that this is a significant difference, for example, on the Restaurant dataset, for F1 score, [Bao ACL Finding 2023] 0.6040 < this paper 0.6164 < [Bao IJCAI 2022] 0.6283; on the laptop dataset, for F1 score, [Bao ACL Finding 2023] 0.3998 < this paper 0.4394 < [Bao IJCAI 2022] 0.4544 - On the laptop dataset, for F1 score, although the scores of OTG method in this paper 0.4394 < the scores of proposed method in this paper 0.4512; The scores of OTG method in [Bao IJCAI 2022] 0.4544 > the scores of proposed method in this paper 0.4512; There are also other significant differences on the performance of the baseline methods in these papers. 3. It could be convincing to discuss case studies and error studies to highlight the effectiveness of each proposed component. For example, this paper mentions that the Element-level Graph Pre-training abandons the strategy of capturing the complex structure but focuses directly on the core elements. However, without case study, it is less convincing to figure it out. An example of case study can be found in “Graph pre-training for AMR parsing and generation”.
3. It could be convincing to discuss case studies and error studies to highlight the effectiveness of each proposed component. For example, this paper mentions that the Element-level Graph Pre-training abandons the strategy of capturing the complex structure but focuses directly on the core elements. However, without case study, it is less convincing to figure it out. An example of case study can be found in “Graph pre-training for AMR parsing and generation”.
ICLR_2023_935
ICLR_2023
1 The traditional DCI framework may already account for explicitness (E) and size (S). For instance, to evaluate the disentanglement (D) of different representation methods, you may need to use a fixed capacity of probing (f), and the latent size should also be fixed. DCI and ES may be entangled with each other. For instance, if you change the capacity of probing or the latent size, then the DCI evaluation also changes correspondingly. The reviewer still needs clarification on the motivation for considering explicitness (E) and size (S) as extra evaluations. 2 Intuitively, explicitness (E) and size (S) may be highly related to the given dataset. The different capacity requirements in the 3rd paragraph may be due to the difference in input modality. Given a fixed dataset, the evaluation of disentanglement should provide enough capacity and training time to achieve the DCI evaluation. If the capacity of probing needs to be evaluated, then the training time, cost, and learning rate may also be considered, because they may influence the final value of DCI.
1 The traditional DCI framework may already account for explicitness (E) and size (S). For instance, to evaluate the disentanglement (D) of different representation methods, you may need to use a fixed capacity of probing (f), and the latent size should also be fixed. DCI and ES may be entangled with each other. For instance, if you change the capacity of probing or the latent size, then the DCI evaluation also changes correspondingly. The reviewer still needs clarification on the motivation for considering explicitness (E) and size (S) as extra evaluations.
jxgz7FEqWq
EMNLP_2023
1. The comparison experiment with the MPOP method is lacking (See Missing References [1]). MPOP method is a lightweight fine-tuning method based on matrix decomposition and low-rank approximation, and it has a strong relevance to the method proposed in the paper. However, there is no comparison with this baseline method in the paper. 2. In Table 4, only the training time of the proposed method and AdaLoRA is compared, lacking the efficiency comparison with LoRA, Bitfit, and Adapter. 3. In the experimental section of the paper, the standard deviation after multiple experiments is not provided. The improvement brought by SoRA compared with the baseline is quite limited, which may be due to random fluctuations. The author should clarify which effects are within the range of standard deviation fluctuations and which are improvements brought by the SoRA method.
3. In the experimental section of the paper, the standard deviation after multiple experiments is not provided. The improvement brought by SoRA compared with the baseline is quite limited, which may be due to random fluctuations. The author should clarify which effects are within the range of standard deviation fluctuations and which are improvements brought by the SoRA method.
ICLR_2022_3010
ICLR_2022
1. Missing important related works The related works are not well-reviewed. To be specific, only two papers of 2020 are discussed and recent papers of 2021 are missing. For example, the following papers are very close to the topic this paper addresses: Dynamic Fusion With Intra-and Inter-Modality Attention Flow for Visual Question Answering, CVPR 2019. Deep Modular Co-Attention Networks for Visual Question Answering, CVPR 2019. Uniter: Universal image-text representation learning, ECCV 2020. Visualbert: A simple and performant baseline for vision and language. ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. 2. Compared baselines are not advanced enough. VQA is a very popular research topic and plenty of approaches are benchmarked on the VQA-v2 dataset and the GQA dataset. However, many recent advanced methods are missed. For example, only one method after 2019 is compared in Table 3. 3. Missing some details of model implementation. What are the models used to extract entity-level, noun phrase level, and sentence level features in section 3.1.2? 4. This paper is not well organized. The layout of this paper is a bit rushed. For example, The font size of some annotations of Figure1 and Figure 2 is relatively small. And these two figures are not drawn explicitly enough. Table 2 is inserted wrongly inside of a paragraph. Top two lines on page 6 are in the wrong format.
4. This paper is not well organized. The layout of this paper is a bit rushed. For example, The font size of some annotations of Figure1 and Figure 2 is relatively small. And these two figures are not drawn explicitly enough. Table 2 is inserted wrongly inside of a paragraph. Top two lines on page 6 are in the wrong format.
NIPS_2019_1049
NIPS_2019
- While the types of interventions included in the paper are reasonable computationally, it would be important to think about whether they are practical and safe for querying in the real world. - The assumption of disentangled factors seems to be a strong one given factors are often dependent in the real world. The authors do include a way to disentangle observations though, which helps to address this limitation. Originality: The problem of causal misidentification is novel and interesting. First, identifying this phenomenon as an issue in imitation learning settings is an important step towards improved robustness in learned policies. Second, the authors provide a convincing solution as one way to address distributional shift by discovering the causal model underlying expert action behaviors. Quality: The quality of the work is high. Many details are not included in the main paper, but the appendices help to clarify some of the confusion. The authors evaluated the approach on multiple domains with several baselines. It was particularly helpful to see the motivating domains early on with an explanation of how the problem exists in these domains. This motivated the solution and experiments at the end. Clarity: The work was very well-written, but many parts of the paper relied on pointers to the appendices so it was necessary to go through them to understand the full details. There was a typo on page 3: Z_t → Z^t. Significance: The problem and approach can be of significant value to the community. Many current learning systems fail to identify important features relevant for a task due to limited data and due to the training environment not matching the real world. Since there will almost always be a gap between training and testing, developing approaches that learn the correct causal relationships between variables can be an important step towards building more robust models. Other comments: - What if the factors in the state are assumed to be disentangled but are not? What will the approach do/in what cases will it fail? - It seems unrealistic to query for expert actions at arbitrary states. One reason is because states might be dangerous, as the authors point out. But even if states are not dangerous, parachuting to a particular state would be hard practically. The expert could instead be simply presented a state and asked what they would do hypothetically (assuming the state representations of the imitator and expert match, which may not hold), but it could be challenging for an expert to hypothesize what he or she would do in this scenario. Basically, querying out of context can be challenging with real users. - In the policy execution mode, is it safe to execute the imitator’s learned policy in the real world? The expert may be capable of acting safely in the world, but given that the imitator is a learning agent, deploying the agent and accumulating rewards in the real world can be unsafe. - On page 7, there is a reference to equation 3, which doesn’t appear in the main submission, only in the appendix. - In the results section for intervention by policy execution, the authors indicate that the current model is updated after each episode. How long does this update take? - For the Atari game experiments, how is the number of disentangled factors chosen to be 30? In general, this might be hard to specify for an arbitrary domain. - Why is the performance for DAgger in Figure 7 evaluated at fewer intervals? The line is much sharper than the intervention performance curve. 
- The authors indicate that GAIL outperforms the expert query approach but that the number of episodes required is an order of magnitude higher. Is there a reason the authors did not plot a more equivalent baseline to show a fair comparison? - Why is the variance on Hopper so large? - On page 8, the authors state that the choice of the approach for learning the mixture of policies doesn’t matter, but disc-intervention clearly obtains much higher reward than unif-intervention in Figures 6 and 7, so it seems like it does make a difference. ----------------------------- I read the author response and was happy with the answers. I especially appreciate the experiment on testing the assumption of disentanglement. It would be interesting to think about how the approach can be modified in the future to handle these settings. Overall, the work is of high quality and is relevant and valuable for the community.
- While the types of interventions included in the paper are reasonable computationally, it would be important to think about whether they are practical and safe for querying in the real world.
NIPS_2016_537
NIPS_2016
weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63 refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated". 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
7) in the same section, the notation {\cal P} with a subscript is used several times without being defined.
NIPS_2017_110
NIPS_2017
of this work include that it is a not-too-distant variation of prior work (see Schiratti et al., NIPS 2015), the search for hyperparameters for the prior distributions and sampling method does not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to the model's application demonstrate the greatest relative error, and the experiments are not described with adequate detail. This last issue is particularly important as the rupture time is what clinicians would be using to determine treatment choices. In the experiments with real data, a fully Bayesian approach would have been helpful to assess the uncertainty associated with the rupture times. Particularly, a probabilistic evaluation of the prospective performance is warranted if that is the setting in which the authors imagine it to be most useful. Lastly, the details of the experiment are lacking. In particular, the RECIST score is a categorical score, but the authors evaluate a numerical score, the time scale is not defined in Figure 3a, and no overall statistics are reported in the evaluation, only figures with a select set of examples, and there was no mention of out-of-sample evaluation. Specific comments: - l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters). - l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear. - l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function. - Throughout, the authors use the term constrains and should change to constraints. - l124: What is meant by the (*)? - l134: Do the authors mean m=2? - l148: known, instead of know - l156: please define \gamma_0^{***} - Figure 1: Please specify the meaning of the colors in the caption as well as the text. - l280: "Then we made it explicit" instead of "Then we have explicit it"
- l148: known, instead of know - l156: please define \gamma_0^{***} - Figure 1: Please specify the meaning of the colors in the caption as well as the text.
ICLR_2021_1832
ICLR_2021
The paper perhaps bites off a little more than it can chew. It might be best if the authors focused on their theoretical contributions in this paper, added more text and intuition about the extensions of their current bias-free NNs, fleshed out their analyses of the lottery ticket hypothesis and stopped at that. The exposition and experiments done with tropical pruning need more work. Its extension to convolutional layers is a non-trivial but important aspect that the authors are strongly encouraged to address. This work could possibly be written up into another paper. Similarly, the work done towards generating adversarial samples could definitely do with more detailed explanations and experiments. Probably best left to another paper. Contributions: The theoretical contributions of the work are significant and interesting. The fact that the authors have been able to take their framework and apply it to multiple interesting problems in the ML landscape speaks to the promise of their theory and its resultant perspectives. The manner in which the tropical geometric framework is applied to empirical problems however, requires more work. Readability: The general organization and technical writing of the paper are quite strong, in that concepts are laid out in a manner that make the paper approachable despite the unfamiliarity of the topic for the general ML researcher. The language of the paper however, could do with some improvement; Certain statements are written such that they are not the easiest to follow, and could therefore be misinterpreted. Detailed comments: While there are relatively few works that have explicitly used tropical geometry to study NN decision boundaries, there are others such as [2] which are similar in spirit, and it would be interesting to see exactly how they relate to each other. Abstract: It gets a little hard to follow what the authors are trying to say when they talk about how they use the new perspectives provided by the geometric characterizations of the NN decision boundaries. It would be helpful if the tasks were clearly enumerated. Introduction: “For instance, and in an attempt to…” Typo – delete “and”. Similar typos found in the rest of the section too, addressing which would improve the readability of the paper a fair bit. Preliminaries to tropical geometry: The preliminaries provided by the authors are much appreciated, and it would be incredibly helpful to have a slightly more detailed discussion of the same with some examples in the appendix. To that end, it would be a lot more insightful to discuss ex. 2 in Fig. 1, in addition to ex. 1. What exactly do the authors mean by the “upper faces” of the convex hull? The dual subdivision and projection π need to be explained better. Decision boundaries of neural networks: The variable ‘p’ is not explicitly defined. This is rather problematic since it has been used extensively throughout the rest of the paper. It would make sense to move def. 6 to the section discussing preliminaries. Digesting Thm. 2: This section is much appreciated and greatly improves the accessibility of the paper. It would however be important, to provide some intuition about how one would study decision boundaries when the network is not bias-free, in the main text. In particular, how would the geometry of the dual subdivision δ ( R ( x ) ) change? 
On a similar note, how do things change in practice when studying deep networks that are not bias free, given that “Although the number of vertices of a zonotope is polynomial in the number of its generating line segments, fast algorithms for enumerating these vertices are still restricted to zonotopes with line segments starting at the origin”? Can Prop. 1 and Cor. 1 be extended to this case trivially? Tropical perspective to the lottery ticket hypothesis: It would be nice to quantify the (dis)similarity in the shape of the decision-boundary polytopes across initializations and pruning using something like the Wasserstein metric. Tropical network pruning: How are λ_1, λ_2 chosen? Any experiments conducted to decide on the values of the hyper-parameters should be mentioned in the main text and included in the appendix. To that end, is there an intuitive way to weight the two hyper-parameters relative to each other? Extension to deeper networks: Does the order in which the pruning is applied to different layers really make a difference? It would also be interesting to see whether this pruning can be parallelized in some way. A little more discussion and intuition regarding this extension would be much appreciated. Experiments: The descriptions of the methods used as comparisons are a little confusing – in particular, what do the authors mean when they say “pruning for all parameters for each node in a layer”? Wouldn’t these just be the weights in the layer? “…we demonstrate experimentally that our approach can outperform all other methods even when all parameters or when only the biases are fine-tuned after pruning” – it is not immediately obvious why one would only want to fine-tune the biases of the network post pruning, and a little more intuition on this front might help the reader better appreciate the proposed work and its contributions. Additionally, it might be an unfair comparison to make with other methods, since the objective of the tropical geometry-based pruning is preservation of decision boundaries while that of most other methods is agnostic of any other properties of the NN’s representational space. Going by the results shown in Fig. 5, it would perhaps be better to say that the tropical pruning method is competitive with other pruning methods, rather than outperforming them (e.g., other methods seem to do better with the VGG16 on SVHN and CIFAR100). “Since fully connected layers in DNNs tend to have much higher memory complexity than convolutional layers, we restrict our focus to pruning fully connected layers.” While it is true that fully connected layers tend to have higher memory requirements than convolutional ones, the bulk of the parameters in modern CNNs still belong to convolutional layers. Moreover, the most popular CNNs are now fully convolutional (e.g., ResNet, UNet), which would mean that the proposed methods in their current form would simply not apply to them. Comparison against tropical geometry approaches for network pruning – why are the accuracies for the two methods different when 100% of the neurons are kept and the base architecture used is the same? The numbers reported are (100, 98.6, 98.84). Tropical adversarial attacks: Given that this topic is not at all elaborated upon in the main text (and none of the figures showcase any relevant results either), it is strongly recommended that the authors either figure out a way to allocate significantly more space to this section, or not include it in this paper.
(The idea itself though seems interesting and could perhaps make for another paper in its own right.) References: He et al. 2018a and 2018b seem to be the same. [1] Zhang L. et al., “Tropical Geometry of Deep Neural Networks”, ICML 2018. [2] Balestriero R. and Baraniuk R., “A Spline Theory of Deep Networks”, ICML 2018.
1. What exactly do the authors mean by the “upper faces” of the convex hull? The dual subdivision and projection π need to be explained better. Decision boundaries of neural networks: The variable ‘p’ is not explicitly defined. This is rather problematic since it has been used extensively throughout the rest of the paper. It would make sense to move def.
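On the suggestion in the review above to quantify the (dis)similarity of decision-boundary polytopes with something like the Wasserstein metric: assuming one can extract two equal-size vertex sets at all (which, per the quoted limitation on zonotope vertex enumeration, is itself the hard part), an assignment-based proxy for the 1-Wasserstein distance between the two vertex clouds could look like the sketch below. This is an illustration of the suggestion, not anything taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def vertex_set_distance(V1, V2):
    """W1-style distance between equal-size vertex sets V1, V2 of shape (n, d),
    e.g., zonotope vertices of the decision-boundary polytopes of two
    differently initialized or pruned networks. For equal-size sets with
    uniform weights, the optimal transport plan reduces to a matching."""
    C = cdist(V1, V2)                      # pairwise Euclidean costs
    rows, cols = linear_sum_assignment(C)  # optimal one-to-one matching
    return C[rows, cols].mean()
```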
NIPS_2017_303
NIPS_2017
of their approach with respect to the previous SUCRL. The provided numerical simulation is not conclusive but supports the above considerations; - Clarity: the paper could be clearer but is sufficiently clear. The authors provide an example and a theoretical discussion which help in understanding the mathematical framework; - Originality: the work seems to be sufficiently original with respect to its predecessor (SUCRL) and with respect to other published works in NIPS; - Significance: the motivation of the paper is clear and relevant since it addresses a significant limitation of previous methods; Other comments: - Line 140: here the first column of Qo is replaced by vo to form P'o, so that the first state is no longer reachable except from a terminating state. I assume that either Ass.1 (finite length of an option) or Ass. 2 (the starting state is a terminal state) clarifies this choice. In the event this is the case, the authors should mention the connection between the two; - Line 283: "four" -> "for"; - Line 284: "where" -> "were";
- Originality: the work seems to be sufficiently original with respect to its predecessor (SUCRL) and with respect to other published works in NIPS;
NIPS_2021_2445
NIPS_2021
and strengths in their analysis with sufficient experimental detail, it is admirable, but they could provide more intuition why other methods do better than theirs. The claims could be better supported. Some examples and questions (if I did not miss anything): Why is using normalization a problem for a network or a task (it can be thought of as part of the cosine distance)? How would Barlow Twins perform if their invariance term is replaced with a Euclidean distance? Your method still uses 2048 as the batch size; I would not consider that small. For example, SimCLR uses examples in the same batch and its batch size changes between 256-8192. Most of the methods you mentioned need much lower batch sizes. You mentioned not sharing weights as an advantage, but you have shared weights in your results, except Table 4 in which the results degraded as you mentioned. What stops the other methods from using different weights? It should be possible even though they have a covariance term between the embeddings; how much would their performance be affected compared with yours? My intuition is that a proper design might be sufficient rather than separating variance terms. - Do you have a demonstration or result related to your model collapsing less than other methods? In line 159, you mentioned gradients become 0 and collapse; it was a good point. Is it commonly encountered, and did you observe it in your experiments? - I am also not convinced by the idea that the images and their augmentations need to be treated separately; they can be interchangeable. - Variances of the results could be included to show the stability of the algorithms since it was another claim in the paper (although "collapsing" shows it partly, it is a biased criterion since the other methods are not designed for var/cov terms). - How hard is it to balance these 3 terms? - When someone thinks about gathering the two batches from the two networks and calculating the global batch covariance in this way, it includes both your terms and Barlow Twins' terms. Can anything be said based on this observation, about which one is better and why? Significance: Currently, the paper needs more solid intuition or analysis or better results to make an impact in my opinion. The changes compared with the prior work are minimal. Most of the ideas and problems in the paper are important, but they are already known. The comparisons with the previous work are valuable to the field; they could maybe extend their experiments to more of the mentioned methods or other variants. The authors did a great job in presenting their work's limitations, their results in general not being better than the previous works, and their extensive analysis (tables). If they did a better job in explaining the reasons/intuitions in a more solid way, or included some theory if there is any, I would be inclined to give an accept.
- I am also not convinced by the idea that the images and their augmentations need to be treated separately; they can be interchangeable.
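One way to make the "global batch covariance" observation in the review above concrete (my own illustration; the exact loss definitions of the paper and of Barlow Twins are not reproduced here): concatenating the two embedding batches along the feature axis and taking the covariance gives a block matrix whose diagonal blocks are the within-branch covariances — what variance/covariance-style terms act on — and whose off-diagonal block is the cross-branch covariance, which is what a Barlow Twins-style cross-correlation loss constrains after feature standardization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 64
z1 = rng.normal(size=(n, d))             # embeddings from branch 1 (toy stand-in data)
z2 = z1 + 0.1 * rng.normal(size=(n, d))  # embeddings from branch 2

z = np.concatenate([z1, z2], axis=1)     # (n, 2d): stack along the feature axis
C = np.cov(z, rowvar=False)              # (2d, 2d) "global" batch covariance

C11 = C[:d, :d]  # within-branch covariance of z1 (variance/covariance-style terms act here)
C22 = C[d:, d:]  # within-branch covariance of z2
C12 = C[:d, d:]  # cross-branch block (a Barlow Twins-style loss constrains this, up to standardization)
```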
NIPS_2016_313
NIPS_2016
Weakness: 1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows the detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim. 2. It lacks detail about the techniques, which makes it hard to reproduce the results. For example, the sparsification process is unclear, even though it is important for extracting the landmark features for the following steps. And how to generate the landmarks on the edge? How to decide the number of landmarks used? What kind of image features? What is the fixed radius with different scales? How to achieve shape invariance, etc.? 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning-based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods. 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images?
1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows the detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim.
NIPS_2018_25
NIPS_2018
- My understanding is that R, t, and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the case and these parameters are learned, what is the loss function? - Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned. - "semantic" segmentation is not low-level since the categories are specified for each pixel, so the statements about semantic segmentation being a low-level cue should be removed from the paper. - During evaluation at test time, how is the 3D alignment between the prediction and the groundtruth found? - Please comment on why the performance of GTSeeNet is lower than that of SeeNetFuse and ThinkNetFuse. The expectation is that groundtruth 2D segmentation should improve the results. - line 180: Why not use the same number of samples for SUNCG-D and SUNCG-RGBD? - What does NoSeeNet mean? Does it mean D=1 in line 96? - I cannot parse lines 113-114. Please clarify.
- Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned.
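For readers unfamiliar with what the first bullet is asking about: a re-projection layer that consumes intrinsics K and extrinsics (R, t) typically applies the standard pinhole model sketched below. Whether the paper provides these parameters at test time or learns them is precisely the reviewer's question; the sketch is generic and not taken from the paper.

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates with a pinhole camera:
    x ~ K (R X + t), followed by perspective division."""
    cam = points_3d @ R.T + t        # world -> camera coordinates
    pix = cam @ K.T                  # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]  # perspective divide
```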
pwW807WJ9G
ICLR_2024
1. Although the authors derive PAC-Bayesian bound for GNNs in the transductive setting and show how the interplay between training and testing sets influences the generalization ability, I fail to see the strong connection between the theoretical analysis and the proposed method. The proposed method seems to simply adopt the idea of the self-attention mechanism from the transformer and apply it to the graph. It's not clear to me how the proposed method enhances the generalization for the distant nodes. 2. My major concern about the proposed method is the graph partition as partitioning the graph usually leads to information loss. Though node2vec is used for positional encoding purposes, it only encodes the local topological structure, and it cannot compensate for the lost information between different subgraphs. Based on algorithm 1 in Appendix E, there is no information exchange between different subgraphs. The nodes in a subgraph can only receive the information from other nodes within this subgraph and these nodes are isolated from the nodes from other subgraphs. The performance seems to highly depend on the quality of the graph partition algorithms. However, it's unclear whether different graph partitions will influence the performance of the proposed method or not. 3. Some experimental setups are not quite clear. See questions below.
1. Although the authors derive PAC-Bayesian bound for GNNs in the transductive setting and show how the interplay between training and testing sets influences the generalization ability, I fail to see the strong connection between the theoretical analysis and the proposed method. The proposed method seems to simply adopt the idea of the self-attention mechanism from the transformer and apply it to the graph. It's not clear to me how the proposed method enhances the generalization for the distant nodes.
ICLR_2021_1705
ICLR_2021
and suggestions for improvement: I have several concerns about the potential impact of both theoretical and practical results. Mainly: By referring to Wilson et al., 2017, the authors argue that diagonal step sizes in adaptive algorithms hurt generalization. First, I find this claim rather vague, as there have been many follow-ups to Wilson et al., 2017, so I suggest the authors be more precise and include more recent observations. Moreover, one can use non-diagonal versions of these algorithms. For example, see [2] and Adagrad-norm from Ward et al., 2019; it is easy to consider similar non-diagonal versions of Adam/AMSGrad/Adagrad with first-order momentum (a.k.a. AdamNC or AdaFOM); then, are these algorithms also supposed to have good generalization? I think it is important to see how these non-diagonal adaptive methods behave in practice compared to SGD/Adam+ for generalization to support the authors' claim. I think the algorithm seems more like an extension of momentum SGD than Adam. It is nice to improve the \epsilon^{-4} complexity with the Lipschitz Hessian assumption, but what happens when this assumption fails? Does Adam+ get the standard \epsilon^{-4}? From what I understand of the remark after Lemma 1, the variance reduction is ensured by taking β to 0. The authors use 1/T^a for some a ∈ (0, 1). Here, I have several questions. First, how does such a small β work in practice? If, in practice, a larger β works well while the theory requires β → 0 to work, it shows to me that the theoretical analysis of the paper does not translate to the practical performance. When one uses β values that work well in practice, does the theory show convergence? Related to the previous part, I am also not sure about the "adaptivity" of the method. The authors need to use the Lipschitz constants L, L_H to set step sizes. Moreover, β is also fixed in advance, depending on the horizon T, which is the main reason to have variance reduction on z_t − ∇f(w_k). So, I do not understand what is adaptive in the step size or in the variance reduction mechanism of the method. For experiments, the authors say that Adam+ is comparable with "tuned" SGD. However, from the explanations in the experimental part, I understand that Adam+ is also tuned similarly to SGD. Then, what is the advantage compared to SGD? If one needs the same amount of tuning for Adam+, and the performance is similar, I do not see much advantage compared to SGD. On this front, I suggest the authors show what happens when the step-size parameter is varied: is Adam+ more robust to non-tuned step sizes compared to SGD? To sum up, I vote for rejection since 1) the analysis and parameters require strict conditions, 2) it is not clear if the analysis illustrates the practical performance (very small β is needed in theory), 3) practical merit is unclear since the algorithm needs to be tuned similarly to SGD and the results are also similar to SGD. [1] Zhang, Lin, Jegelka, Jadbabaie, Sra, Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions, ICML 2020. [2] Levy, Online to Offline Conversions, Universality and Adaptive Minibatch Sizes, NIPS 2017. ======== after discussion phase ========== I still think that the merit of the method is unclear due to the following reasons: 1) It is not clear how the method behaves without Lipschitz Hessian assumption.
2) The method only obtains the state-of-the-art complexity of ϵ^{-3.5} with large mini-batch sizes, and the complexity with small mini-batch sizes (section 2.1) is suboptimal (in fact, drawbacks such as this need to be presented explicitly; right now I do not see enough discussion about this). 3) The adaptive variance reduction property claimed by the authors boils down to picking a "small enough" β parameter, which in my opinion takes away the adaptivity claim and is, for example, not the case in adaptive methods such as AdaGrad. 4) The comparison with AdaGrad and Adam with scalar step sizes is not included (the authors promised to include it later, but I cannot make a decision about these without seeing results) and I am not sure if Adam+ will bring benefits over them. 5) The presentation of the paper needs major improvements. I recommend making the remarks after Lemma 1 and the theorems clearer, by writing down exact expressions and the implications of these (for example, remarks such as "As the algorithm converges with E[‖∇F(w)‖^2] and β decreases to zero, the variance of z_t will also decrease" can be made more rigorous and clearer by writing down exactly the bound for the variance of z_t, iterating the recursion written with E[δ_{t+1}], and highlighting what each term does in the bound. This way it will be much easier for readers to understand your paper). Therefore, I am keeping my score.
1) It is not clear how the method behaves without the Lipschitz Hessian assumption.
IksoHnq4rC
EMNLP_2023
TL;DR I appreciate the efforts and observations / merits found by the authors. However, this paper poorly presents the methodology (both details and its key advantage), and it’s hard to validate the conclusion with such little hyperparameter analysis. I would love to see more detailed results, but I could not recommend this version for acceptance as an EMNLP paper. 1 There are too many missing details when presenting the methodology: e.g., what will be the effect if I remove one or two of the losses presented by the authors? Though their motivations are clear, they do not validate the hypothesis clearly. 2 A lot of equations look like placeholders, such as equations (1, 2, 3, 5, 6). 3 Some of the pieces simply use existing methods, such as equation (12), and the presentation of these methods is also vague (they can only be understood after checking the original paper). 4 The pipeline misses a lot of details. For example, how long does it take to pre-train each module? How will adding pre-training benefit the performance? How to schedule the training of the discriminator and the main module? Not to mention the detailed design of the RNN network used. 5 Why do we need to focus on the four aspects? They are just listed there. Also, some of the results presentation does not seem to be thorough and valid. For example, in Table 2, the Quora dataset has the highest perturbation ratio, but its performance drop is the smallest among the three. Is it really because the adversarial samples are effective, rather than due to task variance or dataset variance? Also, we didn’t see the attack performance of other comparison methods. And how is the test set generated? What is the size of the adversarial test set and why is that a good benchmark? 6 In Table 4, it’s actually hard to say which is better, A^3 or A^2T, if you count the number of winners for each row and column. 7 In Table 5, does the computation time also include the pre-training stage? If not, why? Can the pre-training stage serve as a unified step which is agnostic to the dataset and tasks? 8 I don’t quite understand the point of section 4.6, and its relationship to the effectiveness of A^3. The influence of \rho seems to be really obvious. I would be more interested in changing the six hyperparameters mentioned in line 444 and testing their effectiveness. 9 The related work section is also not very well-written. I couldn’t understand what the key difference and key advantage of A^3 are compared to the other methods.
3 Some of the pieces simply use existing methods, such as equation (12), and the presentation of these methods is also vague (they can only be understood after checking the original paper).
ARR_2022_12_review
ARR_2022
I feel the design of NVSB and some experimental results need more explanation (more information in the section below). 1. In Figure 1, given that the experimental dataset has paired amateur and professional recordings from the same singer, what are the main rationales for (a) having a separate timbre encoder module and (b) having SADTW take the outputs of the content encoder (and not the timbre encoder) as input? 2. For results shown in Table 3, how to interpret: (a) For Chinese MOS-Q, NVSB is comparable to GT Mel A. (b) For Chinese and English MOS-V, Baseline and NVSB have overlapping 95% CI.
1. In Figure 1, given that the experimental dataset has paired amateur and professional recordings from the same singer, what are the main rationales for (a) having a separate timbre encoder module and (b) having SADTW take the outputs of the content encoder (and not the timbre encoder) as input?
NIPS_2017_71
NIPS_2017
- The paper is a bit incremental. Basically, knowledge distillation is applied to object detection (as opposed to classification as in the original paper). - Table 4 is incomplete. It should include the results for all four datasets. - In the related work section, the class of binary networks is missing. These networks are also efficient and compact. Example papers are: * XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016 * Binaryconnect: Training deep neural networks with binary weights during propagations, NIPS 2015 Overall assessment: The idea of the paper is interesting. The experiment section is solid. Hence, I recommend acceptance of the paper.
- Table 4 is incomplete. It should include the results for all four datasets.
NIPS_2017_351
NIPS_2017
- As I said above, I found the writing / presentation a bit jumbled at times. - The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed). - I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify. - Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly. - Figure 3 is never referenced unless I missed it. Some things I'm curious about: - What values were learned for the linear coefficients for combining the marginalized potentials in equations (1)? It would be interesting if different modalities took advantage of different potential orders. - I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to swap to the MCB / MCT decision computation modules? - The discussion of using sequential MCB vs a single MCT layers for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed?
- As I said above, I found the writing / presentation a bit jumbled at times.
ICLR_2022_537
ICLR_2022
1. The stability definition needs to be better justified, as the left side can be arbitrarily small under some construction of \tilde{g}. A more reasonable treatment is to make it also lower bounded. 2. It would be good to see a variety of tasks beyond link prediction where PE is important.
1. The stability definition needs to be better justified, as the left side can be arbitrarily small under some construction of \tilde{g}. A more reasonable treatment is to make it also lower bounded.
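One generic way to phrase the two-sided requirement suggested in point 1 (a hypothetical sketch, since the exact form of the paper's stability definition is not reproduced here): if the definition currently only asks for an upper bound of the form
$$ \lVert g(x) - \tilde{g}(x) \rVert \;\le\; C\,\delta, $$
then adding the matching lower bound
$$ c\,\delta \;\le\; \lVert g(x) - \tilde{g}(x) \rVert, \qquad 0 < c \le C, $$
with $\delta$ the size of the perturbation, prevents the left-hand side from being made arbitrarily small by degenerate constructions of $\tilde{g}$, which is what the review objects to.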
NIPS_2020_1391
NIPS_2020
- I wonder how crucial the annealing scheme from the last paragraph in Section 4 is. Especially when $\alpha$ is not decreased to $0$ I imagine this could induce a bias which may be so large that it outweighs the bias reductions attained by using IWAE in the first place. - The only other weakness is related to the clarity of the exposition, especially around the "OVIS_~" estimator (see further details below). ==== EDIT: 2020-08-24 ===== replaced "$\alpha$ is not increased to $1$" by "$\alpha$ is not decreased to $0$" as I had had a typo in this part of my review
- I wonder how crucial the annealing scheme from the last paragraph in Section 4 is. Especially when $\alpha$ is not decreased to $0$ I imagine this could induce a bias which may be so large that it outweighs the bias reductions attained by using IWAE in the first place.
NIPS_2020_486
NIPS_2020
- I wonder what is the total computational complexity compared to other methods (e.g., emerging convolutions). If I imagine the Woodbury flow working on a mobile device, the number of operations could cause a significant power demand. - Following on that, I am worried that the total computational complexity is much higher than for other approaches. This could limit the usability of the proposed transformation.
- I wonder what is the total computational complexity compared to other methods (e.g., emerging convolutions). If I imagine the Woodbury flow working on a mobile device, the number of operations could cause a significant power demand.
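For readers less familiar with the name: the per-layer savings of Woodbury-style transformations rest on the Woodbury identity and the matrix determinant lemma, stated here in generic form (whether the paper uses exactly this parametrization is an assumption):
$$ (A + UCV)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}, \qquad \det(A + UCV) = \det\!\left( C^{-1} + V A^{-1} U \right) \det(C) \det(A). $$
With $U \in \mathbb{R}^{d \times k}$, $V \in \mathbb{R}^{k \times d}$, $k \ll d$, and an easy-to-invert $A$ (e.g., diagonal), the inverse and log-determinant cost roughly $O(dk^2 + k^3)$ instead of $O(d^3)$. The open question raised above is how this per-layer count adds up end-to-end against alternatives such as emerging convolutions, and what it implies for power on mobile hardware.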
FbZSZEIkEU
ICLR_2025
- The experimental reports are lacking in many details about the experimental methodology, making it difficult to be confident that the claims are robust. - The explanations throughout the paper should be clearer to fully communicate the ideas and experiments of the authors. - The S2 hacking hypothesis is quite vague and the authors do not present any deep understanding that would explain the mechanisms by which certain attention heads pay extra attention to the S2 token. - In the experiments on the DoubleIO and other prompt variations, it is unclear at which token positions paths are being ablated, as this is unspecified by the original circuit. - The authors write: “Given the algorithm inferred from the IOI circuit, it is clear that the full model should completely fail on this task”. However, this is a misunderstanding of the original work. The IOI circuit was discovered using mean ablations that keep most of the prompt intact. Therefore Wang et al. don’t expect it to generalize to different prompt formats. - The authors write “In the base IOI circuit, the Induction, Duplicate Token, and Previous Token heads primarily attend to the S2 token” this is incorrect according to Section 3 of Wang et al., 2023. These heads are _active_ at the S2 token, but do not primarily attend to it. - The authors write: "The proposed IOI circuit is shown to perform very well while still being faithful to the full model". In fact, the IOI circuit is known to have severe limitations, as shown in concurrent work by Miller et al. (2024) [[2]](https://arxiv.org/abs/2407.08734). Nitpicks: - In Figure 2, it is not clear what Head 1, 2, 3 and 4 refer to. - The paper should include Figure 2 from Wang et al. 2023 [[1]](https://arxiv.org/pdf/2211.00593#page=4.37) to make it easier to follow discussions about the circuit.
- The authors write “In the base IOI circuit, the Induction, Duplicate Token, and Previous Token heads primarily attend to the S2 token” this is incorrect according to Section 3 of Wang et al., 2023. These heads are _active_ at the S2 token, but do not primarily attend to it.
NIPS_2022_389
NIPS_2022
1). Technically speaking, the contribution of this work is incremental. The proposed pipeline is not that impressive or novel; rather, it seems to be a pack of tricks to improve defense evaluation. 2). Although IoFA is well supported by cited works and described failures, its introduction lacks practical cases; Figures 1 and 2 do not provide example failures and thus do not lead to a better understanding. 3). The reported experimental results appear to support the proposed methods, but case analysis and further studies are missing.
1). Technically speaking, the contribution of this work is incremental. The proposed pipeline is not that impressive or novel; rather, it seems to be a pack of tricks to improve defense evaluation.
NIPS_2019_653
NIPS_2019
of the method. Clarity: The paper has been written in a manner that is straightforward to read and follow. Significance: There are two factors which dent the significance of this work. 1. The work uses only binary features. Real-world data is usually a mix of binary, real, and categorical features. It is not clear if the method is applicable to real and categorical features too. 2. The method does not seem to be scalable, unless a distributed version of it is developed. It's not reasonable to expect that a single instance can hold all the training data that real-world datasets usually contain.
2. The method does not seem to be scalable, unless a distributed version of it is developed. It's not reasonable to expect that a single instance can hold all the training data that real-world datasets usually contain.
NIPS_2020_284
NIPS_2020
1. A major concern that I have is that the authors consider only shifts in annotations as the noise. However, real-world annotations include other types of noise, such as missing annotations or duplicate annotations. The authors do not consider this in their discussion. From the outset, it seems that their current method cannot accommodate these additional noises. From this perspective, I would say that the paper is incomplete in modeling different types of annotation noise. 2. It is not clear why the authors approximate the pdfs phi and Psi with Gaussian distributions. 3. It is not clear why the eta_ri term follows a non-central chi-squared distribution. 4. As far as I understand, small shifts in annotations will not affect performance much, since neural networks can be robust if the receptive field of the network is large enough. Can the authors discuss this in more detail? 5. The proposed method seems to be too specific to the counting problem. Can this method be extended to other problems in vision like object detection?
3. It is not clear why the eta_ri term follows a non-central chi-squared distribution.
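For context on point 3, the standard fact presumably being invoked (this is an assumption about the authors' derivation, not something stated in the review) is that the squared norm of a Gaussian vector with non-zero mean is non-central chi-squared:
$$ X \sim \mathcal{N}(\mu, I_k) \;\Longrightarrow\; \lVert X \rVert^2 \sim \chi^2_k(\lambda), \qquad \lambda = \lVert \mu \rVert^2, $$
so the real question is whether eta_ri is indeed such a squared norm of an (approximately) Gaussian quantity, which ties back to the Gaussian approximation questioned in point 2.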
NIPS_2017_401
NIPS_2017
Weakness: 1. There are no collaborative games in the experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings. 2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and centralized RL where agents share the weights. For instance, Foerster et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016. 3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods. 4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players. Initial Evaluation: This paper offers an analysis of the effectiveness of policy learning by existing approaches with little extension in two-player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility: Appears to be reproducible.
4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players. Initial Evaluation: This paper offers an analysis of the effectiveness of policy learning by existing approaches with little extension in two-player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility: Appears to be reproducible.
NIPS_2018_917
NIPS_2018
- Results on bAbI should be taken with a huge grain of salt and only serve as a unit-test. Specifically, since the bAbI corpus is generated from a simple grammar and sentences follow a strict triplet structure, it is not surprising to me that a model extracting three distinct symbol representations from a learned sentence representation (therefore reverse engineering the underlying symbolic nature of the data) would solve bAbI tasks. However, it is highly doubtful this method would perform well on actual natural language sentences. Hence, statements such as "trained [...] on a variety of natural language tasks" are misleading. The authors of the baseline model "recurrent entity networks" [12] have not stopped at bAbI, but also validated their models on more real-world data such as the Children's Book Test (CBT). Given that RENs solve all bAbI tasks and N2Ns solve all but one, it is not clear to me what the proposed method adds to a table other than a small reduction in mean error. Moreover, the N2N baseline in Table 2 is not introduced or referenced in this paper, so I am not sure which system the authors are referring to here. Minor Comments - L11: LSTMs have only achieved SotA on some NLP tasks, whereas traditional methods still prevail on others, so stating they have achieved SotA in NLP is a bit too vague. - L15: Again, too vague, certain RNNs work well for certain natural language reasoning tasks. See for instance the literature on natural language inference and the leaderboard at https://nlp.stanford.edu/projects/snli/ - L16-18: The reinforcement learning / agent analogy seems a bit out-of-place here. I think you generally point to generalization capabilities which I believe are better illustrated by the examples you give later in the paper (from lines 229 to 253). - Eq. 1: This seems like a very specific choice of combining the information from entity representations and their types. Why is this a good choice? Why not keep the concatenation of the kitty/cat outer product and the mary/person outer product? Why is instead the superposition of all bindings a good design choice? - I believe section four could benefit from a small overview figure illustrating the computation graph that is constructed by the method. - Eq. 7: At first, I found it surprising that three distinct relation representations are extracted from the sentence representation, but it became clearer later with the write, move and backling functions. Maybe already mention at this point what the three relation representations are going to be used for. - Eq. 15: s_question has not been introduced before. I imagine it is a sentence encoding of the question and calculated similarly to Eq. 5? - Eq. 20: A bit more details for readers unfamiliar with bAbI or question answering would be good here. "valid words" here means possible answer words for the given story and question, correct? - L192: "glorot initalization" -> "Glorot initialization". Also, there is a reference for that method: Glorot, X., & Bengio, Y. (2010, March). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 249-256). - L195: α=0.008, β₁=0.6 and β₂=0.4 look like rather arbitrary choices. Where does the configuration for these hyper-parameters come from? Did you perform a grid search? - L236-244: If I understand it correctly, at test time stories with new entities (Alex etc.) are generated.
How does your model support a growing set of vocabulary words given that MLPs have parameters dependent on the vocabulary size (L188-191) and are fixed at test time? - L265: If exploding gradients are a problem, why don't you perform gradient clipping with a high value for the gradient norm to avoid NaNs appearing? Simply reinitializing the model is quite hacky. - p.9: Recurrent entity networks (RENs) [12] is not just an arXiv paper but has been published at ICLR 2017.
- L15: Again, too vague, certain RNNs work well for certain natural language reasoning tasks. See for instance the literature on natural language inference and the leaderboard at https://nlp.stanford.edu/projects/snli/ - L16-18: The reinforcement learning / agent analogy seems a bit out-of-place here. I think you generally point to generalization capabilities which I believe are better illustrated by the examples you give later in the paper (from lines 229 to 253).
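On the Eq. 1 question in the review above (concatenation of individual outer products versus a single superposition), the textbook tensor-product-binding picture the reviewer appears to have in mind can be sketched in a few lines of numpy. This is a generic illustration, not the paper's actual Eq. 1: each binding is an outer product of a filler (entity) and a role (type) vector, the memory is their sum, and unbinding with a role vector recovers the filler up to crosstalk from the other bindings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256                                    # embedding size (illustrative)

def unit(n):
    v = rng.normal(size=n)
    return v / np.linalg.norm(v)           # random unit vectors are nearly orthogonal in high d

kitty, mary = unit(d), unit(d)             # entity (filler) vectors
cat, person = unit(d), unit(d)             # type (role) vectors

# superposition of outer-product bindings: M = kitty x cat^T + mary x person^T
M = np.outer(kitty, cat) + np.outer(mary, person)

# unbinding with a role vector: M @ cat ~= kitty + mary * (person . cat)
recovered = M @ cat
print(np.dot(recovered, kitty))            # close to 1.0
print(np.dot(recovered, mary))             # close to 0.0 (residual crosstalk)
```

The trade-off the review asks about is exactly this: the superposition stores everything in one fixed-size matrix at the cost of such crosstalk, whereas keeping the bindings concatenated avoids interference but grows with the number of entities.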
m8ERGrOf1f
ICLR_2025
1. While the unified low-precision quantization strategy is effective, the scalability of the approach—especially when applied to larger diffusion models beyond those tested—is unclear. Including runtime and memory trade-offs when scaling to more complex models (e.g., SDXL) or higher-resolution tasks would enhance the practical utility of the work. 2. There is a lack of a BF16 baseline when the authors try to demonstrate the effectiveness of the proposed method on FP8 configurations. 3. In Fig. 5, as shown in the third figure, the proposed sensitive-layer selection does not make much difference compared to randomized selection for Stable Diffusion, and the authors do not further discuss this observation. Besides, there is a lack of mathematical or theoretical justification for the proposed Algorithm 1.
3. In Fig. 5, as shown in the third figure, the proposed sensitive-layer selection does not make much difference compared to randomized selection for Stable Diffusion, and the authors do not further discuss this observation. Besides, there is a lack of mathematical or theoretical justification for the proposed Algorithm 1.
ARR_2022_1_review
ARR_2022
- Using original encoders as baselines might not be sufficient. In most experiments, the paper only compares with the original XLM-R or mBERT trained without any knowledge base information. It is unclear whether such encoders being fine-tuned towards the KB tasks would actually perform comparable to the proposed approach. I would like to see experiments like just fine tuning the encoders with the same dataset but the MLM objective in their original pretraining and comparing with them. Such baselines can leverage on input sequences as simple as `<s>X_s X_p X_o </s>` where one of them is masked w.r.t. MLM training. - The design of input formats is intuitive and lacks justifications. Although the input formats for monolingual and cross-lingual links are designed to be consistent, it is hard to tell why the design would be chosen. As the major contribution of the paper, justifying the design choice matters. In other words, it would be better to see some comparisons over some variants, say something like `<s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>` as wrapping tokens in the input sequence has been widely used in the community. - The abstract part is lengthy so some background and comparisons with prior work can be elaborated in the introduction and related work. Otherwise, they shift perspective of the abstract, making it hard for the audience to catch the main novelties and contributions. - In line 122, triples denoted as $(e_1, r, e_2)$ would clearly show its tuple-like structure instead of sets. - In sec 3.2, the authors argue that the Prix-LM (All) model consistently outperforms the single model, hence the ability of leveraging multilingual information. Given the training data sizes differ a lot, I would like to see an ablation that the model is trained on a mix of multilingual data with the same overall dataset size as the monolingual. Otherwise, it is hard to justify whether the performance gain is from the large dataset or from the multilingual training.
- In line 122, triples denoted as $(e_1, r, e_2)$ would clearly show its tuple-like structure instead of sets.
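The two input layouts being contrasted in the second bullet above are easy to make concrete; a tiny illustration (the marker tokens and the example triple are placeholders, not the paper's actual vocabulary):

```python
def plain_format(s, p, o):
    # MLM-style baseline layout mentioned in the review: <s> X_s X_p X_o </s>
    return f"<s> {s} {p} {o} </s>"

def wrapped_format(s, p, o):
    # wrapped-token variant the review suggests comparing against:
    # <s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>
    return f"<s>[S]{s}[S][P]{p}[P][O]{o}[O]</s>"

print(plain_format("Tokyo", "capital of", "Japan"))
print(wrapped_format("Tokyo", "capital of", "Japan"))
```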
NIPS_2021_2247
NIPS_2021
1). Lack of speed analysis: the experiments have compared GFLOPs of different segmentation networks. However, there is no comparison of inference speed between the proposed network and prior work. An improvement in inference speed would be more interesting than reduced FLOPs. 2). For the detail of the proposed NRD, it is reasonable that the guidance maps are generated from the low-level feature maps. And the guidance maps can be predicted from the first-stage feature maps or the second-stage feature maps. It would be better to provide an ablation study on the effect of each. 3). Important references are missing. GFF [1] and EfficientFCN [2] both aim to implement fast semantic segmentation in an encoder-decoder architecture. I encourage the authors to have a comprehensive comparison with these works. [1]. Gated Fully Fusion for Semantic Segmentation, AAAI'20. [2]. EfficientFCN: Holistically-guided Decoding for Semantic Segmentation, ECCV'20. See above. The societal impact is shown on the last page of the manuscript.
1). Lack of speed analysis: the experiments have compared GFLOPs of different segmentation networks. However, there is no comparison of inference speed between the proposed network and prior work. An improvement in inference speed would be more interesting than reduced FLOPs.
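For point 1, wall-clock latency is straightforward to report alongside GFLOPs; a generic, framework-agnostic measurement sketch (the `model` callable and input `x` are assumptions, and on a GPU one would additionally need to synchronize the device before reading the clock):

```python
import time

def mean_latency(model, x, warmup=10, iters=100):
    for _ in range(warmup):                        # warm-up runs amortize lazy init and caching
        model(x)
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    return (time.perf_counter() - start) / iters   # average seconds per forward pass
```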
NIPS_2020_1634
NIPS_2020
1. Optimal quantization is not scalable (which is mentioned in the paper as well). Even with clustering before, it is costly in both N (the number of data points) and M (the dimension). The paper (in abstract and intro) aims to speed up VI by fast convergence, which is needed in the big data/big model setting, but the quantization is a bottleneck for this, which makes the method lose its point. 2. Apart from the scalability, I wonder about the effectiveness in high-dimensional spaces as well, where everything is far away from each other. 3. The experiments are only with very simple, small UCI datasets and very simple/small models (linear regression). It would be great to see more "real-life" experiments. 4. There are also limited baselines. [a] is discussed in the paper but not compared. Only the basic BBVI is compared. It would be good to see at least baselines such as [a] and [b] in the experiments. 5. For Algorithm 2, it would be insanely expensive if the quantization needed to be computed every round, but as explained, with the exponential family it is only needed once. But if it is limited to the exponential family, then the point of the whole BBVI is lost. 6. Small things: line 2, minimize->maximize; can you explicitly discuss the computational complexity of the optimal quantization? [a] Alexander Buchholz, Florian Wenzel, and Stephan Mandt. “Quasi-Monte Carlo Variational Inference”. [b] Stochastic Learning on Imbalanced Data: Determinantal Point Processes for Mini-batch Diversification
1. Optimal quantization is not scalable (which is mentioned in the paper as well). Even with clustering before, it is costly in both N (the number of data points) and M (the dimension). The paper (in abstract and intro) aims to speed up VI by fast convergence, which is needed in the big data/big model setting, but the quantization is a bottleneck for this, which makes the method lose its point.
lja4JMesmC
ICLR_2025
The biggest issue of the paper is the lack of depth. While it ablates the impact of each of the algorithmic components, they authors spent little effort trying to understand why each of them work and to compare them against existing methods. 1. It’s not clear what makes EP successful. - I strongly suspect the performance gain is mostly due to the fine-tuning of the connector module. The critical experiment of simply having both the connector and the LLM (LoRA params) trainable is missing. - Additionally, an experiment comparing EP with prefixing tuning [1] will tell whether it’s necessary to condition the prefix (additional tokens to the LLM’s embedding space) on the image at all to get good performance. Essentially, I need to see experiments showing me EP > fine-tuning the original VLM’s connector + prefix tuning to be convinced it’s novel. - I also don’t buy the claim that fine-tuning the Vision model in VLM will distort vision language alignment at all. If fine-tuning the Vision model is harmful, wouldn’t the trained LoRA weights be more harmful as well? A controlled experiment where the vision encode is also trained is needed. I am confident this will make EP perform even better. - Finally, other works with the same core methodology should be discussed. For example, Graph Neural Prompting [2] builds a knowledge graph based on the prompt and multiple choice candidates and generates a conditional prefix to prompt the LLM. I think the idea is extremely similar to EP. 2. Regarding RDA: this is essentially a fancy way of saying knowledge distillation but no relevant papers are cited. Regarding implementation, the author mentions gradient detachment. If I understood it correctly, this just means the TSM, or the “teacher”, is not trained while the goal is to train the student. Shouldn’t this be the default setting anyway? 3. Contrastive Response Tuning: as part of the core methodology, the paper should compare its effectiveness against existing methods, such as contrastive decoding [3][4]. Issues mentioned above should be addressed. Otherwise this work should aim for a more application-oriented venue. The notations issues. - In equations (1), (2), (3), (5), (6), why is there a min() operator on the left hand side? The author seems to mix it up with the argmin notation. I think the author should remove the min() and avoid argmin() like notation since not all parameters are trained. Minor grammar issues - For example, Takeaway #1: TSM features can prompts (prompt) VLMs to generate desired responses. References: [1] Li, Xiang Lisa, and Percy Liang. "Prefix-Tuning: Optimizing Continuous Prompts for Generation." Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021. [2] Tian, Yijun, et al. "Graph neural prompting with large language models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 17. 2024. [3] Leng, Sicong, et al. "Mitigating object hallucinations in large vision-language models through visual contrastive decoding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [4] Favero, Alessandro, et al. "Multi-modal hallucination control by visual information grounding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
3. Contrastive Response Tuning: as part of the core methodology, the paper should compare its effectiveness against existing methods, such as contrastive decoding [3][4]. Issues mentioned above should be addressed. Otherwise this work should aim for a more application-oriented venue. The notations issues.
ICLR_2023_3291
ICLR_2023
Although this paper uses a formal symbolic description of the proposed method, there is still no framework diagram to help with understanding the method, which makes the narration and implementation of the algorithm slightly harder to follow. Although this paper introduces various attack methods in detail, it does not include more attack methods, such as ISSBA, in the experimental comparison. As this is a novel attack method, the authors should give more experimental comparison and analysis of the attack. 3. The authors mention in the paper that the advantages of the algorithm also include high attack efficiency. But for this part, I do not see much theoretical analysis (convergence) or related experimental proof. I have reservations about this point. What is the main difference between the authors and ISSBA in terms of the formulation of the method? I would like the authors to further explain the contribution in conjunction with the formulas. Some Questions: 1. What is the computational efficiency of extracting the trigger? Unlike previous backdoor attack algorithms, the method needs to analyze and extract data from the entire training dataset. Does this result in exponential time growth as the dataset increases? 2. The effectiveness and problem of the algorithm are that it requires access to the entire training dataset. Have the authors considered how the algorithm should operate effectively when the training dataset is not fully perceptible? Overall: The trigger proposed in this paper is novel, but the related validation experiments are not comprehensive, and the time complexity of the computation and the efficiency of the algorithm are not clearly analyzed. In addition, I expect the authors to further elucidate the technical contribution rather than the form of the attack.
2. The effectiveness and problem of the algorithm are that it requires access to the entire training dataset. Have the authors considered how the algorithm should operate effectively when the training dataset is not fully perceptible? Overall: The trigger proposed in this paper is novel, but the related validation experiments are not comprehensive, and the time complexity of the computation and the efficiency of the algorithm are not clearly analyzed. In addition, I expect the authors to further elucidate the technical contribution rather than the form of the attack.
NIPS_2021_895
NIPS_2021
The description of the method is somewhat unclear and it is hard to understand all the design choices. Some natural baselines and important related work seem to be missing. Major comments: The lack of flexibility of standard GPs is not a new observation as has been approached in the past, possibly most famously by the deep GP [1]. These models have recently become a mainstream tool with easily usable frameworks [e.g., 2], so that it would seems like a natural baseline to compare against. Generally, a lot of related work seems to be missed by this paper. For instance, meta-learning kernels for GPs for few-shot tasks has already been done by [3] and then later also by [4,5,6]. These should probably be mentioned and it should be discussed how the proposed method compares against them. The paper proposes to use CNFs, but these require solving a complex-looking integral (e.g., Eq. 9). It should be discussed how tractable this integral is or how it is approximated in practice. Moreover, it seems like an easier choice would be standard NFs, so it should be discussed why CNFs are assumed to be better here. Possibly, one should also directly compare against a model with a standard NF as an ablation study. In l. 257ff it is claimed that the proposed GP methods are less prone to memorization. How does this compare to the results in [4], where DKT seems to memorize as well? Could the regularization proposed in [4] be combined with the proposed model? Minor comments: In l. 104 it is said that every kernel can be described by a feature space parameterized by a neural network, but this is trivially not true. For instance, for RBF kernels, the RKHS is famously infinite-dimensional, such that one would need an NN with infinite width to represent it. So at most, NNs can represent finite-dimensional RKHSs in practice. This limitation should be made more clear. l. 151 with GP -> with a GP l. 152 use invertible mapping -> use an invertible mapping l. 161 the "marginal log-probability" is more commonly called "log marginal likelihood" or "log evidence" Eq. (8): should it be z instead of y ? In the tables, it would be more helpful to also bolden the fonts of the entries where the error bars overlap with the best entry. [1] Damianou & Lawrence 2012, https://arxiv.org/abs/1211.0358 [2] Dutordoir et al. 2021, https://arxiv.org/abs/2104.05674 [3] Fortuin et al. 2019, https://arxiv.org/abs/1901.08098 [4] Rothfuss et al. 2020, https://arxiv.org/abs/2002.05551 [5] Venkitaraman et al. 2020, https://arxiv.org/abs/2006.07212 [6] Titsias et al. 2020, https://arxiv.org/abs/2009.03228 The limitations of the method are hard to assess, mostly because the choice of CNFs over NFs or any other flexible distribution family is not well motivated and because (theoretical and empirical) comparisons to many relevant related methods are missing. This should be addressed.
104 it is said that every kernel can be described by a feature space parameterized by a neural network, but this is trivially not true. For instance, for RBF kernels, the RKHS is famously infinite-dimensional, such that one would need an NN with infinite width to represent it. So at most, NNs can represent finite-dimensional RKHSs in practice. This limitation should be made more clear. l.
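To substantiate the comment about l. 104 above: in one dimension with unit lengthscale, the RBF kernel already has an explicit infinite-dimensional feature expansion,
$$ k(x, x') = e^{-(x - x')^2/2} = \sum_{n=0}^{\infty} \phi_n(x)\, \phi_n(x'), \qquad \phi_n(x) = \frac{x^n e^{-x^2/2}}{\sqrt{n!}}, $$
so a finite-width network feature map can at best approximate this RKHS, which is exactly the limitation the review asks to be stated explicitly.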
f5juXkyorf
ICLR_2024
* The proposed method is not well-positioned in the literature. It's worth pointing out that the key idea of representing the marginal score as the expectation of scores of distributions conditioned on inputs is actually quite well-known. It has been used, for example, to develop the original denoising score matching objective [1]. It is also used in the literature as "score-interpolation" [2]. I just named a few, but I would recommend the authors do a thorough literature review, as I believe this property is used in many more works. * The definition of the notation \hat{c}_k (barycenters) is missing. * The exponential dependence of the sampling error on T is concerning. Although empirical evidence is provided to justify that this error bound is pessimistic, it also renders the bound unnecessary. Meanwhile, it's unclear if the conclusion that under sigma < 0.4, a large starting T is harmless will generalize to other datasets. * The 3D point cloud experiment is interesting but I don't understand why the proposed method fills the gap there. Could the authors elaborate on this? * The practical utility of the proposed closed-form models is also unclear. Given that the model can only sample from barycenters of data point tuples, is there a clear case where we would prefer such a model over a trained score model? * This is minor, but the readability of section 3 could be greatly improved by not adopting the notation convention used by rectified flow, as the proposed method can be described using the standard diffusion model formulation (where the time is reversed and stochastic transition is used). [1] Vincent, Pascal. "A connection between score matching and denoising autoencoders." Neural computation 23.7 (2011): 1661-1674. [2] Dieleman, Sander, et al. "Continuous diffusion for categorical data." arXiv preprint arXiv:2211.15089 (2022).
* The proposed method is not well-positioned in the literature. It's worth pointing out that the key idea of representing the marginal score as the expectation of scores of distributions conditioned on inputs is actually quite well-known. It has been used, for example, to develop the original denoising score matching objective [1]. It is also used in the literature as "score-interpolation" [2]. I just named a few, but I would recommend the authors do a thorough literature review, as I believe this property is used in many more works.
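The "well-known property" referenced in the first bullet is the identity underlying denoising score matching [1]: the marginal score is a posterior-weighted average of conditional scores,
$$ \nabla_{x_t} \log p_t(x_t) \;=\; \mathbb{E}_{p(x_0 \mid x_t)}\!\left[ \nabla_{x_t} \log p_t(x_t \mid x_0) \right], $$
which follows by differentiating $p_t(x_t) = \int p(x_0)\, p_t(x_t \mid x_0)\, dx_0$ and dividing by $p_t(x_t)$. This is why the bullet asks the paper to position its construction against prior uses of this decomposition rather than present it as new.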
cdwXPlM4uN
ICLR_2024
major concerns: - This paper proposes a knowledge distillation framework from ANN to SNN. The only contribution is that the loss function used here is different from previous work. This contribution is not sufficient for conferences like ICLR. - It is hard to determine whether the usage of this loss function is unique in the literature. In the domain of knowledge distillation in ANNs, there are numerous works studying different types of loss functions. Whether this loss function is original remains in question. - I find the design choice of simply averaging the time dimension in SNN featuremaps inappropriate. The method in this paper is not the real way to compute the similarity matrix. Instead, to calculate the real similarity matrix of SNNs, the authors should flatten the SNN activation to $[B, TCHW]$ and then compute the $B\times B$ covariance matrix. For a detailed explanation, check [1]. - Apart from accuracy, there are not many insightful discoveries for readers to understand the specific mechanism of this loss function. - The experiments are not validated on ImageNet, which largely weakens the empirical contribution of this paper. minor concerns: - " The second is that the neuron state of ANNs is represented in binary format but that of SNNs is represented in float format, leading to the precision mismatch between ANNs and SNNs", wrong value formats of SNNs and ANNs. - The related work should be placed in the main text, rather than the appendix. There is a lot of space left on the 9th page. [1] Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient
- " The second is that the neuron state of ANNs is represented in binary format but that of SNNs is represented in float format, leading to the precision mismatch between ANNs and SNNs", wrong value formats of SNNs and ANNs.
OROKjdAfjs
ICLR_2024
1. How does your linear attention handle the autoregressive decoding? As for training, you can feed the network with a batch of inputs with long token dimensions. But when it comes to the generation phase, I am afraid that only limited tokens are used to generate the next token. Do you then still have benefits for inference? 2. The paper reads like a combination of various tricks, as a lot of techniques were discussed in the previous paper, like LRPE, Flash, and Flash Attention. Especially for the Lightning Attention vs. Flash Attention, I did not find any difference between these two. The gated mechanism was also introduced in the Flash paper. These aspects leave a question about the technical novelty of this paper. 3. It looks like, during training, you are still using the quadratic attention computation order, as indicated in Equ. 10? I suppose it was to handle the masking part. But that loses the benefits of training with linear attention complexity. 4. In terms of evaluation, although in the abstract the authors claim that the linearized LLM extends to 175B parameters, most experiments are conducted on 375M models. For the large parameter size settings, the authors only report the memory and latency cost savings. The accuracy information is missing, without which I find it hard to evaluate the linearized LLMs.
1. How does your linear attention handle the autoregressive decoding? As for training, you can feed the network with a batch of inputs with long token dimensions. But when it comes to the generation phase, I am afraid that only limited tokens are used to generate the next token. Do you then still have benefits for inference?
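For background on question 1: generic (unmodified) causal linear attention can be rewritten as a recurrence over a fixed-size state, so per-token decoding cost does not grow with context length. Below is a minimal numpy sketch of that standard kernelized-attention recurrence; it is not the paper's specific Lightning/gated variant, and the feature map is a placeholder, so whether the paper's formulation preserves this property is exactly the question being asked.

```python
import numpy as np

d = 8
phi = lambda x: np.maximum(x, 0.0) + 1.0      # some positive feature map (placeholder choice)

S = np.zeros((d, d))                          # running sum of phi(k) v^T
z = np.zeros(d)                               # running sum of phi(k)

def decode_step(q, k, v):
    """One generated token: O(d^2) work, independent of how long the context already is."""
    global S, z
    S += np.outer(phi(k), v)
    z += phi(k)
    return phi(q) @ S / (phi(q) @ z + 1e-6)   # attention output for this position

rng = np.random.default_rng(0)
for _ in range(5):
    q, k, v = rng.normal(size=(3, d))
    out = decode_step(q, k, v)
print(out.shape)                              # (d,)
```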
NIPS_2022_1590
NIPS_2022
1) One of the key components is the matching metric, namely, the Pearson correlation coefficient (PCC). However, the assumption that PCC is a more relaxed constraint compared with KL divergence because of its invariance to scale and shift is not convincing enough. The constraint strength of a loss function is defined via its gradient distribution. For example, KL divergence and MSE loss have the same optimal solution while MSE loss is stricter than KL because of stricter punishment according to its gradient distribution. From this perspective, it is necessary to provide the gradient comparison between KL and PCC. 2) The experiments are not sufficient enough. 2-1) There are limited types of teacher architectures. 2-2) Most compared methods are proposed before 2019 (see Tab. 5). 2-3) The compared methods are not sufficient in Tab. 3 and 4. 2-4) The overall performance comparisons are only conducted on the small-scale dataset (i.e., CIFAR100). Large datasets (e.g., ImageNet) should also be evaluated. 2-5) The performance improvement compared with SOTAs is marginal (see Tab. 5). Some students only have a 0.06% gain compared with CRD. 3) There are some typos and some improper presentations. The texts of the figure are too small, especially the texts in Fig.2. Some typos, such as “on each classes” in the caption of Fig. 3, should be corrected. The authors have discussed the limitations and societal impacts of their works. The proposed method cannot fully address the binary classification tasks.
1) One of the key components is the matching metric, namely, the Pearson correlation coefficient (PCC). However, the assumption that PCC is a more relaxed constraint compared with KL divergence because of its invariance to scale and shift is not convincing enough. The constraint strength of a loss function is defined via its gradient distribution. For example, KL divergence and MSE loss have the same optimal solution while MSE loss is stricter than KL because of stricter punishment according to its gradient distribution. From this perspective, it is necessary to provide the gradient comparison between KL and PCC.
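To make the requested gradient comparison concrete, two generic textbook facts are in play (standard forms are used here; whether they match the paper's exact objective is an assumption): Pearson correlation is invariant to shift and, up to sign, to scale, while the softened-KL gradient ties each student probability directly to the teacher's,
$$ \rho(a\mathbf{x} + b,\, \mathbf{y}) = \operatorname{sign}(a)\, \rho(\mathbf{x}, \mathbf{y}), \qquad \frac{\partial}{\partial z_i^{s}}\, \mathrm{KL}\!\left(p^{t} \,\Vert\, p^{s}\right) = p_i^{s} - p_i^{t}, \quad p^{s} = \operatorname{softmax}(z^{s}). $$
So a PCC-based loss leaves a whole affine family of student outputs optimal where KL pins down a single one; the review's point is that the "more relaxed" claim should be argued from the PCC gradient itself rather than from this invariance alone.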
NIPS_2019_810
NIPS_2019
weakness of this paper in its current form is clarity, which hopefully can be improved, as although successor representation is an increasingly popular area in RL, it's also fairly complicated, hence it needs to be well explained (as e.g. is done in Gershman, J. Neurosci 2018). I also have a few more technical comments: - It's not exactly clear where is reward in section 2. Tabular case and rewards being linear in the state representation are mentioned; however, how exactly this is done should be explained more explicitly (or at least referred to where it's explained in the supplementary information - currently SI has information about parameters, algorithm and task settings details, but not methodological explanations) - It is mentioned that the mixture model is learned by gradient descent - it would be nice to see further discussion about how exactly this is done and why that is biologically realistic (as gradient descent is not something typically performed in the brain). - It would be nice to see not only summary statistics, but also typical trajectories performed by the model (and other candidate models) at different stages of learning - It is mentioned that epsilon = 0 works best for BSR, but in section 4.2 it's stated that for the puddle world epsilon = 0.2 was used for all models - why is that? Normally when comparing different models/algorithms, effort should be taken to find the best performing parameters (or more generally most suitable formalisations) for each model. - What exactly is the correlation coefficient in section 5.1 (0.90 or 0.95) between? - In Fig. 4, is it possible that GPI with noise added could reproduce the data similarly well or are there other measures to show that GPI cannot have as good fit with behavioural data (e.g. behavioural trajectories? time to goal?) - Finally this approach seems to be suitable for modelling pattern separation tasks, for which there is also behavioural data available - it would be nice to have some discussion on this. - There are a number of typos throughout the paper, which although don't obscure meaning should be corrected in the final version.
- In Fig. 4, is it possible that GPI with noise added could reproduce the data similarly well or are there other measures to show that GPI cannot have as good fit with behavioural data (e.g. behavioural trajectories? time to goal?) - Finally this approach seems to be suitable for modelling pattern separation tasks, for which there is also behavioural data available - it would be nice to have some discussion on this.
ICLR_2021_318
ICLR_2021
weakness, though this is shared in RIM as well, and not that relevant to what is being evaluated/investigated in the paper. Decision: This paper makes an algorithmic contribution to the systematic generalization literature, and many in the NeurIPS community who are interested in this literature would benefit from having this paper accepted to the conference. I'm in favour of acceptance. Questions to authors: 1. Is there a specific reason why you reported results on knowledge transfer (i.e. section 4.3) only on a few select environments? 2. As mentioned in the “weak points” section, it would be nice if you could elaborate on 3. Is it possible that the caption of Figure 4 is misplaced? That figure is referenced in Section 4.1 (Improved Sample Efficiency), but the caption suggests it has something to do with better knowledge transfer. 4. If you have the resources, I would be very interested to see how the “small learning rate for attention parameters” benchmark (described above) would compare with the proposed approach. 5. In Section 4, 1st paragraph, you write “Do the ingredients of the proposed method lead to […] a better curriculum learning regime[…]”. Could you elaborate on what you mean by this? [1] Beaulieu, Shawn, et al. "Learning to continually learn." arXiv preprint arXiv:2002.09571 (2020).
4. If you have the resources, I would be very interested to see how the “small learning rate for attention parameters” benchmark (described above) would compare with the proposed approach.
NIPS_2019_873
NIPS_2019
--- I think human studies in interpretability research are mis-represented at L59. * These approaches don't just ask people whether they think an approach is trustworthy. They also ask humans to do things with explanations and that seems to have a better connection to whether or not an explanation really explains model behavior. This follows the version of interpretability from [1]. This paper laments a lack of theoretical foundation to interpretability approaches (e.g., at L241,L275-277) and it acknowledges at multiple points that we don't know what ground truth for feature importance estimates should look like. Doesn't a person have to interpret an explanation at some point a model for it to be called interpretable? It seems like human studies may offer a way to philosophically ground interpretability, but this part of the paper mis-represents that research direction in contrast with its treatment of the rest of the related work. Minor evaluation problems: * Given that there are already multiple samples for all these experiments, what is the variance? How significant are the differences between rankings? I only see this as a minor problem because the differences on the right of figure 4 are quite large and those are what matter most. * I understand why more baseline estimators weren't included: it's expensive. It would be interesting to incorporate lower frequency visualizations like Grad-CAM. These can sometimes give significantly different performance (e.g., as in [3]). I expect it may have significant impact here because a more coarse explanation (e.g., 14x14 heatmap) may help avoid noise that comes from the non-smooth, high frequency, per-pixel importance of the explanations investigated. This seems further confirmed by the visualizations in figure 1 which remove whole objects as pointed out at L264. The smoothness of coarse visualization method seems like it should do something similar, so it would further confirm the hypothesis about whole objects implied at L264. * It would be nice to summarize ROAR into one number. It would probably have much more impact that way. One way to do so would be to look at the area under the test accuracy curves of figure 4. Doing so would obscure richer insights that ROAR would provide, but this is a tradeoff made by any aggregate statistic. Presentation: * L106: This seems to carelessly resolve a debate that the paper was previously careful to leave open (L29). Why can't it be that the distribution has changed? Do any experiments disentangle changes in distribution from removal of information? Things I didn't understand: * L29: I didn't get this till later in the paper. I think I do now, but my understanding might change again after the rebuttal. More detail here would be useful. * L85: Wouldn't L1 regularization be applied to the weights? Is that feature selection? What steps were actually taken in the experiments used in this paper? Did the ResNet50 used have L1 regularization? * L122: What makes this a bit unclear is that I don't know what is and what is not a random variable. Normally I would expect some of these (epsilon, eta) to be constants. Suggestions --- * It would be nice to know a bit more about how ROAR is implemented. Were the new datasets dynamically generated? Were they pre-processed and stored? * Say you start re-training from the same point. Train two identical networks with different random seeds. How similar are the importance estimates from these networks (e.g. using rank correlation similarity)? 
How similar are the sets of the final 10% of important pixels identified by ROAR across different random seeds? If they're not similar then whatever importance estimator isn't even consistent with itself in some sense. This could be thought of as an additional sanity check and it might help understand why the baseline estimators considered don't do well. [1]: Doshi-Velez, F., & Kim, B. (2017). A Roadmap for a Rigorous Science of Interpretability. ArXiv, abs/1702.08608. Final Evaluation --- Quality: The experiments were thorough and appropriately supported the conclusions. The paper really only evaluate importance estimators using ROAR. It doesn't really evaluate ROAR itself. I think this is appropriate given the strong motivation the paper has and the lack of concensus about what methods like ROAR should be doing. Clarity: The paper could be clearer in multiple places, but it ultimately gets the point across. Originality: The idea is similar to [30] as cited. ROAR uses a similar principle with re-training and this makes it new enough. Significance: This evaluation could become popular, inspire future metrics, and inspire better importance estimators. Overall, this makes a solid contribution. Post-rebuttal Update --- After reading the author feedback, reading the other reviews, and participating in a somewhat in-depth discussion I think we reached some agreement, though not everyone agreed about everything. In particular, I agree with R4's two recommendations for the final version. These changes would address burning questions about ROAR. I still think the existing contribution is a pretty good contribution to NeurIPS (7 is a good rating), though I'm not quite as enthusiastic as before. I disagree somewhat with R4's stated main concern, that ROAR does not distinguish enough between saliency methods. While it would be nice to have more analysis about the differences between these methods, ROAR is only one way to analyze these explanations and one analysis needn't be responsible for identifying differences between all the approaches it analyzes.
* L106: This seems to carelessly resolve a debate that the paper was previously careful to leave open (L29). Why can't it be that the distribution has changed? Do any experiments disentangle changes in distribution from removal of information? Things I didn't understand:
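Two of the suggestions in the review above (summarizing ROAR by the area under the degradation curve, and checking rank-correlation consistency of importance estimates across seeds) are simple to compute; a generic sketch with made-up numbers standing in for real results:

```python
import numpy as np
from scipy.stats import spearmanr

# test accuracy after retraining with 0%, 10%, ..., 90% of the "important" pixels removed
fractions = np.linspace(0.0, 0.9, 10)
accuracy = np.array([0.76, 0.74, 0.71, 0.66, 0.60, 0.55, 0.48, 0.40, 0.33, 0.25])
roar_auc = np.trapz(accuracy, fractions)      # single-number summary; lower means sharper degradation

# consistency of one estimator's per-pixel importance scores across two training seeds
imp_seed1 = np.random.rand(10_000)            # placeholder scores from seed 1
imp_seed2 = np.random.rand(10_000)            # placeholder scores from seed 2
rho, _ = spearmanr(imp_seed1, imp_seed2)      # rank correlation of the two rankings
print(roar_auc, rho)
```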
NIPS_2021_2050
NIPS_2021
1. The transformer has been adopted for lots of NLP and vision tasks, and it is no longer novel in this field. Although the authors made a modification to the transformer, i.e. cross-layer, it does not bring much insight from a machine learning perspective. Besides, in the ablation study (Tables 4 and 5), the self-cross attention brings limited improvement (<1%). I don’t think this should be considered a significant improvement. It seems that the main improvements over other methods come from using a naïve transformer instead of adding the proposed modification. 2. This work only focuses on a niche task, which is more suitable for a CV conference like CVPR rather than a machine learning conference. The audience should be more interested in techniques that can work for general tasks, like general image retrieval. 3. The proposed method uses AdamW with cosine lr for training, while the compared methods only use Adam with a fixed lr. Directly comparing with the numbers in their papers is unfair. It would be better to reproduce their results using the same setting, since most of the recent methods have their code released.
3. The proposed method uses AdamW with cosine lr for training, while the compared methods only use Adam with a fixed lr. Directly comparing with the numbers in their papers is unfair. It would be better to reproduce their results using the same setting, since most of the recent methods have their code released.
ICLR_2021_2846
ICLR_2021
Weakness: There are some concerns the authors should further address: 1) The transductive inference stage is essentially an ensemble of a series of models. In particular, the proposed data perturbation can be considered a common data augmentation. What if such an ensemble is applied to the existing transductive methods? And is the flipping already adopted in the data augmentation before the inputs are fed to the network? 2) During meta-training, only the selected single path is used in one transductive step; what about the performance of optimizing all paths simultaneously, given that during inference all paths are utilized? 3) What about the performance of MCT (pair + instance)? 4) Why are the results in Table 6 not aligned with Table 1 (MCT-pair)? Also, what about the ablation studies of MCT without the adaptive metrics? 5) Though this is not necessary, I'm curious about the performance of the SOTA method (e.g. LST) combined with the adaptive metric.
5) Though this is not necessary, I'm curious about the performance of the SOTA method (e.g. LST) combined with the adaptive metric.
NIPS_2016_283
NIPS_2016
The weaknesses of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error-bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also, the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100). The authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval, that could spoil the results. Another remark: - In my opinion, the claim that evolutional dropout addresses internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization, on the other hand, standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor:
- First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard".
p2P1Q4FpEB
EMNLP_2023
1. The performance gains are not very high; for most of the metrics the difference between the baseline (w/o caption + w/o warmup) and the best approach (with caption + warmup) is less than 1%. 2. The paper lacks error analysis and model output examples.
1. The performance gains are not very high; for most of the metrics the difference between the baseline (w/o caption + w/o warmup) and the best approach (with caption + warmup) is less than 1%.
NIPS_2017_486
NIPS_2017
1. The paper is motivated by using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information – which phrase is incorrect, what is the correct phrase, and what is the type of the mistake. Using these additional pieces is more than just natural language feedback. So I would like the authors to be clearer about this in the introduction. 2. The improvements of the proposed model over the RL-without-feedback model are not so high (row 3 vs. row 4 in Table 6), in fact a bit worse for BLEU-1. So, I would like the authors to verify if the improvements are statistically significant. 3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback? 4. In the Figure 1 caption, the paper mentions that in training the feedback network, along with the natural language feedback sentence, the phrase marked as incorrect by the annotator and the corrected phrase are also used. However, from equations 1-4, it is not clear where the information about the incorrect phrase and corrected phrase is used. Also L175 and L176 are not clear. What do the authors mean by “as an example”? 5. L216-217: What is the rationale behind using cross entropy for the first (P - floor(t/m)) phrases? How is the performance when using the reinforcement algorithm for all phrases? 6. L222: Why is the official test set of MSCOCO not used for reporting results? 7. FBN results (Table 5): can the authors please shed light on why the performance degrades when using the additional information about missing/wrong/redundant? 8. Table 6: can the authors please clarify why the MLEC accuracy using ROUGE-L is so low? Is that a typo? 9. Can the authors discuss the failure cases of the proposed (RLF) network in order to guide future research? 10. Other errors/typos: a. L190: complete -> completed b. L201, “We use either … feedback collection”: incorrect phrasing c. L218: multiply -> multiple d. L235: drop “by” Post-rebuttal comments: I agree that proper evaluation is critical. Hence I would like the authors to verify that the baseline results [33] are comparable and the proposed model is adding on top of that. So, I would like to change my rating to marginally below the acceptance threshold.
3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback?
rtx8B94JMS
ICLR_2024
From my point of view, there are 2 main weaknesses in this submission (for details, see below). 1. The method and the experiments are insufficiently described, and I have some questions in this regard. However, I am convinced that the manuscript can be updated to be much clearer. 2. The empirical evaluation is of limited scope. Qualitatively, the method is evaluated on 2 toy problems (fOU & Hurst index); quantitatively on a single synthetic dataset (stochastic moving MNIST). For the latter, only two baseline models from 2018 and 2020 are compared to. In consequence, the usefulness of the method is not established. On the plus side, there are 3 ablations / further studies. Minor weaknesses. - The method appears to be inefficient, with a training time of 39 hours on an NVIDIA GeForce RTX 4090 per model trained on stochastic moving MNIST. - A detailed comparison with Tong et al. (2022) (who also learn approximations to fBM) is missing. So far, it is only stated that Tong et al. did not apply their model to video data and that it is completely different. A clear illustration of the conceptual differences and a comparison of the pros and cons of each approach would be appreciated (summary in main part + mathematical details in supplementary material). # Summary For me, this is a borderline submission. On the one hand, the proposed method is novel, significant and of theoretical interest. On the other hand, there are clarity issues, a weak empirical evaluation and no clear use case. Expecting the clarity issues to be resolved, I rate the submission as marginally above the acceptance threshold, as the theoretical strengths outweigh the empirical flaws. ---- # Details on major weaknesses ## Point 1 (clarity). **Regarding the method.** After reading the method and experiment section multiple times, I still have no idea how to implement it. I am aware of the provided source code; nevertheless, I found the paper to be insufficient in this regard. What I got from section E and Figure 4 is that: - First, there is an encoding step that returns $h$, a sequence of vectors/matrices over time. Somehow, these vectors are used to compute $\omega$. I have no idea how this $\omega$ is related to the optimal one from Prop. 5. - $h$ is given to a temporal convolution layer that returns $g$ and this $g$ has as many 'frames' as the input and is used as input to the control function $u$. - The control, drift, and diffusion functions are implemented as neural networks. - An SDE solution is numerically approximated with a Stratonovich–Milstein solver. - $\omega$ is used in the decoding step; I do not understand why or how. - Where do the approximation processes $Y$ enter? How are they parametrized; is $\gamma$ as in Prop. 5? Are they integrated separately from $X$? - The ELBO contains an expectation over sample paths. How many paths are sampled to estimate the mean? - Fig. 6: Why do the samples from the prior always show a 4 and a 7? Does the prior depend on the observations? **Regarding Moving MNIST.** What precisely is the task / evaluation protocol in the experiment on the stoch. moving MNIST dataset? I did not see it specified, but from the overall description it appears that a sequence of 25 frames is given to the model and the task is to return the same 25 frames again (with the goal of learning a generative model). ## Point 2 (empirical evaluation) - The method is not evaluated on real world data. - Quantitatively, the method is only evaluated on one synthetic dataset (stoch. moving MNIST). - While the method is motivated by "*Unfortunately, for many practical scenarios, BM falls short of capturing the full complexity and richness of the observed real data, which often contains long-range dependencies, rare events, and intricate temporal structures that cannot be faithfully represented by a Markovian process"*, the moving MNIST dataset is not of this kind. It is not long-range (only 25 frames) and there is no correlated noise. - The method is only compared to 2 baselines (SVG, SLRVP) on moving MNIST. - Table 1 does not show standard deviations. Overall, this would be a far stronger submission if the experiments were more extensive. This includes: - Evaluation on more tasks and datasets, and specifically on datasets where this method is expected to shine, i.e., in the presence of correlated noise. The pendulum dataset of [Becker et al., Factorized inference in high-dimensional deep feature space, ICML 2019] would be one example. - Comparison with more baselines. In particular, more recent / state-of-the-art methods that do not model a differential equation and the fBM model by Tong et al. ---- There is a typo in Proposition 1: "Markov rocesses"
- Table 1 does not show standard deviations. Overall, this would be a far stronger submission if the experiments were more extensive. This includes:
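For readers unfamiliar with the fOU / Hurst-index toy problems mentioned in this review, the following is an illustrative sketch, not the paper's model: it simulates fractional Brownian motion exactly from its covariance and recovers the Hurst index from the scaling of increment variances. The grid size, horizon, and lags are arbitrary choices.

```python
import numpy as np

def fbm_paths(hurst, n_steps=400, T=1.0, n_paths=20, seed=0):
    """Exact fBM samples on a grid via a Cholesky factor of the covariance
    Cov(B_s, B_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)       # avoid t = 0 (degenerate row)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    z = rng.standard_normal((n_steps, n_paths))
    return t, L @ z                                # shape (n_steps, n_paths)

def estimate_hurst(paths, lags=(1, 2, 4, 8, 16)):
    """Var(B_{t+k*dt} - B_t) scales like (k*dt)^{2H}; fit 2H on a log-log scale."""
    v = [np.var(paths[k:] - paths[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2

t, paths = fbm_paths(hurst=0.7)
print("estimated Hurst index:", estimate_hurst(paths))  # should come out near 0.7
```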
NIPS_2018_583
NIPS_2018
Weakness: - (4) or (5) are nonconvex saddle point problems, so there is no convergence guarantee for Alg 1. Moreover, as a subroutine for (7), it is not clear how many iterations (the hyperparameter n) should be taken to ensure that (7) converges. Previously in structured SVM, people noticed that approximate inference could make the learning diverge. - Performance boost due to more parameters? In Tab 1, 2, 3, if we think carefully, LinearTop and NLTop add additional parameters, while Unary performs much worse compared to the numbers reported e.g. in [14], where they used a different and probably better neural network. This raises a question: if we use a better Unary baseline, is there still a performance boost? - In Table 1, the accuracies are extremely poor: testing accuracy = 0.0? Something must be wrong in this experiment. - Scalability: H(x,c) outputs the whole potential vector of length O(K^m), where m is the cardinality of the largest factor, which could be extremely long as an input for T. - The performance of NLTop is way behind the Oracle (which uses GT as input for T). Does this indicate that (3) is poorly solved, or is it due to the learning itself? [*] N Komodakis Efficient training for pairwise or higher order CRFs via dual decomposition. CVPR 2011. [**] D Sontag et al. Learning efficiently with approximate inference via dual losses. ICML 2010.
- Performance boost due to more parameters? In Tab 1, 2, 3, if we think carefully, LinearTop and NLTop add additional parameters, while Unary performs much worse compared to the numbers reported e.g. in [14], where they used a different and probably better neural network. This raises a question: if we use a better Unary baseline, is there still a performance boost?
NIPS_2022_2041
NIPS_2022
• The paper is a bit hard to follow, and several sections needed more than one reading pass. I suggest improving the structure (introduction->method->experiments), and putting more focus on the IEM in Fig 3, which is in my view the main figure in this paper. Also, improve the visualization of Fig. 7 and Fig. 10. It would be good to show a few failure cases of your model (e.g., on the FG or medical datasets). Perhaps other FG factors are needed? (e.g., good continuation?).
• The paper is a bit hard to follow, and several sections needed more than one reading pass. I suggest improving the structure (introduction->method->experiments), and putting more focus on the IEM in Fig 3, which is in my view the main figure in this paper. Also, improve the visualization of Fig. 7 and Fig. 10.
NIPS_2017_302
NIPS_2017
1. Related Work: As the available space allows, the paper would benefit from a more detailed discussion of related work, not only describing the related works but also discussing their differences from the presented work. 2. Qualitative results: To underline the success of the work, the paper should include some qualitative examples, comparing its generated sentences to those of related work. 3. Experimental setup: For the COCO image captions, the paper does not rely on the official training/validation/test split used in the COCO captioning challenge. 3.1. Why do the authors not use the entire training set? 3.2. It would be important for the automatic evaluation to report results using the evaluation server and report numbers on the blind test set (for the human eval it is fine to use the validation set). Conclusion: I hope the authors will include the COCO caption evaluation server results in the rebuttal and final version as well as several qualitative results. Given the novelty of the approach and strong experiments without major flaws, I recommend accepting the paper. It would be interesting if the authors would comment on which non-sequence problems their approach can be applied to, and how.
1. Related Work: As the available space allows, the paper would benefit from a more detailed discussion of related work, not only describing the related works but also discussing their differences from the presented work.
NIPS_2020_1012
NIPS_2020
- In scenarios where the success rate of the attack is less than 50%, a simple ensemble method could be used to defend against the attack. It seems that the success rate of the attack against the Google model is around 20%, which could be circumvented by using multiple models. - The attack seems to be unstable when changing the architecture. For instance, the attack on VGG does not succeed as much as the attack on other architectures. - On the novelty of the paper: The ideas behind the attack seem to be simple and borrow ideas from the meta-learning literature. However, this is not necessarily a bad thing, as it shows simple ideas can be used to attack models. - The experiments of the paper are done only on neural networks and image classification tasks. It would be interesting to see the performance of the attack on other architectures and classification tasks.
- The experiments of the paper are done only on neural networks and image classification tasks. It would be interesting to see the performance of the attack on other architectures and classification tasks.
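To make the ensemble argument in this review concrete: if each of k independently trained models is fooled by a transferred adversarial example with probability p < 0.5, a majority vote over the ensemble is fooled far less often. A small calculation under the (strong and idealized) independence assumption:

```python
from math import comb

def majority_fooled(p, k):
    """Probability that a strict majority of k models is fooled, assuming each
    model is fooled independently with probability p."""
    need = k // 2 + 1
    return sum(comb(k, i) * p ** i * (1 - p) ** (k - i) for i in range(need, k + 1))

for k in (1, 3, 5, 7):
    print(k, round(majority_fooled(0.2, k), 4))
# with p = 0.2: ~0.20 (k=1), ~0.10 (k=3), ~0.06 (k=5), ~0.03 (k=7)
```

In practice transferred attacks are correlated across models, so the real drop would be smaller than these idealized numbers suggest.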
NIPS_2016_283
NIPS_2016
The weaknesses of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error-bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also, the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100). The authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval, that could spoil the results. Another remark: - In my opinion, the claim that evolutional dropout addresses internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization, on the other hand, standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor:
- I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100). The authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval, that could spoil the results. Another remark:
NIPS_2021_1343
NIPS_2021
Weakness - I am not convinced that a transformer free of locality bias is indeed the best option. In fact, due to the limited speed of information propagation, neighboring agents should naturally have more impact on each other than far-away nodes. I hope the authors can explain further why the transformer's lack of locality is not a concern here. - Due to the above, I feel graph networks seem to capture this better than the too-free transformer, and their lack of global context / the “over-squashing” issue might be mitigated by adding non-local blocks (e.g., check “Non-Local Graph Neural Networks” or several other works proposing “global attention” for GNNs). - The authors also claimed “traditional GNNs” cannot handle direction-feature coupling: that is not true. See the recent work “MagNet: A Neural Network for Directed Graphs”, and I am sure there is more prior art. The authors are asked to consider whether those directional GNNs could also suit their task well. - The transformer is introduced as a centralized agent. Its computational overhead can become formidable when the network gets larger. The authors should discuss how they plan to address the scalability bottleneck.
- I am not convinced that a transformer free of locality bias is indeed the best option. In fact, due to the limited speed of information propagation, neighboring agents should naturally have more impact on each other than far-away nodes. I hope the authors can explain further why the transformer's lack of locality is not a concern here.
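One way to act on the locality concern raised in this review, while keeping a transformer, is to bias the attention logits by pairwise distance so that nearby agents receive more weight and global context is still available. This is only an illustrative sketch; the linear distance penalty and the value of lam are my choices, not anything from the paper:

```python
import numpy as np

def distance_biased_attention(q, k, v, positions, lam=0.5):
    """Single-head attention whose logits are penalized by pairwise distance.
    q, k, v: (n, d) arrays; positions: (n, p) agent coordinates."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    # nearby agents (which physically influence each other sooner) get more weight
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    logits = logits - lam * dist
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 8))
pos = rng.standard_normal((6, 2))
out = distance_biased_attention(x, x, x, pos, lam=0.5)
```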
NIPS_2019_1145
NIPS_2019
The paper has the following main weaknesses: 1. The paper starts with the objective of designing fast label aggregation algorithms for a streaming setting. But it doesn’t spend any time motivating the applications in which such algorithms are needed. All the datasets used in the empirical analysis are static datasets. For the paper to be useful, the problem considered should be well motivated. 2. It appears that the output from the algorithm depends on the order in which the data are processed. This should be clarified. 3. The theoretical results are presented under the assumption that the predictions of FBI converge to the ground truth. Why should this assumption be true? It is not clear to me how this assumption is valid for finite R. This needs to be clarified/justified. 4. The takeaways from the empirical analysis are not fully clear. It appears that the big advantage of the proposed methods is their speed. However, the experiments don’t seem to be explicitly making this point (the running times are reported in the appendix; perhaps they should be moved to the main body). Plus, the paper is lacking the key EM benchmark. Also, perhaps the authors should use a different dataset in which speed is most important to showcase the benefits of this approach. Update after the author response: I read the author rebuttal. I suggest the authors add the clarifications they detailed in the rebuttal to the final paper. Also, the motivating crowdsourcing application where speed is really important is not completely clear to me from the rebuttal. I suggest the authors clarify this properly in the final paper.
2. It appears that the output from the algorithm depends on the order in which the data are processed. This should be clarified.
84n3UwkH7b
ICLR_2024
1. While the mitigation strategies aim to reduce memorization, it's unclear what impact they might have on the overall performance of the model. Often, there's a trade-off between reducing a particular behavior and maintaining high performance. If these mitigation strategies significantly impair the model's utility, it might deter their adoption. 2. As stated in the paper, a weakness of the proposed method is the lack of interpretability in the detection strategy of memorized prompts. The current approach requires the model owners to select an empirical threshold based on a predetermined false positive rate, but the outcomes generated lack clear interpretability. This lack of clarity can make it difficult for model owners to fully understand and trust the detection process. The authors acknowledge that developing a method that produces interpretable p-values could significantly assist model owners by providing a confidence score quantifying the likelihood of memorization. 3. Advising users on modifying or omitting trigger tokens might be effective in theory, but in practice, it could be cumbersome. Users might need to understand what these tokens are, why they need to modify them, and how they affect the output. This could make the user experience less intuitive, especially for those unfamiliar with the inner workings of AI models. 4. The paper assumes that all prompts can be modified or that users will be willing to modify them. In real-world scenarios, some prompts might be non-negotiable, and changing them might not be an option. 5. While the paper suggests that the method is computationally efficient, implementing the strategies during the training and inference phases might still introduce computational or operational overheads for model owners.
1. While the mitigation strategies aim to reduce memorization, it's unclear what impact they might have on the overall performance of the model. Often, there's a trade-off between reducing a particular behavior and maintaining high performance. If these mitigation strategies significantly impair the model's utility, it might deter their adoption.
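The detection strategy this review describes selects an empirical threshold from a predetermined false positive rate. A generic sketch of that calibration step, assuming the model owner has detection scores for a held-out set of known non-memorized prompts; the score distribution and the 1% target are placeholders, not values from the paper:

```python
import numpy as np

def threshold_at_fpr(scores_non_memorized, target_fpr=0.01):
    """Choose the score cutoff so that only `target_fpr` of the known
    non-memorized prompts would be flagged as memorized."""
    return np.quantile(scores_non_memorized, 1.0 - target_fpr)

rng = np.random.default_rng(0)
null_scores = rng.normal(0.0, 1.0, size=10_000)   # placeholder null distribution
tau = threshold_at_fpr(null_scores, target_fpr=0.01)
print("threshold:", tau, "score 3.2 flagged as memorized:", 3.2 > tau)
```

As the review notes, this yields a hard decision but no interpretable p-value; reporting the empirical quantile of a new score under the null distribution would be one step toward that.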
ICLR_2023_4079
ICLR_2023
• There are considerable similarities with another paper [1] (see references below). The work in this paper is not novel and no citation is given to [1]. The EM approach and the regeneration approach are both mentioned in [1]. • The experimental results are not convincing. The paper reports that joint learning on the CIFAR-100 dataset gives 39.97% accuracy when tested on class incremental learning. However, there seem to be more accurate results obtained on the CIFAR-100 dataset for class incremental learning. For example, the paper [2] obtains 58.4% accuracy. In addition, the memory size is 10 times lower than this setup. The experiments do not include the paper [2]. Other relevant papers [3, 4] whose accuracies are listed higher for this dataset are neither compared nor referenced. • Although it is stated that 6-fold cross-validation is used for every dataset, the reason for cross-validation is unclear, because the papers this work compares to did not use cross-validation. Therefore, it is not clear why 6-fold cross-validation is required for this problem. • The notation for results is not clear. The paper claims the improvement for CIFAR-10 is 3%p, but it is not clear what %p stands for. • Although there is a reference to [2] and other types of rehearsal-based continual learning methods, the experiments do not contain any of the rehearsal methods. • The setup for the experiments is missing. The code is not provided. • The effect of memory size is ambiguous. An ablation study on the effect of memory size should be added to justify the memory size selection. • In Table 1, experimental results for the CelebA dataset are mentioned in the caption. However, there are no experiments with the CelebA dataset. [1] Overcoming Catastrophic Forgetting with Gaussian Mixture Replay (Pfülb and Geppert, 2021) [2] Gdumb: A simple approach that questions our progress in continual learning (Prabhu et al., 2020) [3] Supervised Contrastive Learning (Khosla et al., 2020) [4] Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network (Kim and Choi, 2021)
• Although it is stated that 6-fold cross-validation is used for every dataset, the reason for cross-validation is unclear, because the papers this work compares to did not use cross-validation. Therefore, it is not clear why 6-fold cross-validation is required for this problem.
ARR_2022_51_review
ARR_2022
1. The choice of the word-alignment baseline seems odd. The abstract claims that “Word alignment has proven to benefit many-to-many neural machine translation (NMT)”, which is supported by (Lin et al., 2020). However, the method proposed by Lin et al. was not used as a baseline. Instead, the paper compared to an older baseline proposed by (Garg et al., 2019). Besides, this baseline by Garg et al. (+align) seems to contradict the claim in the abstract since it always performs worse than the baseline without word-alignment (Table 2). If, for some practical reason, the baseline of (Lin et al., 2020) can’t be used, this needs to be explained clearly. 2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the results (i.e. it’s unclear which proposed method, (+w2w) or (+FA), is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary. 3. If the claim that better word-alignment improves many-to-many translation is true, why does the proposed method have no impact on the MLSC setup (Table 3)? Section 4 touches on this point but provides no explanation. 1. Please provide more details for the sentence retrieval setup (how sentences are retrieved, from what corpus, and is it the same as or different from the setup in (Artetxe and Schwenk, 2019)?). From the paper, “We found that for en-kk, numbers of extracted word pairs per sentence by word2word and FastAlign are 1.0 and 2.2, respectively. In contrast, the numbers are 4.2 and 20.7 for improved language pairs”. Is this because word2word and FastAlign fail for some language pairs or is this because there are few alignments between these language pairs? Would a better aligner improve results further? 2. For Table 3, are the non-highlighted cells not significant or not significantly better? If it’s the latter, please also highlight cells where the proposed approaches are significantly worse. For example, from Kk to En, +FA is significantly better than mBART (14.4 vs 14.1, difference of 0.3) and thus the cell is highlighted. However, from En to Kk, the difference between +FA and mBART is -0.5 (1.3 vs 1.8) but this cell is not highlighted.
2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the results (i.e. it’s unclear which proposed method, (+w2w) or (+FA), is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary.
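For context on the “extracted word pairs per sentence” numbers quoted in this review: FastAlign normally emits one line of Pharaoh-format i-j links per sentence pair, so the statistic reduces to counting links per line. A small sketch; the file name is hypothetical:

```python
def avg_links_per_sentence(alignment_path):
    """Average number of aligned word pairs per sentence in Pharaoh format,
    e.g. a line like '0-0 1-2 2-1' contributes three links."""
    total_links, n_sentences = 0, 0
    with open(alignment_path, encoding="utf-8") as f:
        for line in f:
            links = [tok for tok in line.split() if "-" in tok]
            total_links += len(links)
            n_sentences += 1
    return total_links / max(n_sentences, 1)

# print(avg_links_per_sentence("en-kk.fast_align.out"))  # hypothetical output file
```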
FgEM735i5M
EMNLP_2023
1. This paper presents a highly effective engineering method for ReC. However, it should be noted that the proposed framework incorporates some combinatorial and heuristic aspects. In particular, the Non-Ambiguous Query Generation procedure relies on a sophisticated filtering template. It would be helpful if the author could clarify the impact of these heuristic components. 2. Since the linguistic expression rewriting utilizes the powerful GPT-3.5 language model, it would be interesting to understand the extent of randomness and deviation that may arise from the influence of GPT-3.5. Are there any studies or analyses on this aspect?
1. This paper presents a highly effective engineering method for ReC. However, it should be noted that the proposed framework incorporates some combinatorial and heuristic aspects. In particular, the Non-Ambiguous Query Generation procedure relies on a sophisticated filtering template. It would be helpful if the author could clarify the impact of these heuristic components.
NIPS_2020_960
NIPS_2020
- The writing of this paper is very misleading. First of all, it claims that it can be trained only using a single viewpoint of the object. In fact, all previous differentiable rendering techniques can be trained using a single view of the object at training time. However, the reason why multi-view images are used for training in prior works is that single-view images usually lead to ambiguity in the depth direction. The proposed method also suffers from this problem -- it cannot resolve the ambiguity of depth using a single image either. The distance-transformed silhouette can only provide information on the xy plane - the shape perpendicular to the viewing direction. - I doubt the proposed method can be trained without using any camera information (Line 223, the so-called "knowledge of CAD model correspondences"). Without knowing the viewpoint, how is it possible to perform ray marching? How do you know where the ray comes from? - The experiments are not comprehensive or convincing. 1) The comparisons do not seem fair. The performance of DVR is far worse than that in the original DVR paper. Is DVR trained and tested on the same data? What is the code used for evaluation? Is it from the original authors or a reimplementation? 2) Though it could be interesting to see how SoftRas performs, it is not very fair to compare SoftRas here as it uses a different 3D representation -- mesh. It is well known that a mesh representation cannot model arbitrary topology. Thus, it is not surprising to see it outperformed. Since this paper works on implicit surfaces, it would be more interesting to compare with more state-of-the-art differentiable renderers for implicit surfaces, i.e. [26], [27], [14], or at least the baseline approach [38]. However, no direct comparisons with these approaches are provided, making it difficult to verify the effectiveness of the proposed approach.
- I doubt the proposed method can be trained without using any camera information (Line 223, the so-called "knowledge of CAD model correspondences"). Without knowing the viewpoint, how is it possible to perform ray marching? How do you know where the ray comes from?
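To make the reviewer's point about ray marching concrete: constructing the rays in the first place already requires camera intrinsics and a pose, which is why training "without any camera information" is hard to reconcile with ray marching. A standard pinhole-camera sketch, not the paper's code; the focal length and identity pose are placeholders:

```python
import numpy as np

def pixel_rays(H, W, focal, cam_to_world):
    """Ray origins and directions in world coordinates for every pixel of an
    H x W pinhole camera; cam_to_world is a 4x4 pose matrix."""
    i, j = np.meshgrid(np.arange(W), np.arange(H))        # pixel grid, shape (H, W)
    dirs_cam = np.stack([(i - W / 2) / focal,
                         -(j - H / 2) / focal,
                         -np.ones_like(i, dtype=float)], axis=-1)
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    dirs_world = dirs_cam @ R.T
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    origins = np.broadcast_to(t, dirs_world.shape)
    return origins, dirs_world   # both (H, W, 3); neither is defined without the pose

origins, dirs = pixel_rays(64, 64, focal=80.0, cam_to_world=np.eye(4))
```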
NIPS_2018_813
NIPS_2018
(the simulations seem to address whether or not an improvement is actually seen in practice), the paper would benefit from a discussion of the fact that the targeted improvement is in the (relatively) small n regime. 4. The paper would benefit from a more detailed comparison with related work, in particular making a detailed comparison to the time complexity and competitiveness of prior art. Minor: 1. The proofs repeatedly refer to the Cauchy inequality, but it might be better given audience familiarity to refer to it as the Cauchy-Schwarz inequality. Post-rebuttal: I have read the authors' response and am satisfied with it. I maintain my vote for acceptance.
4. The paper would benefit from a more detailed comparison with related work, in particular making a detailed comparison to the time complexity and competitiveness of prior art. Minor:
NIPS_2022_532
NIPS_2022
• It seems that the policy is learned to imitate ODA, one of the methods for solving the MOIP problem, but the paper does not clearly show how the presented method improves the performance and computation speed of the solution compared to just using ODA. • In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining such data, or on how the performance changes depending on the size of the labeled data.
• It seems that the policy is learned to imitate ODA, one of the methods for solving the MOIP problem, but the paper does not clearly show how the presented method improves the performance and computation speed of the solution compared to just using ODA.
ICLR_2023_91
ICLR_2023
1. Some points of confusion. In the Parameter Transformation part, you state that “The number of adaptation parameters is given by k(2d^2 + d + 2). This is typically much smaller than the number of MDN parameters (weights and biases from all layers)”. In a previous part, you state that “The MDN output with all the mixture parameters has dimension p = k(d(d + 1)/2 + d + 1).” Why is the number of adaptation parameters much smaller than the number of MDN parameters? 2. Some figures are not self-explanatory. For instance, in Figure 4, the lines for No adapt and Finetune are covered by other lines, without additional explanation. 3. More experiments. How does unsupervised domain adaptation perform based on the baseline model, and how does it compare with the proposed approach?
2. Some figures are not self-explanatory. For instance, in Figure 4, the lines for No adapt and Finetune are covered by other lines, without additional explanation.
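To put rough numbers on the first question in this review: the adaptation parameters scale only with k and d, while the MDN's weights also scale with its hidden widths, which is where the gap comes from. A back-of-the-envelope check; the hidden sizes and the assumption that the MDN input is d-dimensional are hypothetical, chosen only to illustrate the order of magnitude:

```python
k, d = 5, 10                        # mixture components, target dimension
adapt = k * (2 * d ** 2 + d + 2)    # adaptation parameters, per the quoted formula
p = k * (d * (d + 1) // 2 + d + 1)  # MDN output dimension, per the quoted formula

hidden = [256, 256]                 # hypothetical MDN hidden layer widths
widths = [d] + hidden + [p]         # hypothetical: MDN input assumed d-dimensional
mdn = sum(w_in * w_out + w_out for w_in, w_out in zip(widths[:-1], widths[1:]))

print(adapt, p, mdn)                # adapt = 1060, p = 330, mdn ≈ 153k here
```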
NIPS_2020_3
NIPS_2020
- Unlike the Tandem Model [4,5] and cVAE-based methods, the proposed method uses gradient updates and is therefore slow. The authors acknowledge this in the manuscript and study the method as a function of the inference budget. - The sampling performed to obtain different initializations x_0 seems important for convergence to the optimum. This is not evaluated carefully in the experiments on the proposed benchmarks, except for Tab. 1 in the supplementary, where it is compared to sampling from a uniform distribution.
- The sampling performed to obtain different initializations x_0 seems important for convergence to the optimum. This is not evaluated carefully in the experiments on the proposed benchmarks, except for Tab. 1 in the supplementary, where it is compared to sampling from a uniform distribution.
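To illustrate why the choice of initializations x_0 matters for gradient-based inverse design of this kind, here is a generic multi-restart sketch on a toy objective. The objective, the uniform sampler, the learning rate, and the budget are placeholders, not the paper's setup:

```python
import numpy as np

def objective(x):
    """Toy non-convex surrogate mismatch with several local minima."""
    return np.sum(np.sin(3 * x) + 0.1 * x ** 2)

def grad(x, eps=1e-5):
    """Central finite-difference gradient of the toy objective."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (objective(x + e) - objective(x - e)) / (2 * eps)
    return g

def invert(x0, steps=200, lr=0.05):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x, objective(x)

rng = np.random.default_rng(0)
# different initialization schemes reach different optima under a fixed budget
inits = [rng.uniform(-3, 3, size=2) for _ in range(8)]   # e.g. uniform sampling
best_x, best_f = min((invert(x0) for x0 in inits), key=lambda r: r[1])
print("best objective over restarts:", best_f)
```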
NIPS_2020_541
NIPS_2020
Some concerns: 1. There is too much overlapping information in Table 1, Table 2, and Figure 1. Figure 1 includes all the information presented in Tables 1 and 2. 2. What is the relationship between the proposed method and [9] and [16]? Why do the authors compare the proposed method with [9] first, then [16]? Why do the authors only compare the computational cost with [9], but not [16]? Is the computational cost a big contribution of this paper? Is it a big issue in a practical scenario? That part is unclear to me, and there is no further discussion of it in the rest of the paper. 3. Why does the proposed column smoothing method produce better results compared with the block smoothing method? 4. The accuracy drop on the ImageNet dataset is a concern, which makes the proposed method impractical.
2. What is the relationship between the proposed method and [9] and [16]? Why do the authors compare the proposed method with [9] first, then [16]? Why do the authors only compare the computational cost with [9], but not [16]? Is the computational cost a big contribution of this paper? Is it a big issue in a practical scenario? That part is unclear to me, and there is no further discussion of it in the rest of the paper.
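For readers unfamiliar with the column vs. block smoothing comparison raised here: as I understand derandomized smoothing for patch attacks (details may differ from [9], [16], and the paper), the idea is to classify many copies of the image in which everything outside a thin column (or a square block) is ablated, then take a vote. A rough sketch with non-overlapping bands and a fake model, only to show the shape of the procedure:

```python
import numpy as np

def column_smoothed_predict(model, image, band=4):
    """Majority vote over copies of `image` (C, H, W) that keep only a vertical
    band of width `band` and zero out the remaining columns."""
    _, _, W = image.shape
    votes = {}
    for start in range(0, W, band):
        ablated = np.zeros_like(image)
        ablated[:, :, start:start + band] = image[:, :, start:start + band]
        label = model(ablated)                 # model is assumed to return a class id
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

fake_model = lambda x: int(x.mean() > 0.05)    # toy stand-in classifier
print(column_smoothed_predict(fake_model, np.random.rand(3, 32, 32)))
```

Block smoothing would replace the vertical band with a square region; which shape wins presumably depends on how the retained pixels align with class-discriminative structure, which is exactly the reviewer's question.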
ARR_2022_311_review
ARR_2022
__1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy across 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing. Why is that? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, rather than only in the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Are the smoothed representations all calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which is visually inconsistent. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they degrade performance. 3. I suggest that Listing 1 reflect the process of sending interpolated_repr into the task model to get the final representation.
__3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, rather than only in the low-resource regime, are also encouraged.
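For reference on question 4.2: in standard mixup the interpolation weight lambda is drawn per example from a Beta(alpha, alpha) distribution, so the quantity usually reported is the choice of alpha rather than a fixed lambda. A generic sketch of mixing two smoothed representations this way; the shapes and alpha are illustrative, not the paper's setting:

```python
import numpy as np

def mixup_pair(repr_a, repr_b, label_a, label_b, alpha=0.2, rng=None):
    """Interpolate two smoothed representations and their one-hot labels
    with lambda ~ Beta(alpha, alpha), as in standard mixup."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    mixed_repr = lam * repr_a + (1 - lam) * repr_b
    mixed_label = lam * label_a + (1 - lam) * label_b
    return mixed_repr, mixed_label, lam

rng = np.random.default_rng(0)
a, b = rng.random((16, 768)), rng.random((16, 768))   # e.g. (seq_len, hidden) shapes
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
repr_mix, label_mix, lam = mixup_pair(a, b, ya, yb, alpha=0.2, rng=rng)
```

Small alpha concentrates lambda near 0 or 1 (mild mixing), while alpha near 1 mixes more aggressively, which is why the reviewer's question about sensitivity to this hyperparameter is reasonable.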
rJhk7Fpnvh
EMNLP_2023
1. It was quite unclear how the experiments performed in this work corroborate the authors’ theory. In the random premise task, how could the authors ensure that the random predicate indeed resulted in NO-ENTAIL? I understand that such random sampling has a very small probability of resulting in something that is not NO-ENTAIL, but given that many predicates have synonyms, as well as other predicates that they entail (which, by proxy, the current hypothesis might then also entail), it feels crucial to ensure that NO-ENTAIL was indeed the case for all the instances (as this is not the train set, but rather the evaluation set). 2. Additionally, it was not clear how the generic argument task and the random argument task proved what the authors claimed. All in all, the whole dataset transformation and the ensuing experimental setup felt very cumbersome, and not very clear.
2. Additionally, it was not clear how the generic argument task and the random argument task proved what the authors claimed. All in all, the whole dataset transformation and the ensuing experimental setup felt very cumbersome, and not very clear.
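Regarding the concern in point 1: one way to make the random-premise construction safer is to filter sampled predicates against known synonyms and entailments of the original predicate before labeling the pair NO-ENTAIL. A sketch built around a hypothetical entailment lexicon; the function names and the toy lexicon are assumptions, not from the paper:

```python
import random

def sample_no_entail_predicate(original, candidates, entailment_lexicon, rng=random):
    """Sample a replacement predicate that is neither a synonym of, nor entailed
    by, the original predicate according to `entailment_lexicon` (a dict mapping
    each predicate to the set of predicates it entails, including synonyms)."""
    blocked = entailment_lexicon.get(original, set()) | {original}
    pool = [c for c in candidates if c not in blocked]
    if not pool:
        raise ValueError("no safe replacement predicate available")
    return rng.choice(pool)

lexicon = {"purchase": {"buy", "acquire", "obtain"}}          # toy lexicon
print(sample_no_entail_predicate("purchase",
                                 ["buy", "sell", "paint", "acquire"],
                                 lexicon))
```

The quality of such a filter depends entirely on the coverage of the entailment lexicon, which is presumably why the reviewer asks for an explicit guarantee on the evaluation set rather than relying on low collision probability.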