paper_id: string (length 10-19)
venue: string (15 classes)
focused_review: string (length 7-10.2k)
point: string (length 47-690)
ARR_2022_12_review
ARR_2022
I feel the design of NVSB and some experimental results need more explanation (more information in the section below). 1. In Figure 1, given that the experimental dataset has paired amateur and professional recordings from the same singer, what are the main rationales for (a) having a separate timbre encoder module and (b) having SADTW take the outputs of the content encoder (and not the timbre encoder) as input? 2. For the results shown in Table 3, how should we interpret the following: (a) for Chinese MOS-Q, NVSB is comparable to GT Mel A; (b) for Chinese and English MOS-V, Baseline and NVSB have overlapping 95% CIs.
1. In Figure 1, given that the experimental dataset has paired amateur and professional recordings from the same singer, what are the main rationales for (a) having a separate timbre encoder module and (b) having SADTW take the outputs of the content encoder (and not the timbre encoder) as input?
ACL_2017_37_review
ACL_2017
Weak results/summary of "side-by-side human" comparison in Section 5. Some disfluency/agrammaticality. - General Discussion: The article proposes a principled means of modeling utterance context, consisting of a sequence of previous utterances. Some minor issues: 1. Past turns in Table 1 could be numbered, making the text associated with this table (lines 095-103) less difficult to ingest. Currently, readers need to count turns from the top when identifying references in the authors' description, and may wonder whether "second", "third", and "last" imply a side-specific or global enumeration. 2. Some reader confusion may be eliminated by explicitly defining what "segment" means in "segment level", as occurring on line 269. Previously, on line 129, this seemingly same thing was referred to as "a sequence-sequence [similarity matrix]". The two terms appear to be used interchangeably, but it is not clear what they actually mean, despite the text in section 3.3. It seems the authors may mean "word subsequence" and "word subsequence to word subsequence", where "sub-" implies "not the whole utterance", but I am not sure. 3. Currently, the variable symbol "n" appears to be used to enumerate words in an utterance (line 306), as well as utterances in a dialogue (line 389). The authors may choose two different letters for these two different purposes, to avoid confusing readers going through their equations. 4. The statement "This indicates that a retrieval based chatbot with SMN can provide a better experience than the state-of-the-art generation model in practice." at the end of section 5 appears to be unsupported. The two approaches referred to are deemed comparable in 555 out of 1000 cases, with the baseline better than the proposed method in 238 out of the remaining 445 cases. The authors are encouraged to assess and present the statistical significance of this comparison. If it is weak, their comparison at best permits the claim that their proposed method is no worse (rather than "better") than the VHRED baseline. 5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text. 6. There is a pervasive use of "to meet", as in "a response candidate can meet each utterace" on line 280, which is difficult to understand. 7. Spelling: "gated recurrent unites"; "respectively" on line 133 should be removed; punctuation on lines 186 and 188 is exchanged; "baseline model over" -> "baseline model by"; "one cannot neglects".
5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text.
ACL_2017_489_review
ACL_2017
1) The main weakness for me is the statement of the specific hypothesis, within the general research line, that the paper is probing: I found it very confusing. As a result, it is also hard to make sense of the kind of feedback that the results give to the initial hypothesis, especially because there are a lot of them and they don't all point in the same direction. The paper says: "This paper pursues the hypothesis that an accurate model of referential word meaning does not need to fully integrate visual and lexical knowledge (e.g. as expressed in a distributional vector space), but at the same time, has to go beyond treating words as independent labels." The first part of the hypothesis I don't understand: What is it to fully integrate (or not to fully integrate) visual and lexical knowledge? Is the goal simply to show that using generic distributional representation yields worse results than using specific, word-adapted classifiers trained on the dataset? If so, then the authors should explicitly discuss the bounds of what they are showing: Specifically, word classifiers must be trained on the dataset itself and only word classifiers with a sufficient amount of items in the dataset can be obtained, whereas word vectors are available for many other words and are obtained from an independent source (even if the cross-modal mapping itself is trained on the dataset); moreover, they use the simplest Ridge Regression, instead of the best method from Lazaridou et al. 2014, so any conclusion as to which method is better should be taken with a grain of salt. However, I'm hoping that the research goal is both more constructive and broader. Please clarify. 2) The paper uses three previously developed methods on a previously available dataset. The problem itself has been defined before (in Schlangen et al.). In this sense, the originality of the paper is not high. 3) As the paper itself also points out, the authors select a very limited subset of the ReferIt dataset, with quite a small vocabulary (159 words). I'm not even sure why they limited it this way (see detailed comments below). 4) Some aspects could have been clearer (see detailed comments). 5) The paper contains many empirical results and analyses, and it makes a concerted effort to put them together; but I still found it difficult to get the whole picture: What is it exactly that the experiments in the paper tell us about the underlying research question in general, and the specific hypothesis tested in particular? How do the different pieces of the puzzle that they present fit together? - General Discussion: [Added after author response] Despite the weaknesses, I find the topic of the paper very relevant and also novel enough, with an interesting use of current techniques to address an "old" problem, REG and reference more generally, in a way that allows aspects to be explored that have not received enough attention. The experiments and analyses are a substantial contribution, even though, as mentioned above, I'd like the paper to present a more coherent overall picture of how the many experiments and analyses fit together and address the question pursued. - Detailed comments: Section 2 is missing the following work in computational semantic approaches to reference: Abhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Pado. 2015. Distributional vectors encode referential attributes. Proceedings of EMNLP, 12-21 Aurelie Herbelot and Eva Maria Vecchi. 2015. 
Building a shared world: mapping distributional to model-theoretic semantic spaces. Proceedings of EMNLP, 22–32. 142 how does Roy's work go beyond early REG work? 155 focusses links 184 flat "hit @k metric": "flat"? Section 3: please put the numbers related to the dataset in a table, specifying the image regions, number of REs, overall number of words, and number of object names in the original ReferIt dataset and in the version you use. By the way, will you release your data? I put a "3" for data because in the reviewing form you marked "Yes" for data, but I can't find the information in the paper. 229 "cannot be considered to be names" ==> "image object names" 230 what is "the semantically annotated portion" of ReferIt? 247 why don't you just keep "girl" in this example, and more generally the head nouns of non-relational REs? More generally, could you motivate your choices a bit more so we understand why you ended up with such a restricted subset of ReferIt? 258 which 7 features? ( list) How did you extract them? 383 "suggest that lexical or at least distributional knowledge is detrimental when learning what a word refers to in the world": How does this follow from the results of Frome et al. 2013 and Norouzi et al. 2013? Why should cross-modal projection give better results? It's a very different type of task/setup than object labeling. 394-395 these numbers belong in the data section Table 1: Are the differences between the methods statistically significant? They are really numerically so small that any other conclusion to "the methods perform similarly" seems unwarranted to me. Especially the "This suggests..." part (407). Table 1: Also, the sim-wap method has the highest accuracy for hit @5 (almost identical to wac); this is counter-intuitive given the @1 and @2 results. Any idea of what's going on? Section 5.2: Why did you define your ensemble classifier by hand instead of learning it? Also, your method amounts to majority voting, right? Table 2: the order of the models is not the same as in the other tables + text. Table 3: you report cosine distances but discuss the results in terms of similarity. It would be clearer (and more in accordance with standard practice in CL imo) if you reported cosine similarities. Table 3: you don't comment on the results reported in the right columns. I found it very curious that the gold-top k data similarities are higher for transfer+sim-wap, whereas the results on the task are the same. I think that you could squeeze more information wrt the phenomenon and the models out of these results. 496 format of "wac" Section 6 I like the idea of the task a lot, but I was very confused as to how you did and why: I don't understand lines 550-553. What is the task exactly? An example would help. 558 "Testsets" 574ff Why not mix in the train set examples with hypernyms and non-hypernyms? 697 "more even": more wrt what? 774ff "Previous cross-modal mapping models ... force...": I don't understand this claim. 792 "larger test sets": I think that you could even exploit ReferIt more (using more of its data) before moving on to other datasets.
5) The paper contains many empirical results and analyses, and it makes a concerted effort to put them together; but I still found it difficult to get the whole picture: What is it exactly that the experiments in the paper tell us about the underlying research question in general, and the specific hypothesis tested in particular? How do the different pieces of the puzzle that they present fit together?
ICLR_2023_91
ICLR_2023
1. Some confusions. In the Parameter Transformation part, you state that "The number of adaptation parameters is given by $k(2d^2 + d + 2)$. This is typically much smaller than the number of MDN parameters (weights and biases from all layers)." In a previous part you state that "The MDN output with all the mixture parameters has dimension $p = k(d(d+1)/2 + d + 1)$." Why is the number of adaptation parameters much smaller than the number of MDN parameters? 2. Some figures are not self-explanatory. For instance, in Figure 4, the lines for No adapt and Finetune are covered by other lines, without additional explanation. 3. More experiments. How does unsupervised domain adaptation perform based on the baseline model, and how does it compare with the proposed approach?
2. Some figures are not self-explanatory. For instance, in Figure 4, the lines for No adapt and Finetune are covered by other lines, without additional explanation.
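As a rough, hypothetical illustration of the parameter counts questioned in the ICLR_2023_91 review above: the adaptation-parameter formula counts far fewer values than a full MDN once hidden layers are included, though more than the output dimension p alone. The values of k, d, and the hidden sizes below are assumptions for illustration, not numbers from the paper.

```python
# Hypothetical values; k, d and the hidden sizes are assumptions, not from the paper.
k, d = 5, 10

adaptation_params = k * (2 * d**2 + d + 2)           # k(2d^2 + d + 2), as quoted in the review
mdn_output_dim = k * (d * (d + 1) // 2 + d + 1)      # p = k(d(d+1)/2 + d + 1), output-layer width only

# "Number of MDN parameters" in the quoted text means all weights and biases, which also
# depends on the (unspecified) hidden architecture, e.g. two 256-unit layers:
layer_sizes = [d, 256, 256, mdn_output_dim]          # assuming a d-dimensional MDN input
mdn_total_params = sum(a * b + b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

print(adaptation_params, mdn_output_dim, mdn_total_params)  # 1060, 330, 153418 for these choices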
ACL_2017_779_review
ACL_2017
There are many sentences in the abstract and in other places in the paper where the authors stuff too much information into a single sentence. This could be avoided. One can always use an extra sentence to be more clear. There could have been a section where the actual method used is explained in more detail. This explanation is glossed over in the paper. It's non-trivial to guess the idea from reading the sections alone. During test time, you need the source-pivot corpus as well. This is a major disadvantage of this approach. This is played down - in fact it's not mentioned at all. I would strongly encourage the authors to mention this and comment on it. - General Discussion: This paper uses knowledge distillation to improve zero-resource translation. The techniques used in this paper are very similar to those proposed in Yoon Kim et al. The innovative part is that they use it for doing zero-resource translation. They compare against other prominent works in the field. Their approach also eliminates the need to do double decoding. Detailed comments: - Lines 21-27 - the authors could have avoided this complicated structure for two simple sentences. Line 41 - Johnson et al. has SOTA on English-French and German-English. Lines 77-79 - there is no evidence provided as to why the combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Evidence in the literature seems to suggest the opposite. Lines 416-420 - the two lines here are repeated; they were first mentioned in the previous paragraph. Line 577 - Figure 2, not 3!
- Lines 21-27 - the authors could have avoided this complicated structure for two simple sentences. Line 41 - Johnson et al. has SOTA on English-French and German-English. Lines 77-79 - there is no evidence provided as to why the combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Evidence in the literature seems to suggest the opposite. Lines 416-420 - the two lines here are repeated; they were first mentioned in the previous paragraph. Line 577 - Figure 2, not 3!
ARR_2022_89_review
ARR_2022
1. The experiments are held on a private dataset and the exact setup is impossible to reproduce. 2. A minor point: a few-shot setting would be a more realistic setup for this task, as domain-specific TODOs are easy to acquire; however, I agree that the current setup is adequate as well. 3. More error analysis could be useful, especially on the public dataset, as its data could be included without any restrictions, e.g., error types/examples? patterns? Examples where non-contextualized embeddings outperform contextualized ones, or even LITE? I urge the authors to release at least some part of the dataset to the wider public, or under some end-user agreement. Comments: 1. I suggest the authors focus their comparison on the word2vec baselines (currently in the appendix) instead of Sentence-BERT, as the latter does not show good performance on short texts. It seems that non-contextualized embeddings are more suitable for the task. 2. Maybe it makes more sense to try out models pre-trained on conversations, e.g., text from Twitter or natural language conversations.
2. A minor point: a few-shot setting would be a more realistic setup for this task, as domain-specific TODOs are easy to acquire; however, I agree that the current setup is adequate as well.
zBrjRswpkg
ICLR_2025
1) There is no clear presentation of the motivation and contributions of the paper. 2) The theoretical analysis is limited; aspects such as convergence analysis and constraint violations related to constraint learning are not addressed. 3) The overall presentation of the paper might benefit from improvement, as it does not clearly convey its main claims and contains some expression errors. 4) The experimental results do not seem to adequately support the theoretical analysis. For example, Figure 3 shows significant constraint violations for CCPG w/ PC. 5) The paper lacks additional necessary experiments, including comparison experiments, ablation studies, and hyperparameter analyses. 6) The baselines and experimental environments in the paper are too few to demonstrate the validity of the method. 7) There is no code or detailed implementation description provided to support the reproducibility of the results.
5) The paper lacks additional necessary experiments, including comparison experiments, ablation studies, and hyperparameter analyses.
WC9yjSosSA
EMNLP_2023
- The reported experimental results cannot strongly demonstrate the effectiveness of the proposed method. - In Table 1, for the proposed method, only 6 of the 14 evaluation metrics achieve SOTA performance. - In Table 2, for the proposed method, only 8 of the 14 evaluation metrics achieve SOTA performance. In addition, under the setting of "Twitter-2017 $\rightarrow$ Twitter-2015", why does the proposed method achieve the best overall F1 while not achieving the best F1 for every single type? - In Table 3, for the proposed method, 9 of the 14 evaluation metrics achieve SOTA performance, which means that when some modules are ablated, the performance of the proposed method improves. Furthermore, the performance improvement that adding a certain module brings is not obvious. - In line 284, a transformer layer with self-attention is used to capture the intra-modality relation for the text modality. However, there are many self-attention transformer layers in BERT. Why not use the attention scores from the last self-attention transformer layer? - In line 322, softmmax -> softmax - Will the coordinates of $b_d$ exceed the scope of the patches?
- In Table 2, for the proposed method, only 8 of the 14 evaluation metrics achieve SOTA performance. In addition, under the setting of "Twitter-2017 $\rightarrow$ Twitter-2015", why does the proposed method achieve the best overall F1 while not achieving the best F1 for every single type?
ARR_2022_303_review
ARR_2022
- Citation type recognition is limited to two types –– dominant and reference –– which belies the complexity of the citation function, which is a significant line of research by other scholars. However, this is more a choice by the research team to limit the scope of the research. - Relies on supplemental space to contain the paper. The paper is not truly independent given this problem (esp. S3.1 reference to Sup. Fig. 6) and again later as noted with the model comparison and other details of the span vs. sentence investigation. - The previous report of SciBERT was removed, but this somewhat exacerbates the earlier problem in v1, where the analyses of the outcomes of the models were too cursory and unsupported by deeper analyses. However, this isn't very fair to list as a weakness because the current paper simply doesn't mention this. - Only having two annotators for the dataset is a weakness, since it's not clear how the claims might generalise, given such a small sample. - Summative demographics are inferable but not mentioned in the text. Table 1's revised caption mentions 2.9K paragraphs as the size. This is a differential review, given that I previously reviewed the work in the Dec 2021 version submitted to ARR. There are minor changes to the introduction section, lengthening the introduction and moving the related work section to the more traditional position, right after the introduction. There are no rebuttals or notes from the authors to interpret what has been changed from the previous submission, which could have been furnished to ease reviewer burden in checking (I had to read both the new and old manuscripts side by side and align them myself). Many figures could be wider given the margins for the column. I understand you want to preserve space to make up for the new additions to your manuscript, but wider figures would help legibility. Minor changes were made in S3.3 to incorporate more connections to prior work. S4.1's model design was elaborated into subsections, and S5.2.1 adds an introduction to LED. 462: RoBERTa-base
- Relies on supplemental space to contain the paper. The paper is not truly independent given this problem (esp. S3.1 reference to Sup. Fig. 6) and again later as noted with the model comparison and other details of the span vs. sentence investigation.
NIPS_2016_93
NIPS_2016
- The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone down the intro and not call this language learning. It is rather feedback-driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what is explained in the last paragraph of the paper) so that the policy is not fixed? Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model are not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward-prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbI. They could solve all the sub-tasks with their final model. More discussion is required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know in which cases such a model fails.
- The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone down the intro and not call this language learning. It is rather feedback-driven QA in the form of a dialog.
NIPS_2016_43
NIPS_2016
Weakness: 1. The organization of this paper could be further improved, for example by giving more background on the proposed method and bringing the description of the related literature forward. 2. It would be good to see some failure cases and related discussion.
2. It would be good to see some failure cases and related discussion.
et5l9qPUhm
ICLR_2025
- Model Assumptions: I am unsure of the modelling of synthetic data and how well it translates into practice. Specifically, the authors model synthetic data using a label shift, assuming that the data (X) marginal remains the same. However, this seems unrealistic for autoregressive training (a key experiment in the paper), where the input tokens for next-token generation come from the synthetic distribution. - Experimental Details: The theoretical results establish a strong dependence on the quality of synthetic data. However, the experiments with real data (MNIST/GPT-2) do not provide quantitative metrics to measure the degradation in the synthetic data source (either the accuracy of the MNIST classifier or the perplexity/goodness scores of the trained GPT-2 generator), which makes it hard to ascertain which paradigm (in fig. 1) the experiments align best with or what level of degradation in practice results in the observed trend. - Minor Issues - Typos: Line 225 (v \in R^{m}), line 299 (synthetic data P2), line 398 (represented by stars) - Suggestions for Clarity: - Akin to theorem 1, is it possible to present a simplified version of theorem 2 for the general audience? As it is, definition 2 and theorem 2 are hard to digest just on their own. - Line 481, before stating the result, can the authors explain in words the process of iterative mixing proposed in Ferbach et al., 2024? It would make the manuscript more self-contained. - Missing Citation: Line 424, for the MNIST dataset, please include the citation for "Gradient Based Learning Applied to Document Recognition", LeCun et al., 1998 - Visualization: For fig. 1, please consider putting the test-error y-axes on the same scale. Right now, it is hard to compare the error values or the slope in subplot 1 to those in 2 and 3.
- Akin to theorem 1, is it possible to present a simplified version of theorem 2 for the general audience? As it is, definition 2 and theorem 2 are hard to digest just on their own.
ARR_2022_311_review
ARR_2022
__1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy among 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing. Why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full dataset, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Are the smoothed representations all calculated based on pre-trained BERT, even when the text smoothing method is adapted to the GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which is visually inconsistent. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they correspond to degraded performance. 3. I suggest that Listing 1 reflect the process of sending interpolated_repr into the task model to get the final representation.
2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they correspond to degraded performance.
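As background for the mixup-lambda question in the ARR_2022_311 review above, here is a minimal sketch of standard mixup-style interpolation between a one-hot representation and a smoothed representation; the Beta-distributed lambda and all names are generic assumptions, not details taken from the paper under review.

```python
import numpy as np

def mixup_interpolate(onehot_repr, smoothed_repr, alpha=0.2, rng=np.random.default_rng(0)):
    """Standard mixup-style interpolation with lambda ~ Beta(alpha, alpha), as in the
    original mixup recipe; the paper under review may fix or parameterise lambda differently."""
    lam = rng.beta(alpha, alpha)
    interpolated_repr = lam * smoothed_repr + (1.0 - lam) * onehot_repr
    return interpolated_repr  # would then be fed into the task model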
NIPS_2016_221
NIPS_2016
weakness: 1. To my understanding, two aspects which are the keys to the segmentation performance are: (1) The local DNN evaluation of shape descriptors in terms of energy, and (2) The back-end guidance of (super)voxel agglomeration. Although the experiments showed gains of the proposed method over GALA, it is not yet clear enough which part is the major contributor to this gain. The original paper of GALA used methods different from this paper (3D-CNN) to generate edge probability maps. Is the edge map extraction framework in this paper + GALA a fair enough baseline? It would be great if such edge maps could also be visualized. 2. The proposed method to some extent is not that novel. The idea of generating or evaluating segmentation masks with DNNs has been well studied by the vision community in many general image segmentation tasks. In addition, the idea of greedily guiding the agglomeration of (super)voxels by evaluating energies pretty much resembles the bottom-up graph-theoretic merging in many early segmentation methods. The authors, however, failed to mention and compare with many related works from the general vision community. Although the problem of general image segmentation somewhat differs from the task of this paper, showing certain results on general image segmentation datasets (like the GALA paper) and comparing with state-of-the-art general segmentation methods may give a better view of the performance of the proposed method. 3. Certain parts did not provide enough explanation, or are flooded with too many details and fail to give the big picture clearly enough. For example in Appendix B Definition 2, when explaining the relationship between shape descriptor and connectivity region, some graphical illustrations would have helped readers understand the ideas in the paper much more easily. ----------------------------------------------------------------------------- Additional Comments after Discussion Upon carefully reading the rebuttal and further reviewing some of the related literature, I have decided to downgrade my scores on novelty and impact. Here are some of the reasons: 1. I understand this paper targets a problem which somewhat differs from general segmentation problems. And I do very much appreciate its potential benefit to the neuroscience community. This is indeed a plus for the paper. However, an important question is how much this paper can really improve over the existing solutions. Therefore, to demonstrate that the algorithm is able to correctly find closed contours, and really show stronger robustness against weak boundaries (this is especially important for bottom-up methods), the authors do need to refer to more recent trends in the vision community. 2. I noticed one reference "Maximin affinity learning of image segmentation, NIPS 2009" cited in this paper. The paper proposed a very elegant solution to affinity learning. The core ideas proposed in this paper, such as greedy merging and a Rand-Index-like energy function, show strong connections to the cited paper. The maximin affinity is basically the weakest edge along a minimum spanning tree (MST), and we know greedy region merging is also based on cutting weakest edges on a MST. The slight difference is that the authors propose energy terms at many positions and scales, whereas the previous paper has a single energy term for the global image. In addition, the cited paper also addressed a similar problem in its experiments.
However, the authors not only did not include the citation as an experimental baseline, but also failed to provide detailed discussions on the relation between the two works. 3. The authors argued for the greedy strategy, claiming this is better than factorizable energies. "By sacrificing factorization, we are able to achieve the rich combinatorial modeling provided by the proposed 3d shape descriptors." I guess what the authors mean is they are putting more emphasis on local predictions. But this statement is not solidly justified by the paper. In addition, although to some extent I could understand this argument (local prediction indeed seems important because cells are much smaller relative to the volume than segments are relative to the whole image in general segmentation), there are significant advances in making strong local predictions from the general vision community using deep learning, which the authors fail to mention or compare with. I think overall the paper addressed an interesting problem and indeed showed solid research work. However, it would be better if the authors could better address the contemporary segmentation literature and further improve the experiments.
1. I understand this paper targets a problem which somewhat differs from general segmentation problems. And I do very much appreciate its potential benefit to the neuroscience community. This is indeed a plus for the paper. However, an important question is how much this paper can really improve over the existing solutions. Therefore, to demonstrate that the algorithm is able to correctly find closed contours, and really show stronger robustness against weak boundaries (this is especially important for bottom-up methods), the authors do need to refer to more recent trends in the vision community.
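For reference, the maximin affinity mentioned in the NIPS_2016_221 review above (from the cited NIPS 2009 paper) is usually defined as follows; the notation here is generic rather than taken from either paper.

```latex
% Maximin affinity between voxels i and j, given edge affinities a_e:
A^{*}_{ij} = \max_{P \in \mathcal{P}_{ij}} \; \min_{e \in P} a_e ,
% where \mathcal{P}_{ij} is the set of paths from i to j. The maximizing path lies on a
% maximal spanning tree of the affinities (equivalently, a minimum spanning tree of the
% corresponding distances), so A^{*}_{ij} equals the weakest affinity on the tree path
% between i and j, which is why greedy region merging (cutting the weakest tree edges)
% is closely related to maximin affinity learning, as the reviewer notes.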
NIPS_2021_2095
NIPS_2021
A primary concern is how much the proposed method deviates from existing HRL methods which also consider action repeats. Perhaps the key added value is the extension to continuous control for a hybrid action space. It might be useful to elaborate on this in the discussion/related work. I would encourage you to expand the related work section to cover more HRL literature and discuss where the paper is positioned in the context of broader HRL methods. Arguably, most HRL methods allow for switching between flat actions (one-step policies) and abstract actions (multi-step policies), especially intra-option learning methods, which is the primary claim of this work. So what have we learned here which allows us to scale better to the list of tasks studied here? What happens when we don’t specifically want action-reproducible policies? Currently, a lot of the improvements seem to stem from the nature of tasks where action repeat might be beneficial. While the n-score and n-AUC are better as an average over tasks, TAAC is only marginally better than or on par with the top baselines in most tasks, as reported in Figure 8 (the unnormalized reward curves of the 14 tasks). It might be useful to expand on this in the main paper: why did you choose n-score and n-AUC as the evaluation metrics and consider averaging over all tasks as the primary metric? Empirical Analysis: The experiments evaluate TAAC in 5 categories of 14 continuous control tasks, covering simple control, locomotion, terrain walking (Brockman et al., 2016), manipulation (Plappert et al., 2018), and self-driving. The baselines are chosen fairly and compare a wide range of action-repeat methods. How do these methods compare to non-action-repeat baselines? Any insights on this would be great. In particular, I am curious: if the action being repeated does not encourage exploration or performance, how does the method overcome such a choice? How many random seeds have been considered? Please mention this when reporting results. Writing and Presentation: The paper can be presented better in terms of explaining how this approach differs from existing approaches which consider action repeats. Consider a summary table which can highlight the novelty of the approach as compared to the many other off-policy HRL approaches mentioned by the authors in Sec. 1, Sec. 2, and Sec. 5.2. There are several insights in the 'more observations' paragraph which are buried in Sec. 5.4. The paper might benefit from re-writing earlier parts to give some intuitions on key differences which this method brings to the table. For example: 1) Persistent exploration and the compare-through operator are crucial to the success of this approach. 2) The importance of the formulation of the closed-loop action repetition. Yes, the authors have discussed societal impact. The limitations of the work are also covered in different sections of the paper - it would be valuable to consolidate them, perhaps in the conclusion section.
1) Persistent exploration and the compare-through operator are crucial to the success of this approach.
ICLR_2022_2123
ICLR_2022
of this submission and make suggestions for improvement: Strengths - The authors provide a useful extension to existing work on VAEs, which appears to be well-suited for the target application they have in mind. - The authors include both synthetic and empirical data as test cases for their method and compare it to a range of related approaches. - I especially appreciated that the authors validated their method on the empirical data and also provide an assessment of face validity using established psychological questionnaires (BDI and AQ). - I also appreciated the ethics statement pointing out that the method requires additional validation before it may enter the clinic. - The paper is to a great extent clearly written. Weaknesses - In Figure 2 it seems that Manner-1 use of diagnostic information is more important than Manner-2 use of this information, which calls into question your choice to set lambda = 0.5 in Equation 3. Are you able to learn this parameter from the data? - Also in Figure 2, when applying your full model to the synthetic data, it appears to me that inverting your model underestimates the within-cluster variance (compared to the ground truth). Could it be that your manner-1 use of information introduces constraints that are too strong, as they do not allow for this variance? - It would strengthen your claims of “superiority” of your approach over others if you could provide a statistical test that shows that your approach is indeed better at recovering the true relationship compared to others. Please provide such tests. - Important information about the empirical study is missing and should be mentioned in the supplement, such as the MRI recording parameters and preprocessing steps; was the resting state recorded under an eyes-open or eyes-closed condition? A brief explanation of the harmonization technique would also be appreciated. It would also be helpful to mention the number of regions in the parcellation in the main text. - The validation scheme using the second study is not clear to me. Were the models trained on dataset A and then directly applied to dataset B, or did you simply repeat the training on dataset B? If the latter is the case, I would refer to this as a replication dataset and not a validation dataset (which would require applying the same model on a new dataset, without retraining). - Have you applied multiple-testing correction for the FID comparisons across diagnoses? If so, which? If not, you should apply it and state that clearly in the main manuscript. - It is somewhat surprising that the distance between SCZ and MDD is shorter than between SCZ and ASD, as the latter two are often viewed as closely related. It might be helpful to discuss why that may be the case in more detail. - The third ethics statement is not clear to me. Could you clarify? - The font size in the figures is too small. Please increase it to improve readability.
- Important information about the empirical study is missing and should be mentioned in the supplement, such as the MRI recording parameters and preprocessing steps; was the resting state recorded under an eyes-open or eyes-closed condition? A brief explanation of the harmonization technique would also be appreciated. It would also be helpful to mention the number of regions in the parcellation in the main text.
NIPS_2017_401
NIPS_2017
Weakness: 1. There are no collaborative games in the experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings. 2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and centralized RL where agents share weights. For instance, Foerster et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016. 3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods. 4. The proposed metric only works in the case of two players. The authors have not discussed whether it can be applied to more players. Initial Evaluation: This paper offers an analysis of the effectiveness of policy learning by existing approaches, with little extension, in two-player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility: Appears to be reproducible.
3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods.
ICLR_2023_3381
ICLR_2023
The authors claim that they bridge an important gap between IBC [2] and RvS by modeling the dependencies between the state, action, and return with an implicit model (Page 6). However, since IBC already proposes to use an implicit model to capture the dependencies between the state and action, I think the contribution of this paper is to introduce the return from RvS into the implicit model. Thus, the proposed method looks like a combination of IBC and RvS. The authors conduct experiments in Section 5.1 to show the advantages of the implicit model. However, such advantages are similar to those of IBC, which could hurt the novelty of this paper. The authors may want to highlight the novelty of the proposed method against IBC. The discussions of the empirical results in Sections 5.1 and 5.2.2 are missing. The authors may want to explain: 1) why the RvS method fails to reach either goal and converges to the purple point in Figure 4(b); 2) why the explicit methods perform better than implicit methods on the locomotion tasks. The pseudo-code of the proposed method is missing. [1] Søren Asmussen and Peter W Glynn. Stochastic simulation: algorithms and analysis, volume 57. Springer, 2007. [2] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Proceedings of the 5th Conference on Robot Learning. PMLR, 2022.
2) why the explicit methods perform better than implicit methods on the locomotion tasks. The pseudo-code of the proposed method is missing. [1] Søren Asmussen and Peter W Glynn. Stochastic simulation: algorithms and analysis, volume 57. Springer, 2007. [2] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Proceedings of the 5th Conference on Robot Learning. PMLR, 2022.
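To make the explicit/implicit contrast in the ICLR_2023_3381 review above concrete, a generic formulation (my notation, not the paper's) is: an explicit RvS-style policy maps the state and return directly to an action, while an IBC-style implicit model extended with the return scores state-action-return triples with an energy and selects the action by optimisation.

```latex
% Explicit, RvS-style return-conditioned policy:
a = \pi_\theta(s, R)
% Implicit, IBC-style energy model extended with the return R:
\hat{a} = \arg\min_{a \in \mathcal{A}} E_\theta(s, a, R)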
ICLR_2021_1533
ICLR_2021
1) The nature of the contribution with respect to ECE_sweep is not clearly described in the text. Concretely, this amounts to a way to choose the number of bins using data (i.e., autotuning a hyperparameter in the estimate). While this, of course, leads to a different estimator, this is not something fundamentally different. I would much rather that the paper was upfront about the contribution. (In fact, I was pretty confused about the point the paper was making until I realised this). 2) I don't think the baseline comparisons made in the experiments are appropriate. The proposal is a method to choose the appropriate number of bins in the estimate, and should be compared to other methods for doing so instead of to an arbitrary choice of the number of bins, as is done in section 5.2. Without this comparison, I have no way to judge if this is a good autotuning method or not. Reasonable comparisons could be, e.g., choosing b by cross validation, or, in equal-mass binning, choosing b so that each bin has a reasonable number of samples and the error in $\bar{y}_k$ is not too large. 3) While the focus of the paper is on bias, it should be noted that by searching over many different bin sizes, the variance of ECE_sweep may be inflated. If this is to such an extent that the gains in bias relative to other autotuning methods are washed out, then this estimator would not be good. To judge this requires at least that the variances for ECE_sweep are reported, but these are never mentioned in the main text. 4) The choice of laws in the simulation in section 3, which are used to illustrate the dependence of bias on the number of bins, is not aligned with the laws/curves in figure 3. Taking the latter as representative of the sort of laws and calibration curves that arise in practice, there are two issues: 4a) The pdfs of $f$ tend to be a lot more peaked near the end than the one explored in section 3 - this is borne out by the values of $\alpha$, $\beta$ in the fits in Table 1. Beta(1.1, 1) is remarkably flat compared to the curves in Fig 3. 4b) There seem to be a few different qualitative properties of the calibration curves - monotone but with a large intercept at 0; those with an inflection point in the middle; and those with the bulk lying below the $y = x$ line. In particular, all of them tend to have at least some region above the $y = x$ line. The choice of curve $c_2$ in section 3 doesn't completely align with any of these cases, but even if we make the case that it aligns with the third type, this leaves two qualitative behaviours unexplored. In fact, the choice of laws is such that the error of the hard classifier that thresholds $f$ at $1/2$ is 26%. I don't think we're usually interested in the calibration of a predictor as poor as this in practice. All of this makes me question the relevance of this simulation. Is the dependence of the bias on the number of bins as strong for the estimated laws as it is for these? Seeing the equivalents of figs 7 and 8 for the laws from section 5 would go a long way in sorting this out. 5) Experiments: As I previously mentioned, I don't think the correct baselines are compared to. Instead of posing the method against other autotuning schemes, just one choice of the number of bins is taken. This already makes it near impossible to judge the efficacy of this method. Despite this, even the data presented does not make a clear case for ECE_sweep. In Fig. 4 we see that the bias of EW_sweep is even worse than that of EW. This already means that the sweep estimate doesn't fix the issues of ECE_bin in all contexts.
It is the case that EM_sweep has better bias than EM, but again, for samples large enough for the variances to be under control, it seems like these numbers are both converging to the same value, so I don't see any distinct advantage when it comes to estimation (of course, this is moot because this isn't the right comparison anyway). Also, Fig. 5 is flawed because it compares EW and EM_sweep. It should either compare EM and EM_sweep, or EW and EW_sweep; I don't see why EW and EM_sweep are directly comparable. Minor issues: a) Algorithm (1) and the formula for ECE_sweep in section 4 don't compute the same thing. In Algorithm (1), you find the largest b such that the resulting $\bar{y}_k$ is a monotone sequence, and return the ECE_bin for this number of bins. In the formula, you maximise ECE_bin over all b that yield a monotone $\bar{y}_k$. From the preceding text, I assumed that the quantity in Algorithm (1) is intended. b) Why is the $L_p$ norm definition of the ECEs introduced at all? In the paper only $p = 2$ is used throughout. I feel like the $p$ just complicates things without adding much - even if you only present the $L_2$ definition, the fact that a generic $p$ can be used instead should be obvious to the audience. c) Design considerations for ECE_sweep - it is worth noting that accuracy is not all that we want in an estimate of calibration error. For instance, one might reasonably want to add this as a regulariser when training a model in order to obtain better calibrated solutions. One issue with ECE_sweep is that how the number of bins in the ECE_sweep estimate changes with a small change in model parameters seems very difficult to handle, which makes this a nondifferentiable loss. Broader issues of this form, and a discussion of how they may be mitigated, could lead to a more well-rounded paper. Comments: a) Exact monotonicity in the ECE_sweep proposal - I find the argument stemming from the monotonicity of the true calibration curve, and the idea of using this to nail down a maximum binning size, interesting. However, why should we demand exact monotonicity in the bin heights? Each $\bar{y}_k$ will have noise at the scale of roughly $b/n$ (for equal-mass binning with b bins), and in my opinion, violation of monotonicity at this scale should not be penalised. Also, what if a few $\bar{y}_k$'s decrease but most are increasing (i.e., the sequence has a few falling regions, but the bulk is increasing)? Perhaps instead of dealing with this crudely, the error of a shape-constrained estimator may serve as a better proxy. b) Isn't the procedure of parametrically fitting the pdf of $f$ and $\mathbb{E}[Y \mid f(X)]$, and then integrating the bias, a completely different estimator for the TCE of a model? In fact, if the laws are a good fit, as is claimed in section 5.1, then this plug-in estimator might do well simply because the integration is exact. In fact, since the fit is parametric, this can further be automatically differentiated (if, say, $f$ were a DNN), and thus used to train. c) It would be interesting to see what number of bins is ultimately adopted in the ECE_sweep computations that are performed. Overall opinion: The lack of comparison to appropriate baselines makes it near impossible for me to judge the validity of the proposed estimator. I feel like this is a deep methodological flaw when it comes to evaluating the main proposal of the paper. This is a real pity because I quite like some of the ideas in the paper.
Due to the inability to evaluate the main contribution of the paper, I am rating it a strong reject. I'd be completely open to re-rating it if appropriate comparisons are performed and the case for the method is properly made.
1) The nature of the contribution with respect to ECE_sweep is not clearly described in the text. Concretely, this amounts to a way to choose the number of bins using data (i.e., autotuning a hyperparameter in the estimate). While this, of course, leads to a different estimator, this is not something fundamentally different. I would much rather that the paper was upfront about the contribution. (In fact, I was pretty confused about the point the paper was making until I realised this).
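To make the Algorithm-1 reading described in the ICLR_2021_1533 review above concrete, here is a minimal sketch of an equal-mass binned $L_2$ ECE and a sweep that keeps the estimate at the largest number of bins whose bin accuracies are monotone. This is my simplified reconstruction from the review's description, not the paper's code; inputs are assumed to be NumPy arrays of top-class confidences and 0/1 correctness indicators.

```python
import numpy as np

def ece_equal_mass(conf, correct, b, p=2):
    """Binned L_p calibration error with b equal-mass bins (simplified sketch)."""
    order = np.argsort(conf)
    conf, correct = conf[order], correct[order]
    gap, ybars = 0.0, []
    for idx in np.array_split(np.arange(len(conf)), b):
        ybar = correct[idx].mean()                 # empirical accuracy in the bin
        cbar = conf[idx].mean()                    # mean confidence in the bin
        gap += len(idx) / len(conf) * abs(ybar - cbar) ** p
        ybars.append(ybar)
    return gap ** (1.0 / p), np.array(ybars)

def ece_sweep(conf, correct, p=2, max_bins=None):
    """Return the estimate at the largest b whose bin accuracies are monotone
    (the Algorithm-1 reading discussed in the review; the paper's formula instead
    maximises the estimate over all monotone b)."""
    max_bins = max_bins or len(conf)
    best = None
    for b in range(1, max_bins + 1):
        ece, ybars = ece_equal_mass(conf, correct, b, p)
        if np.all(np.diff(ybars) >= 0):            # monotone non-decreasing bin accuracies
            best = ece                             # overwrite: keep the largest monotone b
    return best
```

For example, `ece_sweep(np.array(confidences), np.array(correct, dtype=float))` would return the swept estimate on a held-out set of predictions.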
gDDW5zMKFe
ICLR_2024
1. At the heart of FIITED is the utility-based approach to determine chunk significance. However, basing eviction decisions purely on utility scores might introduce biases. For instance, recent chunks might gain a temporarily high utility, potentially leading to premature eviction of other valuable chunks. 2. This approach does not consider the individual significance of dimensions within a chunk, leading to potential information loss. 3. While the chunk address manager maintains a free address stack, this design assumes that the most recently evicted space is optimal for the next allocation. This might not always be the case, especially when considering the locality of data and frequent access patterns. 4. The system heavily depends on the hash table to fetch and manage embeddings. This approach, while efficient in accessing chunks, might lead to hashing collisions even though the design ensures a low collision rate. Any collision, however rare, can introduce latency in access times or even potential overwrites. 5. The methodology leans heavily on access frequency to decide on embedding significance. However, frequency doesn't always equate to importance. There could be rarely accessed but critically important embeddings, and the method might be prone to undervaluing them.
1. At the heart of FIITED is the utility-based approach to determine chunk significance. However, basing eviction decisions purely on utility scores might introduce biases. For instance, recent chunks might gain a temporarily high utility, potentially leading to premature eviction of other valuable chunks.
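As a deliberately simplified, hypothetical picture of the mechanism the ICLR_2024 review above critiques - a utility score per chunk plus a free-address stack that reuses the most recently freed slot - one might sketch:

```python
class ChunkStore:
    """Hypothetical simplification of a utility-scored embedding-chunk store with a
    free-address stack; not FIITED's actual implementation."""

    def __init__(self, capacity):
        self.free = list(range(capacity))   # free-address stack (LIFO reuse of freed slots)
        self.table = {}                     # chunk_id -> [address, utility]

    def access(self, chunk_id, utility):
        """Return the address of chunk_id, evicting the lowest-utility chunk if needed."""
        if chunk_id in self.table:
            self.table[chunk_id][1] = utility        # refresh utility on access
            return self.table[chunk_id][0]
        if not self.free:                            # full: evict the lowest-utility chunk
            victim = min(self.table, key=lambda c: self.table[c][1])
            self.free.append(self.table.pop(victim)[0])
        addr = self.free.pop()                       # most recently freed slot is reused first
        self.table[chunk_id] = [addr, utility]
        return addr
```

The review's points 1 and 3 correspond, respectively, to the lowest-utility eviction rule and the LIFO reuse in `free.pop()` of this sketch.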
fkAKjbRvxj
EMNLP_2023
1. Have the author(s) thought about the reason why information value is a "stronger predictor" for dialogue (Complementarity on page 7 or the discussion on page 8)? Is there any existing linguistic theory which could explain it? If so, adding that would make this a stronger paper. 2. It appears that a different information value serves as the strong predictor in each of the 5 chosen corpora; for example, in PROVO it is the syntactic information value. Again, do the author(s) already have a potential hypothesis for this phenomenon? Is it practical to do a linguistic analysis on these 5 corpora to find the reason? 3. Is it possible that by increasing the set size of A_(x), the sentences generated by "ancestral sampling" could already cover some/all of the samples from other sampling methods, e.g., "temperature sampling"? As far as I know, both of the two mentioned sampling methods are based on the conditional probability, while typical sampling is a comparatively new sampling method which cuts off the dependence on the conditional probability to some degree and helps to generate more creative and diverse next sentences instead of entering a repetitive loop (Meister 2023). Based on that, I would suggest the author(s) make a statistical report about the distributions of generated sentences from the different sampling methods and maybe then select just two representative sampling methods based on that observation. Also, a reference to temperature sampling is needed.
1. Have the author(s) thought about the reason why information value is a "stronger predictor" for dialogue (Complementarity on page 7 or the discussion on page 8)? Is there any existing linguistic theory which could explain it? If so, adding that would make this a stronger paper.
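For the comparison of decoding methods raised in question 3 of the EMNLP_2023 review above, a minimal sketch (generic notation; the paper's actual decoding settings are unknown to me) shows that ancestral sampling is the temperature = 1 special case of temperature sampling, with both drawing from the model's conditional next-token distribution; typical sampling instead filters tokens by how close their information content is to the conditional entropy before renormalising.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Temperature sampling over next-token logits; temperature=1.0 recovers
    plain ancestral sampling from the model's conditional distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stabilised
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)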
ICLR_2021_665
ICLR_2021
Weakness 1 The way of using GP is kind of straightforward and naive. In the GP community, dynamical modeling has been widely investigated, starting from the Gaussian Process Dynamical Model at NIPS 2005. 2 I do not quite get the modules of LSTM Frame Generation and GP Frame Generation in Eq (4). Where are these modules in Fig. 3? The D in Stage 3? Using a GP to generate images: does that make sense? A GP is more suitable for working in the latent space, isn't it? 3 The datasets are not quite representative, due to the simple and experimental scenarios. Moreover, the proposed method is like a fundamental piece of work. But is it useful for high-level research topics, e.g., large-scale action recognition, video captioning, etc.?
1 The way of using GP is kind of straightforward and naive. In the GP community, dynamical modeling has been widely investigated, starting from the Gaussian Process Dynamical Model at NIPS 2005.
ARR_2022_141_review
ARR_2022
- The approach description (§ 3) is partially difficult to follow and should be revised. The additional page of the camera-ready version should be used to extend the approach description (rather than adding more experiments). - CSFCube results are not reported with the same metrics as in the original publication, making a comparison harder than needed. - The standard deviation from the Appendix could be added to Table 1, at least for one metric. There should be enough horizontal space. - The notation of BERT_\theta and BERT_\epsilon is confusing. Explicitly calling them the co-citation sentence encoder and the paper encoder could make it clearer. - How are negatives sampled for BERT_\epsilon? Additional relevant literature: - Luu, K., Wu, X., Koncel-Kedziorski, R., Lo, K., Cachola, I., & Smith, N.A. (2021). Explaining Relationships Between Scientific Documents. ACL/IJCNLP. - Malte Ostendorff, Terry Ruas, Till Blume, Bela Gipp, Georg Rehm. Aspect-based Document Similarity for Research Papers. COLING 2020. Typos: - Line 259: “cotation” - Line 285: Missing “.”
- The approach description (§ 3) is partially difficult to follow and should be revised. The additional page of the camera-ready version should be used to extend the approach description (rather than adding more experiments).
R4h5PXzUuU
ICLR_2025
1. Limited Explanation of Failures: Although the paper provides examples of failure cases, it does not fully delve into the underlying causes or offer detailed solutions for these issues. While the authors acknowledge the problem of overconfidence in models such as GPT-4o with ReGuide, further exploration of these limitations could strengthen the study. There is still no robust method to effectively introduce LVLMs to OoD tasks. 2. Model-Specific Insights: The paper focuses on generic findings across models, but a deeper investigation into how specific models (e.g., GPT-4o vs. InternVL2) behave differently when ReGuide is applied could add nuance to the conclusions. For example, the differences in false positive rates (FPR) between models with and without ReGuide should be presented for a better comparison. 3. Scalability and Practicality: While the ReGuide method shows a promising direction, the computational overhead and API limitations mentioned in the paper could present challenges for practical, large-scale implementation. This issue is touched upon but not sufficiently addressed in terms of how ReGuide might be optimized for deployment at scale. Meanwhile, an inference-cost analysis could further improve the paper's quality and inspire further work.
2. Model-Specific Insights: The paper focuses on generic findings across models, but a deeper investigation into how specific models (e.g., GPT-4o vs. InternVL2) behave differently when ReGuide is applied could add nuance to the conclusions. For example, the differences in false positive rates (FPR) between models with and without ReGuide should be presented for a better comparison.
ICLR_2023_4455
ICLR_2023
1) Using the center's representation to conduct panoptic segmentation is too similar to PanopticFCN. The core difference would be the island centers for stuff; however, according to Table 6, they do not bring significant improvements. 2) Although MaskConver gets significantly better performance than previous works, it is not clear where these improvements come from. It lacks a roadmap-like ablation study from the baseline to MaskConver. For example, in Table 5, the backbones and input sizes are all different among different models, which is not a fair or clear comparison. 3) The novelty of this paper is limited, as it does not propose new modules or training strategies. Since it does not provide detailed ablations, one might suspect that the improvements mainly come from a highly engineered strong baseline. 4) Some other representative panoptic segmentation models are not compared against, like PanopticFPN, Mask2Former, etc.
4) Some other representative panoptic segmentation models are not compared, like PanopticFPN, Mask2Former, etc.
ICLR_2021_863
ICLR_2021
Weakness 1. The presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques including Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the illustration of Distributional RL lacks clarity. 3. The details of state representation are not explained clearly. For an end-to-end method like DRL, the state representation is crucial for training a good agent, as is the network architecture. 4. The experiments are not comprehensive for validating that this algorithm works well in a wide range of scenarios. The efficiency, especially the time efficiency of the proposed algorithm, is not shown. Moreover, other DRL benchmarks, e.g., TD3 and DQN, should also be compared with. 5. There are typos and grammar errors. Detailed Comments 1. Section 3.1, first paragraph, quotation mark error for "importance". 2. Appendix A.2 does not illustrate the state space representation of the environment clearly. 3. The authors should state clearly why the complete state history is enough to reduce the POMDP for the no-CSI case. 4. Section 3.2.1: The first expression for $J(\theta)$ is incorrect, which should be $Q(s_{t_0}, \pi_\theta(s_{t_0}))$. 5. The paper did not explain Figure 2 clearly. In particular, what does the curve with the label "Expected" in Fig. 2(a) stand for? Not to mention there are multiple misleading curves in Fig. 2(b)&(c). The benefit of introducing distributional RL is not clearly explained. 6. In Table 1, only 4 classes of users are considered in the experiment sections, which might not be in accordance with practical situations, where there can be more classes of users in the real system and more user numbers. 7. In the experiment sections, the paper only showed that the Satisfaction Probability of the proposed method is larger than that of conventional methods. The algorithm complexity, especially the time complexity of the proposed method in an ultra multi-user scenario, is not shown. 8. There is a large literature on wireless scheduling with latency guarantees from the networking community, e.g., Sigcomm, INFOCOM, Sigmetrics. Representative results there should also be discussed and compared with. ====== post rebuttal: My concern regarding the experiments remains. I will keep my score unchanged.
4. Section 3.2.1: The first expression for $J(\theta)$ is incorrect, which should be $Q(s_{t_0}, \pi_\theta(s_{t_0}))$.
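For readers skimming this point: the corrected expression the reviewer has in mind is presumably a DDPG-style actor objective, i.e., the critic's value of the policy's own action at the initial state. The expectation below is an assumption added for illustration and is not quoted from the reviewed paper:

```latex
% Hedged sketch of the corrected objective: the actor is scored by the critic's
% value of its own action at the initial state s_{t_0}.
J(\theta) \;=\; \mathbb{E}_{s_{t_0}}\!\left[\, Q\bigl(s_{t_0}, \pi_{\theta}(s_{t_0})\bigr) \right]
```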
wcgfB88Slx
EMNLP_2023
The following are the questions I have and these are not necessarily 'reasons to reject'. - I was looking for a comparison with the zero-shot chain-of-thought baseline, which the authors refer to as ZOT (Kojima et al., 2022). The example selection method has a cost. Also, few-shot experiments involve a higher token usage cost than zero-shot ones. - Some of the numbers when comparing the proposed method vs. baselines seem to be pretty close. Wondering if the authors did any statistical significance test? - A parallel field to explanation selection is prompt/instruction engineering, where we often change the zero-shot instruction. Another alternative is prompt-tuning via gradient descent. Wondering if the authors have any thoughts regarding the tradeoff. - Few-shot examples have various types of example biases such as majority bias, recency bias, etc. (http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf, https://aclanthology.org/2023.eacl-main.130/, https://aclanthology.org/2022.acl-long.556.pdf). Wondering if the authors have any thoughts on what the robustness looks like with the application of their method? I am looking forward to hearing answers to these questions from the authors.
- Some of the numbers when comparing the proposed method vs. baselines seem to be pretty close. Wondering if the authors did any statistical significance test?
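As an illustration of the kind of check the reviewer is asking for, here is a minimal paired-bootstrap test on per-example scores of two systems; the score arrays and resample count are hypothetical placeholders, not numbers from the paper:

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Estimate how often system B is at least as good as system A under paired resampling."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)            # resample examples with replacement
        if scores_b[idx].mean() >= scores_a[idx].mean():
            wins += 1
    return wins / n_resamples

# Hypothetical per-example accuracies (0/1) for a baseline and the proposed method.
baseline = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
proposed = [1, 1, 1, 1, 0, 1, 0, 1, 1, 1]
print(paired_bootstrap(baseline, proposed))          # values close to 1.0 suggest a reliable improvement
```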
NIPS_2019_1049
NIPS_2019
- While the types of interventions included in the paper are reasonable computationally, it would be important to think about whether they are practical and safe for querying in the real world. - The assumption of disentangled factors seems to be a strong one given factors are often dependent in the real world. The authors do include a way to disentangle observations though, which helps to address this limitation. Originality: The problem of causal misidentification is novel and interesting. First, identifying this phenomenon as an issue in imitation learning settings is an important step towards improved robustness in learned policies. Second, the authors provide a convincing solution as one way to address distributional shift by discovering the causal model underlying expert action behaviors. Quality: The quality of the work is high. Many details are not included in the main paper, but the appendices help to clarify some of the confusion. The authors evaluated the approach on multiple domains with several baselines. It was particularly helpful to see the motivating domains early on with an explanation of how the problem exists in these domains. This motivated the solution and experiments at the end. Clarity: The work was very well-written, but many parts of the paper relied on pointers to the appendices so it was necessary to go through them to understand the full details. There was a typo on page 3: Z_t → Z^t. Significance: The problem and approach can be of significant value to the community. Many current learning systems fail to identify important features relevant for a task due to limited data and due to the training environment not matching the real world. Since there will almost always be a gap between training and testing, developing approaches that learn the correct causal relationships between variables can be an important step towards building more robust models. Other comments: - What if the factors in the state are assumed to be disentangled but are not? What will the approach do/in what cases will it fail? - It seems unrealistic to query for expert actions at arbitrary states. One reason is because states might be dangerous, as the authors point out. But even if states are not dangerous, parachuting to a particular state would be hard practically. The expert could instead be simply presented a state and asked what they would do hypothetically (assuming the state representations of the imitator and expert match, which may not hold), but it could be challenging for an expert to hypothesize what he or she would do in this scenario. Basically, querying out of context can be challenging with real users. - In the policy execution mode, is it safe to execute the imitator’s learned policy in the real world? The expert may be capable of acting safely in the world, but given that the imitator is a learning agent, deploying the agent and accumulating rewards in the real world can be unsafe. - On page 7, there is a reference to equation 3, which doesn’t appear in the main submission, only in the appendix. - In the results section for intervention by policy execution, the authors indicate that the current model is updated after each episode. How long does this update take? - For the Atari game experiments, how is the number of disentangled factors chosen to be 30? In general, this might be hard to specify for an arbitrary domain. - Why is the performance for DAgger in Figure 7 evaluated at fewer intervals? The line is much sharper than the intervention performance curve.
- The authors indicate that GAIL outperforms the expert query approach but that the number of episodes required are an order of magnitude higher. Is there a reason the authors did not plot a more equivalent baseline to show a fair comparison? - Why is the variance on Hopper so large? - On page 8, the authors state that the choice of the approach for learning the mixture of policies doesn’t matter, but disc-intervention obtains clearly much higher reward than unif-intervention in Figures 6 and 7, so it seems like it does make a difference. ----------------------------- I read the author response and was happy with the answers. I especially appreciate the experiment on testing the assumption of disentanglement. It would be interesting to think about how the approach can be modified in the future to handle these settings. Overall, the work is of high quality and is relevant and valuable for the community.
- While the types of interventions included in the paper are reasonable computationally, it would be important to think about whether they are practical and safe for querying in the real world.
ICLR_2023_1645
ICLR_2023
1. Can this method be used on both SEEG and EEG simultaneously? 2. It would be better to compare with other self-supervised learning methods that are not based on contrastive learning.
2. It would be better to compare with other self-supervised learning methods that are not based on contrastive learning.
6iM2asNCjK
ICLR_2024
1. My primary concern is with the limited scope of the paper. The paper primarily considers only evaluating sentence embeddings from LLMs, which, while important, is a small part of the overall evaluation landscape of LLMs. Consequently, the title "Robustness-Accuracy characterization of Large Language Models using synthetic datasets" is somewhat misleading. Furthermore, the generation methodology for the synthetic tasks using SentiWordNet for polarity detection does seem somewhat restrictive. For sentence embedding evaluation, it does seem to be a good methodology, but it is not clear how well it would generalize to any generative tasks (e.g. question answering, summarization, etc.). Whether this metric can be leveraged for other tasks (especially for a different class of tasks) needs to be demonstrated in my opinion. 2. While the proposed methodology of using a ratio of positive / negative to neutral sentiment words is a good way of defining difficulty, it does seem somewhat restrictive given the contextual nature of languages. Interesting linguistic phenomena such as sarcasm, irony, etc. are not captured by the proposed methodology, which arguably form a large part of the difficulty in language understanding, especially for such large LLMs. While the authors briefly touch upon the issue of negation, negation in natural language is not limited to structured rules, and any methodology testing the robustness of LLMs should provide a way of capturing this, given that LLMs generally have a poor understanding of negations ([1]). 3. The baseline metrics are still computed on the synthetic dataset. For generative LLM model training, for example, this potentially results in bad sentence embeddings, which subsequently may result in bad task performance. This is especially problematic when done for a single dataset (as is the case for all the baseline metrics). In contrast, the proposed SynTextBench benefits from aggregating across different difficulty levels, and is somewhat more robust to this issue compared to the baseline metrics. A better way of considering the baselines might be to treat them in the same way as SynTextBench is treated (aggregated across different difficulty levels, thresholded for some value of the metric, and then computing the area under the curve). 4. Additionally, there has been a large amount of work on LLM evaluation [2]. While some of the metrics there do not satisfy the proposed desiderata, it would still be good to see how the SynTextBench metric compares to the other metrics proposed in the literature. Concretely, from the paper, it is hard to understand under what conditions one should use SynTextBench over other metrics (e.g., MMLU / Big Bench for language generation).
4. Additionally, there has been a large amount of work on LLM evaluation [2]. While some of the metrics there do not satisfy the proposed desiderata, it would still be good to see how the SynTextBench metric compares to the other metrics proposed in the literature. Concretely, from the paper, it is hard to understand under what conditions one should use SynTextBench over other metrics (e.g., MMLU / Big Bench for language generation).
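To make the reviewer's suggestion in point 3 concrete, a rough sketch of treating a baseline metric the way SynTextBench is treated (score it per difficulty level, threshold, and take the area under the resulting curve) might look as follows. The difficulty grid, scores, and threshold are invented for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical per-difficulty scores for some baseline metric (difficulty in [0, 1]).
difficulties = np.linspace(0.0, 1.0, 11)
scores = np.array([0.95, 0.93, 0.90, 0.86, 0.80, 0.74, 0.66, 0.60, 0.55, 0.52, 0.50])

threshold = 0.6                                    # assumed acceptance threshold on the metric
passed = (scores >= threshold).astype(float)       # 1.0 where the metric clears the threshold

# Trapezoidal area under the thresholded curve over difficulty,
# analogous to aggregating across difficulty levels into a single number.
area = float(np.sum((passed[:-1] + passed[1:]) / 2 * np.diff(difficulties)))
print(f"aggregated baseline score: {area:.3f}")
```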
NIPS_2018_914
NIPS_2018
of the paper are (i) the presentation of the proposed methodology to overcome that effect and (ii) the limitations of the proposed methods for large-scale problems, which is precisely when function approximation is required the most. While the intuition behind the two proposed algorithms is clear (to keep track of partitions of the parameter space that are consistent in successive applications of the Bellman operator), I think the authors could have formulated their idea in a clearer way, for example, using tools from the Constraint Satisfaction Problems (CSPs) literature. I have the following concerns regarding both algorithms: - the authors leverage the complexity of checking on the Witness oracle, which is "polynomial time" in the tabular case. This feels like not addressing the problem in a direct way. - the required implicit call to the Witness oracle is confusing. - what happens if the policy class is not realizable? I guess the algorithm converges to an \empty partition, but that is not the optimal policy. minor: line 100 : "a2 always moves from s1 to s4 deterministically" is not true line 333 : "A number of important direction" -> "A number of important directions" line 215 : "implict" -> "implicit" - It is hard to understand the figure where all methods are compared. I suggest moving the figure to the appendix and keeping a figure with fewer curves. - I suggest changing the name of partition function to partition value. [I am satisfied with the rebuttal and I have increased my score after the discussion]
- the authors leverage the complexity of checking on the Witness oracle, which is "polynomial time" in the tabular case. This feels like not addressing the problem in a direct way.
NIPS_2016_482
NIPS_2016
of the method (see above) would clearly help in making the case for its impact. Clarity: The paper is very clearly written and easy to follow. It would be interesting to see a version of Fig. 1 including error bars estimated from the method - it seems that currently only the estimated means are ever used. More emphasis could be put on explaining the big picture of when the method is actually useful too. Other comments/questions: 1. In Eq. (1), why does y depend on theta but not f? 2. It would be nice to know the source of the variance seen in Fig. 1.
1. In Eq. (1), why does y depend on theta but not f?
ICLR_2023_2934
ICLR_2023
- Fig. 1 leaves me with some doubts. It would seem that the private task is solved by using only a head operating on the learned layer for the green task (devanagari). This is at least what I would expect for the claims of the method to still hold, because if the private task head can alter the weights of the Transformer layer 1 then information from the private task is flowing into the network. I would appreciate it if the authors could clarify this. - Overall a lot of choices seem to lead towards the necessity for large compute power. The choice of modifying hyperparameters only by a one-hop neighbor is quite restrictive and it implies that we have to evolve/search for quite a while before stumbling on the correct hyperparams. The layer cloning and mutation probability hyperparameter is set at random by the evolutionary process, implying that the level of overall randomness is very high and therefore large training times are needed to get stable results or be able to reproduce the claimed results (considering the authors use DNN architectures). The authors mention that "the score can be defined to optimize a mixture of factors depending on application requirements". It would have been nice to see what the tradeoff between training time and model size vs optimal multi-task performance is, especially considering these high levels of randomness present in the proposed approach. (and also for others to be able to reproduce somewhat similar results on less compute). - The parameters in Table 1, the model and the experiments seem to be only good for image data and ViT. Did the authors try to apply the same principles to other research areas such as NLP or simpler models in the image domain (CNNs)? I understand the latter might be due to the focus on state-of-the-art performance, but it would show that the method can generalize to different architectures and tasks, not just transformers in vision.
- The parameters in Table 1, the model and the experiments seem to be only good for image data and ViT. Did the authors try to apply the same principles to other research areas such as NLP or simpler models in the image domain (CNNs)? I understand the latter might be due to the focus on state-of-the-art performance, but it would show that the method can generalize to different architectures and tasks, not just transformers in vision.
NIPS_2017_486
NIPS_2017
1. The paper is motivated by using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information – which phrase is incorrect, what is the correct phrase, and what is the type of the mistake. Using these additional pieces is more than just natural language feedback. So I would like the authors to be clearer about this in the introduction. 2. The improvements of the proposed model over the RL without feedback model are not so high (row3 vs. row4 in table 6), in fact a bit worse for BLEU-1. So, I would like the authors to verify if the improvements are statistically significant. 3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback? 4. In figure 1 caption, the paper mentions that in training the feedback network, along with the natural language feedback sentence, the phrase marked as incorrect by the annotator and the corrected phrase is also used. However, from equations 1-4, it is not clear where the information about incorrect phrase and corrected phrase is used. Also L175 and L176 are not clear. What do the authors mean by “as an example”? 5. L216-217: What is the rationale behind using cross entropy for first (P – floor(t/m)) phrases? How is the performance when using the reinforcement algorithm for all phrases? 6. L222: Why is the official test set of MSCOCO not used for reporting results? 7. FBN results (table 5): can the authors please shed light on why the performance degrades when using the additional information about missing/wrong/redundant? 8. Table 6: can the authors please clarify why the MLEC accuracy using ROUGE-L is so low? Is that a typo? 9. Can the authors discuss the failure cases of the proposed (RLF) network in order to guide future research? 10. Other errors/typos: a. L190: complete -> completed b. L201, “We use either … feedback collection”: incorrect phrasing c. L218: multiply -> multiple d. L235: drop “by” Post-rebuttal comments: I agree that proper evaluation is critical. Hence I would like the authors to verify that the baseline results [33] are comparable and the proposed model is adding on top of that. So, I would like to change my rating to marginally below acceptance threshold.
3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback?
Yz4VKLeZMG
EMNLP_2023
1. Generalizability: both fine-tuning and in-context learning strategies seem to be tailored for shifting model attention to a smaller chunk of key information, which is confirmed by the attention weight analysis. This makes me worry about the extent to which this method could be generalized to other datasets where the information is not presented in contrastive pairs, and where the conflicting information is not restricted to one or two sentences but widely spread in the entire passage. For example, the task of identifying conflicting sentences might have over-simplified the reasoning task by taking the short-cut of ignoring lots of information, which just happens to be trivial in these specific datasets. 2. The developed strategies, while interesting, might just be marginally relevant to the cognitive process of heuristic / analytical dual passes of human reasoning. The heuristic reasoning process is more related to the information being utilized and the amount of attention paid to more fine-grained details. For instance, in online language comprehension, comprehenders might ignore fine-grained syntactic structure and rely on the semantic meaning of the words and their prior knowledge to interpret "the hearty meal was devouring..." as "the hearty meal was devoured..." (Kim and Osterhout, 2005). However, the heuristic process is less concerned with the granularity of the final decision, as implied by the HAR model. The HAR framework breaks the reasoning tasks into multiple sub-tasks, where the granularity of the decision gradually becomes finer-grained. This might be better characterized as a step-by-step chain of reasoning rather than heuristic decision-making. 3. The ICL-HAR, while improving consistency and verifiability, greatly impedes the accuracy scores (dropping from 70.4 to 55.6 on TRIP). This should be discussed or at least acknowledged in the main text in more detail.
3. The ICL-HAR, while improving consistency and verifiability, greatly impedes the accuracy scores (dropping from 70.4 to 55.6 on TRIP). This should be discussed or at least acknowledged in the main text in more detail.
NIPS_2016_386
NIPS_2016
, however. First of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it is just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give.
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm.
* L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else.
NIPS_2019_499
NIPS_2019
of the method. Are there any caveats to practitioners due to some violation of the assumptions given in Appendix. B or for any other reasons? Clarity: the writing is highly technical and rather dense, which I understand is necessary for some parts. However, I believe the manuscript would be readable to a broader audience if Sections 2 and 3 are augmented with more intuitive explanations of the motivations and their proposed methods. Many details of the derivations could be moved to the appendix and the resultant space could be used to highlight the key machinery which enabled efficient inference and to develop intuitions. Many terms and notations are not defined in text (as raised in "other comments" below). Significance: the empirical results support the practical utility of the method. I am not sure, however, if the experiments on synthetic datasets, support the theoretical insights presented in the paper. I believe that the method is quite complex and recommend that the authors release the codes to maximize the impact. Other comments: - line 47 - 48 "over-parametrization invariably overfits the data and results in worse performance": over-parameterization seems to be very helpful for supervised learning of deep neural networks in practice ... Also, I have seen a number of theoretical work showing the benefits of over-parametrisation e.g. [1]. - line 71: $\beta$ is never defined. It denotes the set of model parameters, right? - line 149-150 "the convergence to the asymptotically correct distribution allows ... obtain better point estimates in non-convex optimization.": this is only true if the assumptions in Appendix. B are satisfied, isn't it? How realistic are these assumptions in practice? - line 1: MCMC is never defined: Markov Chain Monte Carlo - line 77: typo "gxc lobal"=> "global" - eq.4: $\mathcal{N}$ and $\mathcal{L}$ are not defined. Normal and Laplace I suppose. You need to define them, please. - Table 2: using the letter `a` to denote the difference in used models is confusing. - too many acronyms are used. References: [1] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. "A convergence theory for deep learning via over-parameterization." arXiv preprint arXiv:1811.03962 (2018). ---------------------------------------------------------------------- I am grateful that the authors have addressed most of the concerns about the paper, and have updated my score accordingly. I would like to recommend for acceptance provided that the authors reflect the given clarifications in the paper.
- line 47 - 48 "over-parametrization invariably overfits the data and results in worse performance": over-parameterization seems to be very helpful for supervised learning of deep neural networks in practice ... Also, I have seen a number of theoretical work showing the benefits of over-parametrisation e.g. [1].
X4ATu1huMJ
ICLR_2024
**Overall comment** The paper discusses evaluating TTA methods across multiple settings, and how to choose the correct method during test-time. I would argue most of the methods/model selection strategies that are discussed in the paper are not novel and/or existed before, and the paper does not have a lot of algorithmic innovation. While this discussion unifies various prior methods and can be a valuable guideline for practitioners to choose the appropriate TTA method, there need to be more experiments to make it a compelling paper (i.e., add MEMO [1] as a method of comparison, add WILDS-Camelyon 17 [9], WILDS-FMoW [9], ImageNet-A [7], CIFAR-10.1 [8] as dataset benchmarks). But I do feel the problem setup is very important, and adding more experiments and a bit of rewriting can make the paper much stronger. **Abstract and Introduction** 1. The paper mentions model restarting to avoid error propagation. There has been important work in TTA, where the model adapts its parameters to only one test example at a time, and reverts back to the initial (pre-trained) weights after it has made the prediction, doing the process all over for the next test example. This is also an important setting to consider, where only one test example is available, and one cannot rely on batches of data from a stream. For example, see MEMO [1]. 2. (nitpicking, not important to include in the paper) “under extremely long scenarios all existing TTA method results in degraded performance”, while this is true, the paper does not mention some recent works that help alleviate this. E.g., MEMO [1] in the one test example at a time scenario, or MEMO + surgical FT [2] where MEMO is used in the online setting, but parameter-efficient updating helps with feature distortion/performance degradation. So the claim is outdated. 3. It would be good to cite relevant papers such as [4] as prior works that look into model selection strategies (but not for the TTA setting) to motivate the problem statement. **Section 3.2, model selection strategies in TTA** 1. While accuracy-on-the-line [3] shows correlation between source (ID) and target (OOD) accuracies, some work [4] also says source accuracy is unreliable in the face of large domain gaps. I think table 3 shows the same result. Better to cite [4] and add their observation. 2. Why not look at agreement-on-the-line [5]? This is known to be a good way of assessing performance on the target domain without having labels. For example, A-Line [6] seems to have good performance on TTA tasks. This should also be considered as a model selection method. **Section 4.1, datasets** 1. Missing some key datasets such as ImageNet-A [7], CIFAR-10.1 [8]. It is important to consider ImageNet-A (to show TTA’s performance on adversarial examples) and CIFAR-10.1, to show TTA’s performance on CIFAR-10 examples where the shift is natural, i.e., not corruptions. Prior work such as MEMO [1] has used some of these datasets. **Section 4.3, experimental setup** 1. The architecture suite that is used is limited in size. Only ResNext-29 and ResNet-50 are used. Since the paper’s goal is to say something rigorous about model selection strategies, it is important to try more architectures to have a comprehensive result. At least some vision-transformer architecture is required to make the results strong. I would suggest trying RVT-small [12] or ViT-B/32 [13]. Why do the authors use SGD as an optimizer for all tasks? It has previously been shown [14] that SGD often performs worse for more modern architectures.
The original TENT [15] paper also claims they use SGD for ImageNet and for everything else they use Adam [16]. **Section 5, results** 1. (Table 1) It might be easier if the text mentions that each row represents one method, and each column represents one model selection strategy. When the authors say “green” represents the best number, they mean “within a row”. 2. (Different methods’ ranking under different selection strategies) The results here are unclear and hard to read. How many times does one method outperform the other, when considering all different surrogate based metrics across all datasets? If the goal is to show consistency of AdaContrast as mentioned in the introduction, a better way of presenting this might be making something similar to table 1 of [17]. 3. What does the **Median** column in Tables 2 and 3 represent? There is no explanation given for this in the paper. 4. I assume the 4 surrogate strategies are: S-Acc, Cross-Acc, Ent and Con. If so, then the statement **“While EATA is significantly the best under the oracle selection strategy (49.99 on average) it is outperformed for example by Tent (5th method using oracle selection) when using 3 out of 4 surrogate-based metrics”** is clearly False according to the last section of Table 2: Tent > EATA on Cross-Acc and Con, but EATA > Tent when using S-Acc and Ent. 5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
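For orientation on what the entropy-style surrogate ("Ent") discussed above typically boils down to, here is a minimal sketch; the logits are placeholders and the exact surrogate used in the paper may differ:

```python
import numpy as np

def mean_prediction_entropy(logits):
    """Average Shannon entropy of softmax predictions over a batch; lower is usually read as more confident."""
    logits = np.asarray(logits, dtype=float)
    logits = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float((-(probs * np.log(probs + 1e-12)).sum(axis=1)).mean())

# Hypothetical logits for a batch of 3 unlabeled test samples and 4 classes.
batch_logits = [[2.0, 0.1, -1.0, 0.3],
                [0.2, 0.1, 0.0, 0.1],
                [3.0, -2.0, -2.0, -2.0]]
print(mean_prediction_entropy(batch_logits))
```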
Gzuzpl4Jje
EMNLP_2023
1. The original tasks’ performance degenerates to some extent and underperforms the baseline of the Adapter, which indicates the negative influence of removing some parts of the original networks. 2. The proposed method may encounter a limitation if the users continuously add new languages because of the limited model capacity.
2. The proposed method may encounter a limitation if the users continuously add new languages because of the limited model capacity.
NIPS_2021_1222
NIPS_2021
Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustness. Many of these, especially Dapello et al., note the relevance with respect to stochasticity in the brain. I do not see how their additional analysis sheds new light on the mechanisms of robust perception or provides a better understanding of the role stochasticity plays in biological computation. To be clear - I think the paper is certainly worthy of publication and makes notable contributions. Just not all of the ones claimed in that sentence. 1.b) The authors note on lines 241-243 that “the two geometric properties show a similar dependence for the auditory (Figure 4A) and visual (Figure 4B) networks when varying the eps-sized perturbations used to construct the class manifolds.” I do not see this from the plots. I would agree that there is a shared general upward trend, but I do not agree that 4A and 4B show “similar dependence” between the variables measured. If nothing else, the authors should be more precise when describing the similarities. Clarifications: 2.a) The authors say on lines 80-82 that the center correlation was not insightful for discriminating model defenses, but then use that metric in figure 4 A&B. I’m wondering why they found it useful here and not elsewhere? Or what they meant by the statement on lines 80-82. 2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks? 2.c) The authors report mean capacity and width in figure 2. I think this is the mean across examples as well as across seeds. Is the STD also computed across examples and seeds? The figure caption says it is only computed across seeds. Is there a lot of variability across examples? 2.d) I am unsure why there would be a gap between the orange and blue/green lines at the minimum strength perturbation for the avgpool subplot in figure 2.c. At the minimum strength perturbation, by definition, the vertical axis should have a value of 1, right? And indeed in earlier layers at this same perturbation strength the capacities are equal. So why does the ResNet50 lose so much capacity for the same perturbation size from conv1 to avgpool? It would also be helpful if the authors commented on the switch in ordering for ATResNet and the stochastic networks between the middle and right subplots. General curiosities (low priority): 3.a) What sort of variability is there in the results with the chosen random projection matrix? I think one could construct pathological projection matrices that skew the MFTMA capacity and width scores. These are probably unlikely with random projections, but it would still be helpful to see the resilience of the metric to the choice of random projection. I might have missed this in the appendix, though. 3.b) There appears to be a pretty big difference in the overall trends of the networks when computing the class manifolds vs exemplar manifolds.
Specifically, I think the claims made on lines 191-192 are much better supported by Figure 1 than Figure 2. I would be interested to hear what the authors think in general (i.e. at a high/discussion level) about how we should interpret the class vs exemplar manifold experiments. Nitpick, typos (lowest priority): 4.a) The authors note on line 208 that “Unlike VOneNets, the architecture maintains the conv-relu-maxpool before the first residual block, on the grounds that the cochleagram models the ear rather than the primary auditory cortex.” I do not understand this justification. Any network transforming input signals (auditory or visual) would have to model an entire sensory pathway, from raw input signal to classification. I understand that VOneNets ignore all of the visual processing that occurs before V1. I do not see how this justifies adding the extra layer to the auditory network. 4.b) It is not clear why the authors chose a line plot in figure 4c. Is the trend as one increases depth actually linear? From the plot it appears as though the capacity was only measured at the ‘waveform’ and ‘avgpool’ depths; were there intermediate points measured as well? It would be helpful if they clarified this, or used a scatter/bar plot if there were indeed only two points measured per network type. 4.c) I am curious why there was a switch to reporting SEM instead of STD for figures 5 & 6. 4.d) I found typos on lines 104, 169, and the fig 5 caption (“10 image and”).
2.a) The authors say on lines 80-82 that the center correlation was not insightful for discriminating model defenses, but then use that metric in figure 4 A&B. I’m wondering why they found it useful here and not elsewhere? Or what they meant by the statement on lines 80-82.
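A cheap way to probe the reviewer's question 3.a, without re-running the full MFTMA analysis, is to draw several random Gaussian projections and look at the spread of a simple geometric proxy (here, mean pairwise distance distortion). This is only an illustrative stand-in for the actual capacity and width metrics, and the feature sizes are made up:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 512))              # hypothetical layer activations: 50 points, 512 dims

def mean_pairwise_distance(x):
    return float(np.mean([np.linalg.norm(a - b) for a, b in combinations(x, 2)]))

base = mean_pairwise_distance(features)
ratios = []
for seed in range(10):                              # 10 independent random projections to 64 dims
    proj = np.random.default_rng(seed).normal(size=(512, 64)) / np.sqrt(64)
    ratios.append(mean_pairwise_distance(features @ proj) / base)

print(f"distortion ratio across projections: mean={np.mean(ratios):.3f}, std={np.std(ratios):.3f}")
```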
NIPS_2019_629
NIPS_2019
- In my opinion, the setting and the algorithm lack a bit of originality and might seem like incremental combinations of methods for predicting graph labelings and online learning in a switching environment. Yet, the algorithm for graph labelings is efficient, new, and seems different from the existing ones. - Lower bounds and optimality of the results are not discussed. In the conclusion section, it is asked whether the loglog(T) can be removed. Does this mean that up to this term the bounds are tight? I would like more discussion of this. More comparisons with existing upper bounds and lower bounds without switches could be made, for instance. In addition, it could be interesting to plot the upper bound in the experiments, to see how tight the analysis is. Other comments: - Only bounds in expectation are provided. Would it be possible to get high-probability bounds? For instance by using ensemble methods as performed in the experiments. Some measure of the robustness could be added to the experiments (such as error bars or standard deviation) in addition to the mean error. - When reading the introduction, I thought that the labels were adversarially chosen by an adaptive adversary. It seems that the analysis is only valid when all labels are chosen in advance by an oblivious adversary. Am I right? This should maybe be clarified. - This paper deals with many graph notions and it is a bit hard to get into it but the writing is generally good though more details could sometimes be provided (definition of the resistance distance, more explanations on Alg. 1 with brief sentences defining A_t, Y_t,...). - How was alpha tuned in the experiments (as 1/(t+1) or optimally)? - Some possible extensions could be discussed (are they straightforward?): directed or weighted graph, regression problem (e.g., to predict the number of bikes in your experiment)... Typo: l 268: the sum should start at 1
- This paper deals with many graph notions and it is a bit hard to get into it but the writing is generally good though more details could sometimes be provided (definition of the resistance distance, more explanations on Alg. 1 with brief sentences defining A_t, Y_t,...).
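For context on the definition the reviewer asks for: the resistance distance is a standard graph quantity, commonly written in terms of the Moore-Penrose pseudoinverse $L^{+}$ of the graph Laplacian (this is the textbook form, not necessarily the exact notation of the paper):

```latex
% Resistance distance between vertices i and j of a connected graph with Laplacian L.
r(i, j) \;=\; L^{+}_{ii} + L^{+}_{jj} - 2\,L^{+}_{ij}
```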
NIPS_2016_499
NIPS_2016
- The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring causal predictions and the interventional data. If not, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to [10] but this is achieved by reducing the search space to the ancestral graphs. This means that the output of ACI has less information compared to the output of [10] that has a richer search space, i.e., DAGs. This is the price that has been paid to gain a better performance. How much information of a DAG is encoded in its corresponding ancestral graph? - The second rule in Lemma 2, i.e., Eq. (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z’ in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise.
- The proposed method reduces the computation time drastically compared to [10] but this is achieved by reducing the search space to the ancestral graphs. This means that the output of ACI has less information compared to the output of [10] that has a richer search space, i.e., DAGs. This is the price that has been paid to gain a better performance. How much information of a DAG is encoded in its corresponding ancestral graph?
NIPS_2016_9
NIPS_2016
Weakness: The authors do not provide any theoretical understanding of the algorithm. The paper seems to be well written. The proposed algorithm seems to work very well on the experimental setup, using both synthetic and real-world data. The contributions of the paper are enough to be considered for a poster presentation. The following concerns, if addressed properly, could raise it to the level of an oral presentation: 1. The paper does not provide an analysis of what type of data the algorithm works best on and on what type of data the algorithm may not work well. 2. The first claimed contribution of the paper is that unlike other existing algorithms, the proposed algorithm does not take as many points or does not need a priori knowledge about dimensions of subspaces. It would have been better if there were some empirical justification for this. 3. It would be good to show some empirical evidence that the proposed algorithm works better for the Column Subset Selection problem too, as claimed in the third contribution of the paper.
2. The first claimed contribution of the paper is that unlike other existing algorithms, the proposed algorithm does not take as many points or does not need a priori knowledge about dimensions of subspaces. It would have been better if there were some empirical justification for this.
NIPS_2016_93
NIPS_2016
- The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone down the intro and not call this language learning. It is rather feedback-driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what is explained in the last paragraph of the paper) so that the policy is not fixed? Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model are not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know in which cases such a model fails.
- The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here.
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be re-written. - Diversity should be toned down. - line 108, the first “f” should be “g” in “we fixed the form of ..” - extra “.” in the middle of a sentence in line 115. One Question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks has converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance.
- The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions:
IQ0BBfbYR2
ICLR_2025
1. Writing should be seriously improved. It is really tedious to get through the paper. Often terms are not defined properly and the reader really has to rely on the context to understand used terms. This is not possible always. See a list of writing issues below: * I find parts of Fig. 2 unclear. The caption should describe the key features of the model. Why is there a connection between Decoder and External classifier but no arrow. What is it supposed to mean? * line 224 Should describe $d, \kappa$ clearly and what they denote. Is the feature encoding coming from an external network, the classifier $f$ itself etc. What is the distance metric? Euclidean distance? * Implicit/Explicit classifier should be described clearly when describing CoLa-DCE in Sec. 4. What is their input, output domains, what are their roles etc. I guess one of them is $f$. I assume they originate from prior literature but they also seem central to your method so need clear concise description. * line 249: Should define $\mathcal{N}$ or state its the set of natural numbers (can be confused with normal distribution as you use it in Eq. 1). * line 259 Why is the external classifier modelled as p(x|y) (which should be for diffusion model). Wouldn't the classifier be modelled as p(y|x)? * The notations $h, g$ almost appear out of the blue. What are their input, output domains. It is the same issue with $\delta$ but it at least has some brief description. * line 260. You provide absolutely no explanation of notations about concepts, how they are represented, what the binary constraints denote exactly. It is difficult to understand $\lambda_1, ..., \lambda_k$ and $\theta_1, ..., \theta_k$. Are the lambda's just subset of natural numbers from 1 to K? Your notation for $\theta$ also does not seem consistent. For starters, line 236 has it going from 0 to k while at other places it is 1 to k. More importantly, Eq. 7 makes it seem like $\theta$'s denote subset of indices but they are supposed to be binary masks. I am not sure what are these representations exactly, beyond the basic idea that they control which concepts to condition on. * There are two terms for datasets (line 219, 226) $X', \hat{X}$. What is the difference between the two? Is the reference data classification training/validation dataset and the other test data? * line 222 "As the model perception of the data shall be represented, the class predictions of the model are used to determine class affiliation." Really odd phrasing, please make it more clear. * line 319-320 "Using the intermediate ... high flip ratios" I am not sure how you are drawing this conclusion from Tab. 1. Please elaborate on this how you come to this conclusion. * line 469 What is $attr$ supposed to denote exactly. The attribution map for a particular feature/concept? If yes, why are you computing absolute magnitude for relative alignment? Is it the Frobenius norm of the difference of attributions? It should be described more clearly. * Please explain more clearly what the "confidence" metric is? Is it the difference between classifier's probability for the initially predicted class before and after the modifications? * line 295 mentions l2 norm between original and counterfactual image. Was it supposed to be a metric in Tab. 1? 2. The quantitative metrics of CoLa-DCE seem weak. LDCE seems to clearly outperform CoLA-DCE on "Flip-ratio" and "Confidence" metrics while being close on FID.
* The notations $h, g$ almost appear out of the blue. What are their input, output domains. It is the same issue with $\delta$ but it at least has some brief description.
NIPS_2017_351
NIPS_2017
- As I said above, I found the writing / presentation a bit jumbled at times. - The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed). - I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational, which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify. - Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly. - Figure 3 is never referenced unless I missed it. Some things I'm curious about: - What values were learned for the linear coefficients for combining the marginalized potentials in equations (1)? It would be interesting if different modalities took advantage of different potential orders. - I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to the swap to the MCB / MCT decision computation modules? - The discussion of using sequential MCB vs a single MCT layer for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed?
- As I said above, I found the writing / presentation a bit jumbled at times.
ICLR_2022_537
ICLR_2022
1. The stability definition needs to be better justified, as the left side can be arbitrarily small under some construction of \tilde{g}. A more reasonable treatment is to make it also lower bounded. 2. One would expect to see a variety of tasks beyond link prediction where PE is important.
1. The stability definition needs to be better justified, as the left side can be arbitrarily small under some construction of \tilde{g}. A more reasonable treatment is to make it also lower bounded.
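To spell out what "make it also lower bounded" could look like, a two-sided (bi-Lipschitz-style) stability requirement on the positional-encoding map might read as below; the map name $\mathrm{PE}$, the distance $d$, and the constants are placeholders rather than the paper's own notation:

```latex
% Hedged sketch of a two-sided stability condition: perturbing the graph g into \tilde{g}
% can neither blow up nor collapse the positional encodings.
c_{1}\, d(g, \tilde{g}) \;\le\; \bigl\| \mathrm{PE}(g) - \mathrm{PE}(\tilde{g}) \bigr\| \;\le\; c_{2}\, d(g, \tilde{g}),
\qquad 0 < c_{1} \le c_{2}
```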
ARR_2022_1_review
ARR_2022
- Using original encoders as baselines might not be sufficient. In most experiments, the paper only compares with the original XLM-R or mBERT trained without any knowledge base information. It is unclear whether such encoders, fine-tuned towards the KB tasks, would actually perform comparably to the proposed approach. I would like to see experiments like just fine-tuning the encoders on the same dataset but with the MLM objective from their original pretraining and comparing with them. Such baselines can leverage input sequences as simple as `<s>X_s X_p X_o </s>` where one of them is masked w.r.t. MLM training. - The design of input formats is intuitive and lacks justification. Although the input formats for monolingual and cross-lingual links are designed to be consistent, it is hard to tell why the design would be chosen. As the major contribution of the paper, justifying the design choice matters. In other words, it would be better to see some comparisons over some variants, say something like `<s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>`, as wrapping tokens in the input sequence has been widely used in the community. - The abstract part is lengthy so some background and comparisons with prior work can be elaborated in the introduction and related work. Otherwise, they shift the perspective of the abstract, making it hard for the audience to catch the main novelties and contributions. - In line 122, triples denoted as $(e_1, r, e_2)$ would clearly show their tuple-like structure instead of sets. - In sec 3.2, the authors argue that the Prix-LM (All) model consistently outperforms the single model, hence the ability of leveraging multilingual information. Given the training data sizes differ a lot, I would like to see an ablation in which the model is trained on a mix of multilingual data with the same overall dataset size as the monolingual one. Otherwise, it is hard to justify whether the performance gain is from the large dataset or from the multilingual training.
- In line 122, triples denoted as $(e_1, r, e_2)$ would clearly show their tuple-like structure instead of sets.
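To make the suggested MLM baseline input concrete, here is a toy sketch of building `<s>X_s X_p X_o </s>` sequences with one element masked; the surface tokens and the `<mask>` symbol are schematic, and a real implementation would use the encoder's own tokenizer and special tokens:

```python
import random

def build_mlm_example(subject, predicate, obj, seed=None):
    """Return a masked triple sequence plus the gold value of the masked slot."""
    rng = random.Random(seed)
    slots = {"s": subject, "p": predicate, "o": obj}
    masked = rng.choice(list(slots))                              # mask one of subject/predicate/object
    tokens = [slots[k] if k != masked else "<mask>" for k in ("s", "p", "o")]
    return "<s> " + " ".join(tokens) + " </s>", slots[masked]

# Hypothetical KB triple; which slot gets masked varies by run.
print(build_mlm_example("Berlin", "capital_of", "Germany"))
# e.g. ('<s> Berlin capital_of <mask> </s>', 'Germany')
```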
fL8AKDvELp
EMNLP_2023
1. The paper needs a comprehensive analysis of sparse MoE, including the communication overhead (all-to-all). Currently, it's not clear where the performance gain comes from; basically, different numbers of experts incur different communication overheads. 2. The evaluation needs experiments on distributed deployment and a larger model. 3. For the arguments that the existing approach has two key limitations, the authors should present key experimental results for demonstration.
2. The evaluation needs experiments on distributed deployment and a larger model.
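A back-of-envelope model of the all-to-all volume mentioned in point 1 can make such an analysis concrete; the sizes below are made-up placeholders, and the formula ignores topology, overlap with compute, and framework-specific optimizations:

```python
# Rough per-layer all-to-all traffic for token dispatch in a sparse-MoE layer:
# each routed token's hidden vector is sent to its expert's device and the result is sent back.
def alltoall_bytes(tokens, hidden_dim, top_k, bytes_per_elem=2):
    dispatch = tokens * top_k * hidden_dim * bytes_per_elem   # activations sent to experts
    combine = dispatch                                        # expert outputs sent back
    return dispatch + combine

# Hypothetical batch of 8192 tokens, hidden size 1024, fp16, comparing top-1 vs top-2 routing.
for k in (1, 2):
    print(f"top-{k}: {alltoall_bytes(8192, 1024, k) / 2**20:.0f} MiB per MoE layer")
```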
lesQevLmgD
ICLR_2024
I believe the authors' results merit publication in a specialized journal rather than in ICLR. The main reasons are the following: 1. The authors do not give any compelling numerical evidence that their bound is tight or even "log-tight". 2. The authors' derivation falls into classical learning theory-based bounds, which, to the best of my knowledge, does not yield realistic bounds, unless Bayesian considerations are taken into account (e.g. Bayesian-PAC based bounds). 3. Even if one maintains that VC-dimension-style learning theory is an important part of the theory of deep learning, my hunch would be that the current work does not contain sufficient mathematical interest to be published in ICLR. My more minor comments are that 1. The introduction is very wordy and contains many repetitions of similar statements. 2. I found what I believe are various math typos, for instance around Lemma 3.5. I think n and m are used interchangeably. Furthermore, calligraphic R with an n subscript and regular R are also mixed. Similarly, capital and non-capital l are mixed in assumption 4.8. Runaway subscripts also appear many times in Appendix A2.
2. The authors' derivation falls into classical learning-theory-based bounds, which, to the best of my knowledge, do not yield realistic bounds unless Bayesian considerations are taken into account (e.g. Bayesian-PAC based bounds).
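For context on the distinction drawn in point 2, the two bound families differ roughly as follows; these are standard textbook forms stated only up to constants, not formulas quoted from the paper under review.

```latex
% Uniform-convergence (capacity-based) bound: with prob. >= 1 - \delta,
% simultaneously for every h in the hypothesis class H,
L(h) \;\le\; \hat{L}_n(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H})
      \;+\; \sqrt{\tfrac{\ln(1/\delta)}{2n}}

% PAC-Bayes (McAllester-style) bound: with prob. >= 1 - \delta,
% for every posterior \rho over H, with the prior \pi fixed in advance,
\mathbb{E}_{h\sim\rho}\,L(h) \;\le\; \mathbb{E}_{h\sim\rho}\,\hat{L}_n(h)
      \;+\; \sqrt{\tfrac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\!\bigl(2\sqrt{n}/\delta\bigr)}{2n}}
```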
NIPS_2022_947
NIPS_2022
1. Apart from the use of multiple pre-trained models, FedPCL is built on the ideas of prototypical learning and contrastive learning, which are not new in federated learning. 2. The performance of FedPCL heavily relies on the selection of different pre-trained models, limiting its applicability to wider areas. As shown in Table 4, the model accuracy is quite sensitive to the pre-trained models. This work adequately addressed the limitations. The authors developed a lightweight federated learning framework to reduce the computation and communication costs and integrated pre-trained models to extract prototypes for federated aggregation. This is a new attempt for federated learning.
2. The performance of FedPCL heavily relies on the selection of different pre-trained models, limiting its applicability to wider areas. As shown in Table 4, the model accuracy is quite sensitive to the pre-trained models. This work adequately addressed the limitations. The authors developed a lightweight federated learning framework to reduce the computation and communication costs and integrated pre-trained models to extract prototypes for federated aggregation. This is a new attempt for federated learning.
ICLR_2022_2323
ICLR_2022
Weakness: 1. The literature review is inaccurate, and connections to prior works are not sufficiently discussed. To be more specific, there are three connections, (i) the connection of (1) to prior works on multivariate unlabeled sensing (MUS), (ii) the connection of (1) to prior works in unlabeled sensing (US), and (iii) the connection of the paper to (Yao et al., 2021). (i) In the paper, the authors discussed this connection (i). However, the experiments shown in Figure 2 do not actually use the MUS algorithm of (Zhang & Li, 2020) to solve (1); instead the algorithm is used to solve the missing entries case. This seems to be an unfair comparison as MUS algorithms are not designed to handle missing entries. Did the authors run matrix completion prior to applying the algorithm of (Zhang & Li, 2020)? Also, the algorithm of (Zhang & Li, 2020) is expected to fail in the case of dense permutation. (ii) Similar to (i), the methods for unlabeled sensing (US) can also be applied to solve (1), using one column of B_0 at a time. There is an obvious advantage because some of the US methods can handle arbitrary permutations (sparse or dense), and they are immune to initialization. In fact, these methods were used in (Yao et al., 2021) for solving more general versions of (1) where each column of B has undergone arbitrary and usually different permutations; moreover, this can be applied to the d-correspondence problem of the paper. I kindly wish the authors consider incoporating discussions and reviews on those methods. (iii) Finally, the review on (Yao et al., 2021) is not very accurate. The framework of (Yao et al., 2021), when applied to (1), means that the subspace that contains the columns of A and B is given (when generating synthetic data the authors assume that A and B come from the same subspace). Thus the first subspace-estimation step in the pipeline of (Yao et al., 2021) is automatically done; the subspace is just the column space of A. As a result, the method of (Yao et al., 2021) can handle the situation where the rows of B are densely shuffled, as discussed above in (ii). Also, (Yao et al., 2021) did not consider only "a single unknown correspondence". In fact, (Yao et al., 2021) does not utilize the prior knowledge that each column of B is permuted by the same permutation (which is the case of (1)), instead it assumes every column of B is arbitrarily shuffled. Thus it is a more general situation of (1) and of the d-correspondence problem. Finally, (Yao et al., 2021) discusses theoretical aspects of (1) with missing entries, while an algorithm for this is missing until the present work. 2. In several places the claims of the paper are not very rigorous. For example, (i) Problem (15) can be solved via linear assignment algorithms to global optimality, why do the authors claim that "it is likely to fall into an undesirable local solution"? Also I did not find a comparison of the proposed approach with linear assignment algorithms. (ii) Problem (16) seems to be "strictly convex", not "strongly convex". Its Hessian has positive eigenvalues everywhere but the minimum eigenvalue is not lower bounded by some positive constant. This is my feeling though, as in the situation of logistic regression, please verify this. (iii) The Sinkhorn algorithm seems to use O(n^2) time per iteration, as in (17) there is a term C(hat{M_B}), which needs O(n^2) time to be computed. Experiments show that the algorithm needs > 1000 iterations to converge. 
Hence, in the regime where n << 1000 the algorithm might take much more time than O(n^2) (this is the regime considered in the experiments). Also, I did not see any report of running times. Thus I feel uncomfortable seeing the authors claim in Section 5 that "we propose a highly efficient algorithm". 3. Even though an error bound is derived in Theorem 1 for the nuclear norm minimization problem, there is no guarantee of success for the alternating minimization proposal. Moreover, the algorithm requires several parameters to tune, and is sensitive to initialization. As a result, the algorithm has very large variance, as shown in Figure 3 and Table 1. Questions: 1. The last terms $r + H(\pi_P)$ and $C(\pi_P)$ in (3) are very interesting. Could you provide some intuition for how they show up, and in particular give an example? 2. I find Assumption 1 not very intuitive, and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close? 3. Solving the nuclear norm minimization program seems to be NP-hard as it involves optimization over permutation matrices and a complicated objective. Is there any hardness result for this problem? Suggestions: The following experiments might be useful. 1. Sensitivity to permutation sparsity: As shown in the unlabeled sensing literature, the alternating minimization of (Abid et al., 2017) works well if the data are sparsely permuted. This might also apply to the proposed alternating minimization algorithm here. 2. Sensitivity to initialization: One could present the performance as a function of the distance of the initialization M^0 to the ground truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 such that $\|M^0 - M^*\|_F < c$ as initialization, and report the performance accordingly. One would expect the mean error and variance to increase as the quality of the initialization decreases. 3. Sensitivity to other hyper-parameters. Minor Comments on language usage: (for example) 1. "we typically considers" in the above of (7) 2. "two permutation" in the above of Theorem 1 3. "until converge" in the above of (14) 4. ...... Please proofread the paper and fix all language problems.
3. Sensitivity to other hyper-parameters. Minor Comments on language usage: (for example) 1. "we typically considers" in the above of (7) 2. "two permutation" in the above of Theorem 1 3. "until converge" in the above of (14) 4. ...... Please proofread the paper and fix all language problems.
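Regarding points 2(i) and 2(iii) of the review above: a linear objective over permutations, as in the review's Problem (15), can be solved to global optimality with the Hungarian method, whereas Sinkhorn-style updates give an entropic approximation at O(n^2) cost per iteration. A small sketch on a generic cost matrix (nothing here is taken from the paper's code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
C = rng.normal(size=(50, 50))          # generic assignment cost matrix

# Exact, globally optimal permutation via the Hungarian algorithm.
rows, cols = linear_sum_assignment(C)
exact_cost = C[rows, cols].sum()

# Entropy-regularized relaxation via Sinkhorn iterations: each iteration
# touches the full n x n kernel, hence O(n^2) time per iteration.
K = np.exp(-C / 0.1)
u = np.ones(C.shape[0])
for _ in range(1000):
    v = 1.0 / (K.T @ u)
    u = 1.0 / (K @ v)
P = np.diag(u) @ K @ np.diag(v)        # approximately doubly stochastic
print(exact_cost, (P * C).sum())
```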
ICLR_2022_1842
ICLR_2022
weakness, right? Sec. 4.2: just for clarity, is each object's bounding box (for ray intersection) computation axis-aligned with the object coordinate system or the world/scene coordinate system? Is there anything that constrains (in a soft or hard manner) the outgoing fractions to sum up/integrate to 1 or at most 1 for a given incoming light direction? Fig. 10: What exactly is N in this figure? N is used in the main text to refer to the number of objects and to the number of point samples along a ray, neither of which seems like the right parameter here. Minor suggestions for improvements: Fig. 7: I currently cannot see much in this figure, a comparison to a white/grey environment map would make it easier to tell that there is an effect. I'm not a fan of the equation two lines after Eq. 4. I understand what it's trying to say but I believe this needs to be changed to be mathematically correct, unless that makes a bunch of other equations messy. Also, why is it L_l instead of just L? That notation should be introduced beforehand. Fig. 8: Switching out columns 2 and 3 would make the difficult comparison between No Indirect and Full Model easier. There's a typo at the end of page 2: from from
4. I understand what it's trying to say but I believe this needs to be changed to be mathematically correct, unless that makes a bunch of other equations messy. Also, why is it L_l instead of just L? That notation should be introduced beforehand. Fig.
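On the bounding-box question in the review above, a common pattern — sketched here under my own assumptions, not necessarily what the paper implements — is to keep the box axis-aligned in object coordinates and transform the ray into that frame, which is equivalent to intersecting an oriented box in world coordinates.

```python
import numpy as np

def ray_aabb(origin, direction, box_min, box_max):
    # Standard slab test; assumes direction has no exactly-zero components.
    inv = 1.0 / direction
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    t_near = np.minimum(t0, t1).max()
    t_far = np.maximum(t0, t1).min()
    return t_far >= max(t_near, 0.0)

def ray_obb(origin, direction, box_min, box_max, world_to_obj):
    # world_to_obj is a hypothetical 4x4 world-to-object pose matrix;
    # transforming the ray makes the object-frame AABB test equivalent to
    # an oriented-box test in world/scene coordinates.
    o = (world_to_obj @ np.append(origin, 1.0))[:3]
    d = world_to_obj[:3, :3] @ direction
    return ray_aabb(o, d, box_min, box_max)
```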
aGH43rjoe4
ICLR_2024
I do have several queries/concerns however: - **a. Fixed time horizon**: The use of an MLP to convert the per-timestep embeddings into per-sequence Fourier coefficients means that you can only consider fixed-length sequences. This seems to me to be a real limitation, since often neural/behavioral data – especially naturalistic behavior – is not of a fixed length. This could be remedied by using an RNN or neural process in place of the MLP, so this is not catastrophic as far as I can tell. However, I at least expect to see this noted as a limitation of the method, and, preferably, substitute in an RNN or neural process for the MLP in one of the examples, just to concretely demonstrate that this is not a fundamental limitation. - **b. Hidden hyperparameters and scaling issues**: Is there a problem if the losses/likelihoods from the channels are “unbalanced”? E.g. if the behavioral data is 1080p video footage, and you have say 5 EEG channels, then a model with limited capacity may just ignore the EEG data. This is not mentioned anywhere. I think this can be hacked by including a $\lambda$ multiplier on the first term of (6) or raising one of the loss terms to some power (under some sensible regularization), trading off the losses incurred by each channel and making sure the model pays attention to all the data. I am not 100% sure about this though. Please can the authors comment. - **c. Missing experiments**: There are a couple of experiments/baselines that I think should be added. - Firstly, in Figure 3, I'd like to see a model that uses the data independently to estimate the latent states and reconstruction. It seems unfair to compare multimodal methods to methods that use just one channel. I’m not 100% sure what this would look like, but an acceptable baseline would be averaging the predictions of image-only and neuron-only models (co-trained with this loss). At least then all models have access to the same data, and it is your novel structure that is increasing the performance. - Secondly, I would like to see an experiment sweeping over the number of observed neurons in the MNIST experiment. If you have just one neuron, then performance of MM-GP-VAE should be basically equivalent to GP-VAE. If you have 1,000,000 neurons, then you should have near-perfect latent imputations (for a sufficiently large model), which can be attributed solely to the neural module. This should be a relatively easy experiment to add and is a good sanity check. - Finally, and similarly to above, i’d like to see an experiment where the image is occluded (half of the image is randomly blacked out). This (a) simulates the irregularity that is often present in neural/behavioral data (e.g. keypoint detection failed for some mice in some frames), and (b) would allow us to inspect the long-range “inference” capacity of the model, as opposed to a nearly-supervised reconstruction task. Again, these should be reasonably easy experiments to run. I’d expect to see all of these experiments included in a final version (unless the authors can convince me otherwise). - **d. Slightly lacking analysis**: This is not a deal-breaker for me, but the analysis of the inferred latents is somewhat lacking. I’d like to see some more incisive analysis of what the individual and shared features pull out of the data – are there shared latent states that indicate “speed”, or is this confined to the individual behavioral latent? Could we decode a stimulus type from the continuous latent states? 
How does decoding accuracy from each of the three different $z$ terms differ? etc. I think this sort of analysis is the point of training and deploying models like this, and so I was disappointed to not see any attempt at such an analysis. This would just help drive home the benefits of the method. ### Minor weaknesses / typographical errors: 1. Page 3: why are $\mu_{\psi}$ and $\sigma_{\psi}^2$ indexed by $\psi$? These are variational posteriors and are a function of the data; whereas $\psi$ are static model parameters. 2. Use \citet{} for textual citations (e.g. “GP-VAE, see (Casale et al., 2018).” -> “GP-VAE, see Casale et al. (2018).”) 3. The discussion of existing work is incredibly limited (basically two citations). There is a plethora of work out there tackling computational ethology/neural data analysis/interpretable methods. This notable weakens the paper in my opinion, because it paints a bit of an incomplete picture of the field, and actually obfuscates why this method is so appealing! I expect to see a much more thorough literature review in any final version. 4. Text in Figure 5 is illegible. 5. Only proper nouns should be capitalized (c.f. Pg 2 “Gaussian Process” -> “Gaussian process”), and all proper nouns should be capitalized (c.f. Pg 7 “figure 4(c)”). 6. Figure 1(a): Is there are sampling step to obtain $\tilde{\mu}$ and $\tilde{\sigma}^2$? This sample step should be added, because right now it looks like a deterministic map. 7. I think “truncate” is more standard than “prune” for omitting higher-frequency Fourier terms. 8. I find the use of “A” and “B” very confusing – the fact that A is Behaviour, and B is Neural? I’m not sure what better terms are. I would suggest B for Behavioural – and then maybe A for neural? Or A for (what is currently referred to as) behavioral, but be consistent (sometimes you call it “other”) and refer to it as Auxiliary or Alternative data, and then B is “Brain” data or something. 9. The weakest section in terms of writing is Section 3. The prose in there could do with some tightening. (It’s not terrible, but it’s not as polished as the rest of the text). 10. Use backticks for quotes (e.g. ‘behavioral modality’ -> ``behavioral modality’’).
- Finally, and similarly to above, i’d like to see an experiment where the image is occluded (half of the image is randomly blacked out). This (a) simulates the irregularity that is often present in neural/behavioral data (e.g. keypoint detection failed for some mice in some frames), and (b) would allow us to inspect the long-range “inference” capacity of the model, as opposed to a nearly-supervised reconstruction task. Again, these should be reasonably easy experiments to run. I’d expect to see all of these experiments included in a final version (unless the authors can convince me otherwise).
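On point (a) of the review above, a minimal sketch of the suggested fix: swap the per-sequence MLP for a recurrent encoder so that variable-length trials still map to a fixed set of Fourier coefficients. Module names and sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class SeqToFourier(nn.Module):
    """Map a padded batch of variable-length sequences to 2*K Fourier coefficients."""
    def __init__(self, in_dim, hidden=64, n_coeffs=10):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_coeffs)   # cosine + sine terms

    def forward(self, x, lengths):                     # x: (batch, T_max, in_dim)
        packed = nn.utils.rnn.pack_padded_sequence(
            x, lengths, batch_first=True, enforce_sorted=False)
        _, h = self.rnn(packed)                        # h: (1, batch, hidden)
        return self.head(h[-1])                        # (batch, 2 * n_coeffs)
```

The modality weighting raised in point (b) could be handled in the same spirit, e.g. `loss = recon_B + lam * recon_A + kl` with `lam` tuned on validation data (again an assumption, not the paper's objective).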
ICLR_2021_243
ICLR_2021
Weakness: 1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further. 2. The model involves many hyperparameters. Thus, the selection of the hyperparameters in the paper needs further explanation. 3. A brief conclusion of the article and a summary of this paper's contributions need to be provided. 4. Approaches that leverage noisy-label regularization and multi-label co-regularization were not reviewed or compared in this paper.
1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further.
NIPS_2018_775
NIPS_2018
#ERROR!
* Brau and Jiang. 3D human pose estimation via deep learning from 2D annotations. 3DV 2016.
ICLR_2021_1213
ICLR_2021
weakness of the paper. Then, I present my additional comments which are related to specific expressions in the main text, proof steps in the appendix etc. I would appreciate it very much if the authors could address my questions/concerns under "Additional Comments" as well, since they affect my assessment and understanding of the paper; consequently my score for the paper. Summary: • The paper focuses on convergence of two newly-proposed versions of AdaGrad, namely AdaGrad-window and AdaGrad-truncation, for the finite sum setting where each component is smooth and possibly nonconvex. • The authors prove convergence rates with respect to the number of epochs T, where in each epoch one full pass over the data is performed with respect to the well-known "random shuffling" sampling strategy. • Specifically, AdaGrad-window is shown to achieve an $\tilde{O}(T^{-1/2})$ rate of convergence, whereas AdaGrad-truncation attains $O(T^{-1/2})$ convergence, under component-wise smoothness and bounded gradients assumptions. Additionally, the authors introduce a new condition/assumption called the consistency ratio, which is an essential element of their analysis. • The paper explains the proposed modification to AdaGrad and provides the intuition for such adjustments. Then, the main results are presented followed by a proof sketch, which demonstrates the main steps of the theoretical approach. • In order to evaluate the practical performance of the modified adaptive methods in a comparative fashion, two sets of experiments were provided: training a logistic regression model on the MNIST dataset and a Resnet-18 model on the CIFAR-10 dataset. In these experiments, SGD, SGD with random shuffling, AdaGrad and AdaGrad-window were compared. Additionally, the authors plot the behavior of their proposed condition "consistency ratio" over epochs. Strengths: • I think epoch-wise analysis, especially for finite sum settings, could help provide insights into behaviors of optimization algorithms. For instance, it may enable to further investigate effect of batch size or different sampling strategies with respect to progress of the algorithms after every full pass of data. This may also help with comparative analysis of deterministic and stochastic methods. • I have checked the proof of Theorem 1 in detail and had a less detailed look at Theorems 2 and 3. I appreciate some of the technically rigorous sections of the analysis as the authors bring together analytical tools from different resources and re-prove certain results with respect to their adjustments. • The performance comparison in the paper is rather simple, but the authors try to provide a perspective on their consistency condition through numerical evidence. It gives some rough idea about how to interpret this condition. • The main text is written in a clear manner; the authors highlight their modification to AdaGrad and also highlight what their new "consistency condition" is. The proposed contributions of the paper are stated clearly although I do not totally agree with certain claims. One of the main theorems has a proof sketch which gives an overall idea about the authors' approach to proving the results. Weaknesses: • Although numerically the paper provides an insight into the consistency condition, it is not verifiable ahead of time. One needs to run a simulation to get some idea about this condition, although it still wouldn't verify the correctness. Since the authors did not provide any theoretical motivation for their condition, I am not fully convinced about this assumption.
For instance, the authors could argue about a specific problem setting in which this condition holds. • Theorem 3 (AdaGrad-truncation) sets the stepsize in a way that depends on knowledge of $r$. I couldn't figure out how it is possible to compute the value $r$ ahead of time. Therefore, I do not think this selection is practically applicable. Although I appreciate the theoretical rigor that goes into proving Theorem 3, I believe the concerns about computing $r$ weaken the importance of this result. If I am missing some important point, I would like to kindly ask the authors to clarify it for me. • The related work listed in Table 1 within the group "Adaptive Gradient Methods" proves \emph{iteration-wise} convergence rates for variants of Adam and AdaGrad, which I would call the usual practice. This paper argues about \emph{epoch-wise} convergence. The authors claim improvement over those prior papers although the convergence rate quantifications are not based on the same grounds. All of those methods consider the more general expectation minimization setting. I would suggest the authors make this distinction clear and highlight the iteration complexities of such methods while comparing previous results with theirs. In my opinion, total complexity comparison is more important than rate comparison for the setting that this paper considers. • As a follow-up to the previous comment, the related work could have highlighted related results in the finite sum setting. Total complexity comparisons with respect to the finite sum setting are also important. There exist results for finite-sum nonconvex optimization with variance reduction, e.g., Stochastic Variance Reduction for Nonconvex Optimization, 2016, Reddi et al. I believe it is important to comparatively evaluate the results of this paper with those of such prior work. • Numerically, the authors only compare against AdaGrad and SGD. I would say this is rather a theory paper, but it claims rate improvements, for which I previously stated my doubts. Therefore, I would expect comparisons against other methods as well, which is of interest to the ICLR community in my opinion. • This is a minor comment that should be easy to address. For ICLR, supplementary material is not mandatory to check; however, this is a rather theoretical paper and the correctness/clarity of proofs is important. I would say the authors could have explained some of the steps of their proof in a more open way. There are some crucial expressions which were obtained without enough explanation. Please refer to my additional comments in the following part. Additional Comments: • I haven't seen the definition that $x_{t,m+1} = x_{t+1,1}$ in the main text. It appears in the supplements. Could you please highlight this in the main text, as it is important for indexing in the analysis? • The second bullet point of your contributions claims that the "[consistency] condition is easy to verify". I do not agree with this, as I cannot see how someone could guarantee/compute the value $r$ ahead of time or even after observing any sequence of gradients. Could you please clearly define what verification means in this context? • In Assumption A3, I understand that $G_t e_i = g_{t,i}$ and $G_t e = \sum_{i=1}^{m} g_{t,i}$. I believe the existing notation makes it complicated for the reader to understand the implications of this condition. • In the paragraph right above Section 4.2, the authors state that the presence of the second moments $V_{t,i}$ enables adaptive methods to have improved rates over SGD through Lemma 3.
Could the authors please explain this in detail? • In Corollary 1, the authors state that "the computational complexity is nearly $\tilde{O}(m^{5/2} n d^{2} \epsilon^{-2})$". A similar statement exists in Corollary 2. Could you please explain what "nearly" means in this context? • In Lemma 8 in the supplements, $aa^{T}$ and $bb^{T}$ in the main expression of the lemma are rank-1 matrices. This lemma has been used in the proof of Lemma 4. As far as I understood, Lemma 8 is used in such a way that $aa^{T}$ or $bb^{T}$ correspond to something like $g_{t,j}^{2} - g_{t-1,j}^{2}$. I am not sure if this construction fits into Lemma 8 because, for instance, the expression $g_{t,j}^{2} - g_{t-1,j}^{2}$ is a difference of two rank-1 matrices, which could have rank $\leq 2$. Hence, there may not exist some vector $a$ such that $aa^{T} = g_{t,j}^{2} - g_{t-1,j}^{2}$, and hence Lemma 8 may not be applicable. If I am mistaken in my judgment I am 100% open for a discussion with the authors. • In the supplements, in section "A.1.7 PROOF OF MAIN THEOREM 1", in the expression following the first line, I didn't understand how you obtained the last upper bound on $\nabla f(x_{t,i})$. Could you please explain how this is obtained? Score: I would like to vote for rejecting the paper. I praise the analytically rigorous proofs for the main theorems and the use of a range of tools for proving the key lemmas. Epoch-wise analysis for stochastic methods could provide insight into the behavior of algorithms, especially with respect to real-life experimental settings. However, I have some concerns: I am not convinced about the importance of the consistency ratio or that it is a verifiable condition. The related work in Table 1 has iteration-wise convergence in the general expectation-minimization setting, whereas this paper considers a finite sum structure with epoch-wise convergence rates. The comparison with related work is not sufficient/convincing from this perspective. (Minor) I would suggest the authors conduct a more comprehensive experimental study with comparisons against multiple adaptive/stochastic optimizers. More experimental insight might be better for demonstrating the consistency ratio. Overall, due to the reasons and concerns stated in my review, I vote for rejecting this paper. I am open for further discussions with the authors regarding my comments and their future clarifications. ======================================= Post-Discussions ======================================= I would like to thank the authors for their clarifications. After exchanging several responses with the authors and considering the other reviews, I decided to keep my score. Although the authors come up with a more meaningful assumption, i.e., SGC, compared to their initial condition, I am not fully convinced about the contributions with respect to prior work: the SGC assumption is a major factor in the improved rates and it is a very restrictive assumption to make in practice. Although this paper proposes theoretical contributions regarding adaptive gradient methods, the experiments could have been a bit more detailed. I am not sure whether the experimental setup fully displays the improvements of the proposed variants of AdaGrad.
• I think epoch-wise analysis, especially for finite sum settings, could help provide insights into behaviors of optimization algorithms. For instance, it may enable to further investigate effect of batch size or different sampling strategies with respect to progress of the algorithms after every full pass of data. This may also help with comparative analysis of deterministic and stochastic methods.
NIPS_2016_395
NIPS_2016
- I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. 
I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work.
6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper.
NIPS_2017_35
NIPS_2017
- The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation. - The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The comparison with [5] is not completely fair as [5] is designed for a more complex problem, i.e., no knowledge of the camera pose parameters. - Some explanations are a little vague. For example, the last paragraph of Section 3 (lines 207-210) on the single image case. Questions/comments: - In the Recurrent Grid Fusion, have you tried ordering the views sequentially with respect to the camera viewing sphere? - The main weakness to me is the numerical evaluation. I understand that the hypothesis of clean segmentation of the object and known camera pose limit the evaluation to purely synthetic settings. However, it would be interesting to see how the architecture performs when the camera pose is not perfect and/or when the segmentation is noisy. Per category results could also be useful. - Many typos (e.g., lines 14, 102, 161, 239 ), please run a spell-check.
- The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation.
NIPS_2022_2605
NIPS_2022
Weakness: 1) In the beginning of the paper, the authors often mention that previous works lack flexibility compared to their work. It is not clear what this means, which makes it harder to understand their explanation. 2) The choice of 20 distribution sets is not clearly justified. Can we control the number of distribution sets for each class? What if only a small number of distribution sets is selected? 3) The role of the Transfer Matrix T is not discussed or elaborated. 4) It is not clear how to form the target distribution H. How do you formulate H? 5) There is no discussion of how to generate x_H from H and what x_H consists of. 6) Despite the significant improvement, it is not clear how the proposed method boosts the transferability of the adversarial examples. As per my understanding, the authors briefly addressed the limitations and negative impact in their work.
2) The choice of 20 distribution sets is not clearly justified. Can we control the number of distribution sets for each class? What if only a small number of distribution sets is selected?
NIPS_2018_356
NIPS_2018
The paper doesn't have one message. Theorem 3 is not empirically investigated. TYPOS, ETC - Abstract. To state that the paper "draws useful connections" is uninformative if the abstract doesn't state *what* connections are drawn. - Theorem 1. Is subscript k (overloaded later in Line 178, etc) necessary? It looks like one can simply restate the theorem in terms of alpha -> infinity? - Line 137 -- do the authors confuse VAEs with GANs' mode collapse here? - The discussion around equation (10) is very terse and not very clearly explained. - Line 205. True posterior over which random variables? - Line 230 deserves an explanation, i.e. why the conditional p(x_missing | x_observed, x) is easily computable. - Figure 3: which Markov chain line is red and which is blue? A label would help.
- Theorem 1. Is subscript k (overloaded later in Line 178, etc) necessary? It looks like one can simply restate the theorem in terms of alpha -> infinity?
NIPS_2022_1637
NIPS_2022
1. The examples of scoring systems in the Introduction seem out of date; there are many newer and widely recognized clinical scoring systems. The paper should also briefly introduce the traditional scoring-system framework and its differences from the proposed method in methodology and performance. 2. As shown in Figure 3, the performance improvement of the proposed methods seems not so significant; the biggest improvement on the bank dataset was ~0.02. Additionally, using tables to directly show the key improvements may be more intuitive and detailed. 3. Although there are extensive experiments and discussion on performance, in my opinion the most significant improvement would be efficiency, and there are few discussions or ablation experiments on efficiency. 4. The model AUC can assess the model's discriminative ability, i.e., the probability that a positive case is scored higher than a negative case, but it can hardly show the consistency between the predicted score and the actual risk. However, this consistency may be more crucial for a clinical scoring system (as opposed to a classification task). Therefore, related studies are encouraged to report calibration curves to show this agreement. Wouldn't this better demonstrate the feasibility of the generated scoring system? The difference between the traditional method and the proposed method could also be discussed in this paper.
4. The model AUC can assess the model's discriminative ability, i.e., the probability that a positive case is scored higher than a negative case, but it can hardly show the consistency between the predicted score and the actual risk. However, this consistency may be more crucial for a clinical scoring system (as opposed to a classification task). Therefore, related studies are encouraged to report calibration curves to show this agreement. Wouldn't this better demonstrate the feasibility of the generated scoring system? The difference between the traditional method and the proposed method could also be discussed in this paper.
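A reliability diagram of the kind requested in point 4 can be produced directly from held-out predictions; the sketch below uses scikit-learn's `calibration_curve` on placeholder arrays, so the variable names and data are illustrative rather than tied to the paper.

```python
import numpy as np
from sklearn.calibration import calibration_curve

# y_true: binary outcomes on a held-out set; y_score: the scoring system's
# output rescaled to [0, 1]. Both arrays here are synthetic placeholders.
rng = np.random.default_rng(0)
y_score = rng.uniform(size=1000)
y_true = rng.binomial(1, y_score)          # perfectly calibrated toy data

frac_pos, mean_pred = calibration_curve(y_true, y_score, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted risk ~{p:.2f}  ->  observed event rate {f:.2f}")
# Agreement between the two columns (points near the diagonal in a plot)
# indicates calibration, which AUC alone does not capture.
```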
2z9o8bMQNd
EMNLP_2023
- So difficult to follow the contribution of this paper. And it looks like an incremental engineering paper. The proposed method has been introduced in many papers, such as [1] Joshi, A., Bhat, A., Jain, A., Singh, A., & Modi, A. (2022, July). COGMEN: COntextualized GNN-based Multimodal Emotion Recognition. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 4148-4164). - The related work should be updated with more recent related works. - The experimental section needs some significance tests to further verify the effectiveness of the method put forward in the paper. - For the first time appearing in the text, the full name must be written, and abbreviations must be written in parentheses. When it appears in the abstract, it needs to be written once, and when it appears in the main text, it needs to be repeated again, that is, the full name+parentheses (abbreviations) should appear again. - Error analysis plays a crucial role in evaluating model performance and identifying potential issues. We encourage the authors to conduct error analysis in the paper and provide detailed explanations of the model's performance under different scenarios. Error analysis will aid in guiding subsequent improvements and expansions of the ERC research. - Writing mistakes are common across the overall paper, which could be found in “Typos, Grammar, Style, and Presentation Improvements”.
- Error analysis plays a crucial role in evaluating model performance and identifying potential issues. We encourage the authors to conduct error analysis in the paper and provide detailed explanations of the model's performance under different scenarios. Error analysis will aid in guiding subsequent improvements and expansions of the ERC research.
ACL_2017_768_review
ACL_2017
. First, the classification model used in this paper (concat + linear classifier) was shown to be inherently unable to learn relations in "Do Supervised Distributional Methods Really Learn Lexical Inference Relations?" ( Levy et al., 2015). Second, the paper makes superiority claims in the text that are simply not substantiated in the quantitative results. In addition, there are several clarity and experiment setup issues that give an overall feeling that the paper is still half-baked. = Classification Model = Concatenating two word vectors as input for a linear classifier was mathematically proven to be incapable of learning a relation between words (Levy et al., 2015). What is the motivation behind using this model in the contextual setting? While this handicap might be somewhat mitigated by adding similarity features, all these features are symmetric (including the Euclidean distance, since |L-R| = |R-L|). Why do we expect these features to detect entailment? I am not convinced that this is a reasonable classification model for the task. = Superiority Claims = The authors claim that their contextual representation is superior to context2vec. This is not evident from the paper, because: 1) The best result (F1) in both table 3 and table 4 (excluding PPDB features) is the 7th row. To my understanding, this variant does not use the proposed contextual representation; in fact, it uses the context2vec representation for the word type. 2) This experiment uses ready-made embeddings (GloVe) and parameters (context2vec) that were tuned on completely different datasets with very different sizes. Comparing the two is empirically flawed, and probably biased towards the method using GloVe (which was a trained on a much larger corpus). In addition, it seems that the biggest boost in performance comes from adding similarity features and not from the proposed context representation. This is not discussed. = Miscellaneous Comments = - I liked the WordNet dataset - using the example sentences is a nice trick. - I don’t quite understand why the task of cross-lingual lexical entailment is interesting or even reasonable. - Some basic baselines are really missing. Instead of the "random" baseline, how well does the "all true" baseline perform? What about the context-agnostic symmetric cosine similarity of the two target words? - In general, the tables are very difficult to read. The caption should make the tables self-explanatory. Also, it is unclear what each variant means; perhaps a more precise description (in text) of each variant could help the reader understand? - What are the PPDB-specific features? This is really unclear. - I could not understand 8.1. - Table 4 is overfull. - In table 4, the F1 of "random" should be 0.25. - Typo in line 462: should be "Table 3" = Author Response = Thank you for addressing my comments. Unfortunately, there are still some standing issues that prevent me from accepting this paper: - The problem I see with the base model is not that it is learning prototypical hypernyms, but that it's mathematically not able to learn a relation. - It appears that we have a different reading of tables 3 and 4. Maybe this is a clarity issue, but it prevents me from understanding how the claim that contextual representations substantially improve performance is supported. Furthermore, it seems like other factors (e.g. similarity features) have a greater effect.
1) The best result (F1) in both table 3 and table 4 (excluding PPDB features) is the 7th row. To my understanding, this variant does not use the proposed contextual representation; in fact, it uses the context2vec representation for the word type.
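The symmetry argument in the review above is easy to check numerically: each listed similarity feature is invariant to swapping the two words, so on its own it cannot encode the direction of entailment. A small sanity check with generic vectors (no claim about the paper's exact feature set):

```python
import numpy as np

rng = np.random.default_rng(0)
L, R = rng.normal(size=50), rng.normal(size=50)   # stand-ins for two word vectors

def sym_features(u, v):
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    euc = np.linalg.norm(u - v)                   # |u - v| = |v - u|
    return np.array([cos, euc])

# Identical feature values regardless of which word is the hypernym:
print(np.allclose(sym_features(L, R), sym_features(R, L)))   # True
```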
NIPS_2021_952
NIPS_2021
- Some important points about the method and the experiments are left unclear (see also questions below). - The writing could be improved (see also Typos & Additional Questions below) - Multiple runs and significance tests are missing. This makes it hard to judge the improvements (Table 2 & 3). Most Important Questions - Line 156: What is q_ij^k here exactly? I thought q_ij was a state flag, such as “2” or “0”. But you tokenize it and encode it, so it sounds more like it is something like “Copy(snow)”? (If it is the latter, then what is the meaning of tokenizing and encoding something like “Len(9)”?) - 192: What exactly is storyline and what do you need it for? - The baseline takes the predicate logic constraints as input: How does T6 know what to do with these inputs? Was the model trained on this but without the NRETM module? Can you give an example of what the input looks likes? How do these inputs guide which sentences should be generated? Looking at the datsset, it feels like one would need at least the first 2 sentences or so to know how to continue. Maybe this information is now in your constraints but it would be important to understand what they look like and how they were created. Is there no other suitable baseline for this experiment? - What is the overhead of your method compared to standard decoding approaches? (you mention GBS can only be used with T5-Base, so your method is more efficient? That would be important to point out) - What happens if the decoding process cannot find a sequence that satisfies all constraint? - Document-level MT: How do you know at test time whether the system translates a particular sentence or not? - How many sentences are misaligned by Doc-mBART25? What are the s-BLEU and d-BLEU values on the subset that NRETM aligns correctly and Doc does not? - Why was NEUROLOGIC not used as a comparison baseline? - What is dynamic vs static strategy? In which experiment did you show that dynamic works better than static (from conclusion)? Typos & Additional Questions - Line 40: you could mention here that the examples will be translated into logic forms in the next section. - Paragraph starting at line 53: Why did you choose these datasets? How will they help evaluate the proposed approach? - Line 75: a and b should be bold faced? - 83: “that used” -> “that are used” - 83: “details” -> “for details” - Paragraph at line 86: At this point, the state matrix is unclear. What are the initial values? How can the state matrix be used to understand if a constraint is satisfied or not? - 98: “take[s]” & “generate[s]” - 108: “be all” -> “all be” - Paragraph at line 101: What is dynamic vs static strategy? - Paragraph at line 109: The state flag explanation would greatly benefit from an example. Does q_i refer to whether a particular U_i is satisfied? - Eq 2: What is the meaning of N? Can it change depending on the definition of U_k? Does it mean this constraint is not relevant for x_i? - 133: Figure 1 should be Figure 2 - Figure 2: What exactly do the “&” rows track? - Figure 2: Is the state flag matrix equal to the state matrix? If not, how do you go from one to the other? - Line 146: What does the inf in the superscript signify? - 177: What is the symbolic operator? - Paragraph at line 194: Without understanding what a storyline is, it is not clear what the constraints are. An example might be helpful here. - Line 204: what is the ROUGH-L metric? Do you mean ROUGE-L? - Line 223: How do you obtain the morphological inflections for the concepts? 
- 237: @necessity [of] integrating” - 3.3: How exactly is the document-level MT done? Is the entire input document the input to T5? - 293: “because” typo - 3.4 where/how exactly is the sentence index used? The paper's broader impact section discusses general potential benefits and issues of text generation (from large language models). It could maybe be tailored a bit better by discussing what effect this proposed work would have on the potential benefits and issues.
- Document-level MT: How do you know at test time whether the system translates a particular sentence or not?
NIPS_2016_117
NIPS_2016
weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA work. It is notable that it is useful in training very deep networks (e.g. 100 layers), but it is not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle, on a larger dataset, this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems better supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for the claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos; there are no line numbers in the draft so I haven't itemized them. - Tables 1, 2, 3: the legends should be longer and clarify whether the numbers are % errors or % correct (MNIST and CIFAR respectively, presumably). - Figure 2 right: I found it difficult to distinguish between the different curves. Maybe make use of line styles (e.g. dashed lines) or add color. - Figure 3: it is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper, which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers).
- It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper, which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers).
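For readers unfamiliar with the distinction the review draws, here is a toy numpy sketch of a direct-feedback-alignment-style update for one hidden layer: the output error is projected back through a fixed random matrix instead of the transpose of the forward weights. This illustrates the general idea only; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, lr = 20, 32, 5, 0.1

W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_hid))
B1 = rng.normal(scale=0.1, size=(d_hid, d_out))   # fixed random feedback matrix

x = rng.normal(size=d_in)
y = np.zeros(d_out); y[2] = 1.0                   # toy one-hot target

h = np.tanh(W1 @ x)
out = W2 @ h
e = out - y                                       # output error

# Backprop would use W2.T @ e; DFA replaces it with the fixed projection B1 @ e.
delta_h = (B1 @ e) * (1 - h ** 2)
W2 -= lr * np.outer(e, h)
W1 -= lr * np.outer(delta_h, x)
```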
ICLR_2021_1849
ICLR_2021
I see in this paper are: - Although there is a clear and formal explanation of why it is not possible to discriminate among classes from different tasks when there is no access to data from those previous classes, I am not fully convinced that the set of parameters kept from previous classes, and used in regularization-based approaches, does not represent this data to some extent. In particular, there is no clear argument for the claim on page 5: "However, by hypothesis, \omega_{t-1} does not model the data distribution from C_{t-1} and therefore it does not model data distribution from C_{t-1} classes.". I would like to see some discussion regarding how faithfully a set of parameters \theta_{t-1} would represent the S' set. - In terms of the experiments, I consider the number of tasks quite limited. To be convinced I would like to see several tasks (at least 10) and sequential results in terms of tasks learned rather than epochs. Questions for authors: Please address my comments on the weaknesses above.
- In terms of the experiments, I consider the number of tasks quite limited. To be convinced I would like to see several tasks (at least 10) and sequential results in terms of tasks learned rather than epochs. Questions for authors: Please address my comments on the weaknesses above.
ICLR_2021_738
ICLR_2021
---: 1: This paper ensembles some existing compression/NAS approaches to improve the performance of BNNs, which is not significant enough. The dynamic routing strategy (conditional on input) has been widely explored. For example, the proposed dynamic formulation in this paper has been used in several studies [2, 3]. Varying width and depth has been extensively explored in the quantization literature, especially in AutoML based approaches [Shen et al. 2019, Bulat et al. 2020], to design high capacity quantized networks. The effectiveness of the group convolution in BNNs was initially studied in [1]. Later works also incorporate the group convolution into the search space in NAS+BNNs methods [e.g., Bulat et al. 2020a] to reduce the complexity. 2: In each layer, the paper introduces a full-precision fully-connected layer to decide which expert to use. However, for deeper networks, such as ResNet-101, it will include ~100 full-precision layers, which can be very expensive especially in BNNs. As a result, it deteriorates the benefits and practicability of the dynamic routing mechanism. 3: The actual speedup, memory usage and energy consumption on edge devices (e.g., CPU/GPU/FPGA) or IoT devices must be reported. Even though the full-precision operations only account for a small amount of computations in statistics, it can have a big influence on the efficiency on platforms like FPGA. 4: This paper proposes to learn the binary gates via gradient-based optimization while exploring the network structure via EfficientNet manner. Then the problem comes. This paper can formulate the <width, depth, groups and layer arrangement> as configuration vectors and optimize them using policy gradients and so on, with the binary gates learning unified in a gradient-based framework. So what is the advantage of the "semi-automated" method of EfficientNet over the gradient-based optimization? In addition, how about learning a policy agent via RL to predict the gates? I encourage the authors can add comparsions and discussions with these alternatives. 5: More experiments on deeper networks (e.g., ResNet-50) and other network structures (e.g., MobileNet) are needed to further strengthen the paper. References: [1] MoBiNet: A Mobile Binary Network for Image Classification, in WACV 2020. [2] Dynamic Channel Pruning: Feature Boosting and Suppression, in ICLR2019. [3] Learning Dynamic Routing for Semantic Segmentation, in CVPR2020.
5: More experiments on deeper networks (e.g., ResNet-50) and other network structures (e.g., MobileNet) are needed to further strengthen the paper. References: [1] MoBiNet: A Mobile Binary Network for Image Classification, in WACV 2020. [2] Dynamic Channel Pruning: Feature Boosting and Suppression, in ICLR2019. [3] Learning Dynamic Routing for Semantic Segmentation, in CVPR2020.
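To illustrate what point 2 of the review refers to, here is a hypothetical sketch of a per-layer router: a full-precision fully-connected gate on pooled features that picks (or weights) one of E expert branches. The module and its dimensions are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ExpertRouter(nn.Module):
    """Full-precision gate that produces a distribution over experts for one layer."""
    def __init__(self, channels, num_experts):
        super().__init__()
        self.gate = nn.Linear(channels, num_experts)   # kept in full precision

    def forward(self, x):                              # x: (B, C, H, W)
        pooled = x.mean(dim=(2, 3))                    # global average pooling
        return torch.softmax(self.gate(pooled), dim=-1)

# In a ~100-layer backbone such as ResNet-101, one such gate per layer adds
# ~100 full-precision matmuls per forward pass, which is the overhead the
# review is concerned about for binary networks on edge hardware.
```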
ICLR_2022_1522
ICLR_2022
Weakness: The overall novelty seems limited since the instance-adaptive method is from existing work with no primary changes. Here are some main questions and concerns: 1). How many optimization steps are used to produce the final reported performance in Figure.1 as well as in some other figs and tables? 2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison? Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison: Guo Lu, et al., "Content Adaptive and Error Propagation Aware Deep Video Compression." ECCV 2020.
2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison? Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison: Guo Lu, et al., "Content Adaptive and Error Propagation Aware Deep Video Compression." ECCV 2020.
NIPS_2022_1505
NIPS_2022
Prior work has already studied the claimed contributions. Poor comparison with the literature on accessing privacy risks. Weak evaluations. Detailed Comments The idea of evaluating the risk of membership inference under data poisoning attacks is interesting. As more and more data is collected from various sources, the privacy risks of machine learning models trained on such data is an important topic. 1. Contributions were shown by the prior work However, data poisoning for increasing privacy risks has already been initially studied by Mahloujifar et al. [1], and all the contributions (claimed from Line 41 to Line 52) have already been shown by Tramer et al. [2]. Moreover, the paper uses the techniques and tools for measuring the membership inference risks already known as meaningless by Carlini et al. [3]. Thus, I believe this paper is largely detached from the state-of-the-art privacy studies, and unfortunately, the contributions are the repetition of what we have known so far. [1] Mahloujifar, et al., Property Inference from Poisoning, IEEE Security and Privacy, 2022. [2] Tramer et al., Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets, Preprint, 2022. [3] Carlini et al., Membership Inference Attacks From First Principles, IEEE Security and Privacy, 2022. Note: The studies I mentioned had appeared 3-12 months before the NeurIPS submission deadline and were even accepted before then, so I wouldn’t review this paper as concurrent work. 2. Poor comparison to the prior work My second concern is that the paper just combines two threat models (data poisoning and membership inference attacks) while it largely ignores important research questions in the community, such as: RQ 1. Why does this poisoning work? RQ 1. What are the training samples that become more vulnerable after poisoning? RQ 2. What does it mean to increase the AUC? RQ 3. Why are clean-label poisoning attacks important? (As the paper mentioned in the introduction, sanitizing the training data is not feasible.) RQ 4. If someone wants to mitigate this attack, what can this person do? which (those questions) are partially already answered in the prior work [2, 3]. 3. Weak Evaluation My last concern is that there is unclear interpretation of the results in the evaluation section: Q1. (Line 257) I am unclear why the clean-label poisoning attack can be considered an “approximate” version of the dirty-label poisoning attack in the feature space? As shown in visualization (Figure 5), it seems that clean-label attacks and dirty-label attacks cause a completely different impact on the models. If this is true, wouldn’t it make more sense in Sec 3 to present a single attack with different objectives? Q2. (Line 261) I am also unclear how this paper measures the distributional differences between D_train and D_shadow. I believe it’s still a hard question to quantify the distributional differences and actively studied in domain adaption and robustness, so I don’t think we can compare. Q3. (Line 284) I am a bit confused about the fine-tuning scenario. Is it the case where we take an ImageNet pre-trained model and fine-tune it on CIFAR10? Then why don’t the attacker make membership inference on ImageNet instead of attacking CIFAR10? Isn’t it easier to spot poisoning samples if we inject them into the training data for fine-tuning? Q4. (Line 304) I am unclear about the connection between the presented attacks and adversarial training. 
Adversarial training crafts adversarial examples in each training iteration and updates the model parameters, while this attack just injects a set of static poisons into the training data and trains on it. My lesser concern is that the paper conducts offensive research, contaminating the training data to increase privacy risks, but does not discuss any ethical concerns when a miscreant uses this attack. At the very least, I think it is worth talking about some potential defense mechanisms, but the paper just concludes that defenses are future work.
3. Weak Evaluation My last concern is that the interpretation of the results in the evaluation section is unclear:
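To make the AUC concern (RQ 3) above concrete, here is a minimal sketch with synthetic attack scores, not the paper's attack or data, illustrating the evaluation point from Carlini et al. [3]: an average-case AUC can look close to random while the true-positive rate at a very low false-positive rate, the regime that matters for privacy, is substantial.

```python
# Synthetic membership-inference scores only; illustrates AUC vs. TPR at low FPR.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
member_scores = np.concatenate([rng.normal(0, 1, 950), rng.normal(6, 1, 50)])  # a few highly exposed members
nonmember_scores = rng.normal(0, 1, 1000)

labels = np.concatenate([np.ones(1000), np.zeros(1000)])
scores = np.concatenate([member_scores, nonmember_scores])

fpr, tpr, _ = roc_curve(labels, scores)
print("AUC:", roc_auc_score(labels, scores))              # ~0.53, looks almost random
print("TPR at FPR <= 0.1%:", tpr[fpr <= 1e-3].max())      # ~0.05, i.e. 50x the FPR: a strong attack
```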
NIPS_2016_417
NIPS_2016
1. Most of the human function learning literature has used tasks in which people never visualize data or functions. This is also the case in naturalistic settings where function learning takes place, where we have to form a continuous mapping between variables from experience. All of the tasks that were used in this paper involved presenting people with data in the form of a scatterplot or functional relationship, and asking them to evaluate lines applied to those axes. This task is more akin to data analysis than the traditional function learning task, and much less naturalistic. This distinction matters because performance in the two tasks is likely to be quite different. In the standard function learning task, it is quite hard to get people to learn periodic functions without other cues to periodicity. Many of the effects in this paper seem to be driven by periodic functions, suggesting that they may not hold if traditional tasks were used. I don't think this is a major problem if it is clearly acknowledged and it is made clear that the goal is to evaluate whether data-analysis systems using compositional functions match human intuitions about data analysis. But it is important if the paper is intended to be primarily about function learning in relation to the psychological literature, which has focused on a very different task. 2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity. Would adding periodicity to the spectral kernel be enough to allow it to capture all of these results at a similar level to the explicitly compositional model? 3. Some of the details of the models are missing. In particular the grammar over kernels is not explained in any detail, making it hard to understand how this approach is applied in practice. Presumably there are also probabilities associated with the grammar that define a hypothesis space of kernels? How is inference performed?
2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity. Would adding periodicity to the spectral kernel be enough to allow it to capture all of these results at a similar level to the explicitly compositional model?
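On question 2 about periodicity versus compositionality, a minimal sketch (my own construction with scikit-learn, not the authors' kernel grammar or code) of what a compositional kernel hypothesis looks like next to a single stationary kernel; comparing the marginal likelihoods of variants like these is one way to probe whether periodicity alone explains the gap.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)[:, None]
y = np.sin(2.0 * x[:, 0]) + 0.1 * x[:, 0] + 0.05 * rng.standard_normal(100)

# Compositional hypothesis: smooth trend + locally periodic component + noise
compositional = RBF(5.0) + RBF(5.0) * ExpSineSquared(length_scale=1.0, periodicity=3.1) + WhiteKernel(1e-2)
# Non-compositional baseline: a single stationary kernel + noise
baseline = RBF(1.0) + WhiteKernel(1e-2)

for name, kernel in [("compositional", compositional), ("baseline", baseline)]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)
    print(name, "log marginal likelihood:", round(gp.log_marginal_likelihood_value_, 2))
```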
NIPS_2022_532
NIPS_2022
1. Imitation Learning: The proposed method needs to be trained by behavioral cloning, which means 1) it requires a carefully designed algorithm (e.g., ODA-T/B/K) to generate the supervised data set. 2) More importantly, the data generated by ODA with a time limit L is indeed not a perfect teacher for behavioral cloning. Since the ODAs are not designed and optimized for bounded time performance, their behavior (the state-action pairs) for the first L time units is not the optimal policy under time limit L. Since the generic algorithm procedure of ODA can be encoded as an MDP, is it possible to use reinforcement learning to train the model? Would the RL-based approach find a better policy for bounded time performance that is aware of the time limit T? 2. Performance with Different Time Limits: It seems that the proposed method is both trained and tested with a single fixed time limit T = 1000s for all problems. However, in practice, the applications could have very different run time limits. Will the proposed method generalize well to different time limits (such as 1/10/100/2000/5000s)? 3. Comparison with PMOCO: PMOCO is the only learning-based approach in the comparison, but it has a very small cardinality (feasible points) for most problems. Its approximated Pareto front is also relatively sparse (which means a small set of solutions) in Figure A.3. This result is a bit counter-intuitive. In my understanding, PMOCO is a construction-based neural combinatorial optimization algorithm, of which one important advantage is the very fast run time. The PMOCO paper [1] reports it can generate 101 solutions for 100 MOKP(2-100) instances (hence 10,100 solutions in total) in only 15s, and 10,011 solutions for 100 MOTSP(3-100) instances (hence 1,001,100 solutions in total) in 33 minutes (~2000s). In this work, with a large time limit of 1,000s, I think PMOCO should be able to generate a dense set of solutions for each instance. In addition, while a dataset with 100 instances could provide enough supervised ODA state-action pairs (with a time limit L = 1000s for each instance) for imitation learning, it is far from enough for PMOCO's RL-based training. Since PMOCO does not require any supervised data and the MOKP instances can be easily generated on the fly, is it more suitable to train PMOCO under the same wall-clock time as the proposed method? [1] Pareto set learning for neural multi-objective combinatorial optimization. ICLR 2022. The limitations of this work are 1) the requirement of ODAs and a well-designed IP solver; 2) it loses the guarantee for finding the whole Pareto front. They have been properly discussed in the paper (see remark at the end of Section 4.1 and Conclusion). I do not see any potential negative societal impact of this work.
2) it loses the guarantee for finding the whole Pareto front. They have been properly discussed in the paper (see remark at the end of Section 4.1 and Conclusion). I do not see any potential negative societal impact of this work.
NIPS_2020_3
NIPS_2020
- Unlike the Tandem Model [4,5] and cVAE-based methods, the proposed method uses gradient updates and is therefore slow. The authors acknowledge this in the manuscript and study the method as a function of inference budget. - The sampling performed to obtain different initializations x_0 seems important for the convergence to the optimum. This is not experimentally evaluated carefully on the proposed benchmarks, except for Tab. 1 in the supplementary, where it is compared to sampling from a uniform distribution.
- The sampling performed to obtain different initializations x_0 seems important for the convergence to the optimum. This is not experimentally evaluated carefully on the proposed benchmarks, except for Tab. 1 in the supplementary, where it is compared to sampling from a uniform distribution.
ARR_2022_138_review
ARR_2022
1. The paper needs further polish to make it easier for readers to follow. 2. In Table 6, the improvement of the method is marginal and unstable. 3. The motivation of this new task is not strong enough to convince the reader. Is it a necessary intermediate task for document summarization and text mining (as stated in L261)? 4. It directly reverses the table-to-text settings and then conducts the experiments on four existing table-to-text datasets. More analysis of the involved datasets is required, such as the number of output tables and the size/schema of the output tables. Questions: 1. L68-L70, is there any further explanation of the statement "the schemas for extraction are implicitly included in the training data"? 2. How to generate the table content that is not shown in the text? 3. Why not merge Table 1 and Table 2? They are both about the statistics of the datasets used in experiments. 4. What’s the relation between the text-to-table task and the vanilla summarization task? 5. How to determine the number of output table(s)? Appendix C doesn't provide an answer to this. 6. What’s the version of BART in Table 3 and Table 4? Suggestions: 1. The font size of Figure 2 and Figure 3 is too small. Typos: 1. L237: Text-to-table -> text-to-table 2. L432: "No baseline can be applies to all four datasets" is confusing. 3. Table 3: lOur method -> Our method
2. How to generate the table content that is not shown in the text?
ICLR_2022_2327
ICLR_2022
1. Technical novelty is limited. The proposed framework is essentially a transformer variant. Although this work is probably the first to apply a Transformer-like model to few-shot font generation, there exist works that have attempted to apply such models in closely related tasks, like style transfer [1]. 2. Evaluation is insufficient. For quantitative comparison, only IoU and classification accuracy are provided. It would be more convincing to provide comparison results on FID and SSIM to show the effectiveness of the proposed framework. 3. References and baseline methods are missing. [2] and [3] are both proposed for font generation but are not mentioned. 4. Experimental details are not clear. (1) How to conduct the classification task on content, as mentioned in Section 4.4? (2) As mentioned in Section 3.3.3, the discriminators are conditional. What is the input of the discriminators? 5. More insights are needed. It would be better to move the discussion in the Appendix to the main body of the manuscript. References: [1] StyleFormer: Real-time Arbitrary Style Transfer via Parametric Style Composition, ICCV 2021 [2] Multi-Content GAN for Few-Shot Font Style Transfer, CVPR 2018 [3] DG-Font: Deformable Generative Networks for Unsupervised Font Generation, CVPR 2021
4. Experimental details are not clear. (1) How to conduct the classification task on content, as mentioned in Section 4.4?
NIPS_2021_2257
NIPS_2021
- Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn’t, it’s an informative baseline to show where these self-supervised methods stand compared to a fully supervised pre-trained network. - The discussion in section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on \emph{combined} datasets. Can the SSL model make use of different kinds of data? - The authors compared two-crop and multi-crop augmentation in section 4, and observed that multi-crop augmentation yielded better performance. One important missing factor is the (possible) computation overhead of multi-crop strategies. My estimation is that it would increase the computation complexity (i.e., slow down training). Therefore, one could argue that if we could train the two-crop baseline for a longer period of time it would yield better performance as well. To make the comparison fair, the computation overhead must be discussed. It can also be seen from Figure 7, for the KNN-MoCo, that the extra positive samples are fed into the network \emph{that takes the back-propagated gradients}. It will drastically increase training complexity as the network performs not only the forward pass but also the backward pass. - Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment’s policy is obtained by supervised training on ImageNet. Information leaking is likely. Questions - In L114 the authors concluded that for linear classification the pretraining dataset should match the target dataset in terms of being object- or scene-centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it goes back again to whether an SSL model can learn better representations by combining two datasets. - In L157 the authors discussed that for transfer learning potentially only low- and mid-level visual features are useful. My intuition is that low- and mid-level features are rather easy to learn. Then how does it explain the model’s transferability increasing when we scale up pre-training datasets? Or the recent success of CLIP? Is it possible that \emph{only} MoCo learns low- and mid-level features? Minor things that don’t play any role in my ratings: - “i.e.” -> “i.e.,”, “e.g.” -> “e.g.,” - In Eq.1, it’s better to write L_{contrastive}(x) = instead of L_{contrastive}. Also, should the equation be normalized by the number of positives? - L241 setup paragraph is overly complicated for an easy-to-explain procedure. L245/246, the use of x+ and x is very confusing. - It’s better to explain that “nearest neighbor mining” in the intro is to mine nearest neighbors in a moving embedding space within the same dataset. Overall, I like the objective of the paper a lot and I think the paper is trying to answer some important questions in SSL. But I have some reservations about confidently recommending acceptance due to the concerns written in the “weakness” section, because this is an analysis paper and analysis needs to be rigorous. I’ll be more than happy to increase the score if those concerns are properly addressed in the feedback. The authors didn't discuss the limitations of the study.
I find no potential negative societal impact.
- Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment’s policy is obtained by supervised training on ImageNet. Information leaking is likely. Questions - In L114 the authors concluded that for linear classification the pretraining dataset should match the target dataset in terms of being object- or scene-centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it goes back again to whether an SSL model can learn better representations by combining two datasets.
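To put a rough number on the multi-crop overhead concern raised above, a back-of-the-envelope sketch (my own; the 2x224 + 6x96 crop configuration is an assumed SwAV-style setting, not necessarily the one used in the paper), using total pixels processed as a proxy for encoder FLOPs:

```python
# Relative forward-pass cost of two-crop vs. multi-crop augmentation,
# approximated by the total number of pixels pushed through the encoder.
two_crop = 2 * 224 * 224                   # 2 global views
multi_crop = 2 * 224 * 224 + 6 * 96 * 96   # 2 global + 6 local views (assumed SwAV-style)

ratio = multi_crop / two_crop
print(f"multi-crop / two-crop compute ratio ~ {ratio:.2f}")
# ~1.55x, so a fair comparison would give the two-crop baseline ~55% more iterations
# or report wall-clock training time next to accuracy.
```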
NIPS_2022_1807
NIPS_2022
Weakness: 1. The authors should provide more description of the wavelet transforms in this paper. It is hard for me to understand the major idea in this paper before learning some necessary background on wavelet whitening, wavelet coefficients, and so on. 2. It would be better for the authors to display the performance of accelerating SGMs by involving some other baselines with a different perspective, such as “optimizing the discretization schedule or by modifying the original SGM formulation” [16, 15, 23, 46, 36, 31, 37, 20, 10, 25, 35, 45]
2. It would be better for the authors to display the performance of accelerating SGMs by involving some other baselines with a different perspective, such as “optimizing the discretization schedule or by modifying the original SGM formulation” [16, 15, 23, 46, 36, 31, 37, 20, 10, 25, 35, 45]
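As background for point 1 above, a minimal sketch of what the wavelet coefficients refer to (standard PyWavelets usage on a random image; generic background, not the paper's whitening scheme): a single-level 2D discrete wavelet transform splits an image into one low-frequency and three high-frequency sub-bands, and is exactly invertible.

```python
import numpy as np
import pywt

image = np.random.rand(64, 64)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")      # approximation + horizontal/vertical/diagonal details
print(cA.shape, cH.shape, cV.shape, cD.shape)    # each sub-band is 32x32
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(reconstructed, image))         # True: the transform is invertible
```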
NIPS_2018_276
NIPS_2018
Strengths: * This is the first inconsistency analysis for random forests. (Verified by a quick Google Scholar search.) * Clearly written to make results (mostly) approachable. This is a major accomplishment for such a technical topic. * The analysis is relevant to published random forest variations; these include papers published at ICDM, AAAI, SIGKDD. Weaknesses: * Relevance to researchers and practitioners is a little on the low side because most people are using supervised random forest algorithms. * The title, abstract, introduction, and discussion do not explain that the results are for unsupervised random forests. This is a fairly serious omission, and casual readers would remember the wrong conclusions. This must be fixed for publication, but I think it would be straightforward to fix. Officially, NIPS reviewers are not required to look at the supplementary material. Because of having only three weeks to review six manuscripts, I was not able to make the time during my reviewing. So I worry that publishing this work would mean publishing results without sufficient peer review. DETAILED COMMENTS * p. 1: I'm not sure it is accurate to say that deep, unsupervised trees grown with no subsampling are a common setup for learning random forests. It appears in Geurts et al. (2006) as a special case, sometimes in mass estimation [1, 2], and sometimes in Wei Fan's random decision tree papers [3-6]. I don't think these are used very much. * You may want to draw a connection between Theorem 3 and isolation forests [7] though. I've heard some buzz around this algorithm, and it uses unsupervised, deep trees with extreme subsampling. * l. 16: "random" => "randomized" * l. 41: Would be clearer with a forward pointer to the definition of deep. * l. 74: "ambient" seems like the wrong word choice * l. 81: Is there a typo here? Exclamation point after \thereexists is confusing. * l. 152; l. 235: I think this mischaracterizes Geurts et al. (2006), and the difference is important for the impact stated in Section 4. Geurts et al. include a completely unsupervised tree learning as a special case, when K = 1. Otherwise, K > 1 potential splits are generated randomly and unsupervised (from K features), and the best one is selected *based on the response variable*. The supervised selection is important for low error on most data sets. See Figures 2 and 3; when K = 1, the error is usually high. * l. 162: Are random projection trees really the same as oblique trees? * Section 2.2: very useful overview! * l. 192: Typo? W^2? * l. 197: No "Eq. (2)" in paper? * l. 240: "parameter setup that is widely used..." This was unclear. Can you add references? For example, Lin and Jeon (2006) study forests with adaptive splitting, which would be supervised, not unsupervised. * Based on the abstract, you might be interested in [8]. REFERENCES [1] Ting et al. (2013). Mass estimation. Machine Learning, 90(1):127-160. [2] Ting et al. (2011). Density estimation based on mass. In ICDM. [3] Fan et al. (2003). Is random model better? On its accuracy and efficiency. In ICDM. [4] Fan (2004). On the optimality of probability estimation by random decision trees. In AAAI. [5] Fan et al. (2005). Effective estimation of posterior probabilities: Explaining the accuracy of randomized decision tree approaches. In ICDM. [6] Fan et al. (2006). A general framework for accurate and fast regression by data summarization in random decision trees. In KDD. [7] Liu, Ting, and Zhou (2012). Isolation-based anomaly detection.
ACM Transactions on Knowledge Discovery from Data, 6(1). [8] Wager. Asymptotic theory for random forests. https://arxiv.org/abs/1405.0352
* The title, abstract, introduction, and discussion do not explain that the results are for unsupervised random forests. This is a fairly serious omission, and casual readers would remember the wrong conclusions. This must be fixed for publication, but I think it would be straightforward to fix. Officially, NIPS reviewers are not required to look at the supplementary material. Because of having only three weeks to review six manuscripts, I was not able to make the time during my reviewing. So I worry that publishing this work would mean publishing results without sufficient peer review. DETAILED COMMENTS * p.
iQHL76NqJT
ICLR_2024
1. In the Introduction, the authors assert that "i) To the best of our knowledge, we are the first to learn node embeddings using the abstention-based GAT architecture." This claim seems overstated. 2. In Section 3.1, the authors introduce NodeCwR-Cov and mention that "There are two more fully connected layers after the softmax layer (with 512 nodes and one node) to model the selection function g." The meaning of "having 512 nodes and one node" is unclear in this context. Additionally, the selection function threshold is set to 0.5, but the rationale behind choosing this value and its impact on the model or performance is not explained. This threshold serves to filter eligible candidates. It is essential to consider the accuracy of these candidates for each threshold, as they significantly impact the overall performance. 3. The presentation of results in tables and figures is unclear. For instance, in Table 1, the meanings of Cov and LS are not explained. The experimental analysis lacks depth and clarity. 4. GAT is chosen as the backbone for the proposed model. How does it compare to other graph neural network models? 5. In my opinion, the contribution of this paper appears somewhat limited, and the proposed model seems incremental in its approach.
5. In my opinion, the contribution of this paper appears somewhat limited, and the proposed model seems incremental in its approach.
NIPS_2018_297
NIPS_2018
-- * Although the method is intended for noisy crowd-labeling, none of the experiments actually includes truly crowd-labeled annotations. Instead, all labels are simulated as if from the true two-coin model, so it is difficult to understand how the model might perform on data actually generated by human labelers. * The claimed difference between the fully-Bayesian inference of BayesSCDC and the point-estimation of global parameters in SCDC seems questionable to me... without showing the results of multiple independent runs and breaking down the differences more finely to separate q(z,x) issues from global parameter issues, it's tough to be sure that this difference isn't due to poor random initialization, the q(z,x) difference, or other confounding issues. Originality -- The key novelty claimed in this paper seems to be the shared mixture model architecture used to explain both observed features (via a deep Gaussian model) and observed noisy pairwise annotations (via a principled hierarchical model from [16]). While I have not seen this exact modeling combination before, the components themselves are relatively well understood. The inference techniques used, while cutting edge, are used more or less in an "out-of-the-box" fashion by intelligently combining ideas from recent papers. For example, the recognition networks for non-conjugate potentials in BayesSCDC come from Johnson et al. NIPS 2016 [12], or the amortized inference approach to SCDC with marginalization over discrete local variables from Kingma, Mohamed, Rezende, and Welling [13]. Overall, I'm willing to rate this as just original enough for NIPS, because of the technical effort required to make all these work in harmony and the potential for applications. However, I felt like the paper had a chance to offer more compelling insight about why some approaches to variational methods work better than others, and that would have really made it feel more original. Significance -- The usage of noisy annotations to guide unsupervised modeling is of significant interest to many in the NIPS community, so I expect this paper will be reasonably well-received, at least by folks interested in clustering with side information. I think the biggest barriers to widespread understanding and adoption of this work would be the lack of real crowd-sourced data (all experiments use simulated noisy pairwise annotations) and helping readers understand exactly why the BayesSCDC approach is better than SCDC alone when so much changes between the two methods. Quality Issues -- ## Q1) Correctness issues in pair-wise likelihood in Eq. 2 In the pair-wise model definition in Sec. 2.2, a few things are unclear, potentially wrong: * The 1/2 exponent is just wrong as a poor post-hoc correction to the symmetry issue. It doesn't result in a valid distribution over L (e.g. that integrates to unity over the support of all binary matrices). A better correction in Eq. 2 would be to restrict the sum to those pairs (i,j) that satisfy i < j (assuming no self edges allowed). * Are self-edges allowed? That is, is L_11 or L_22 a valid entry? The sum over pairs i,j in Eq. 2 suggests so, but I think logically self-edges should maybe be forbidden. ## Q2) Correctness issue in formula for mini-batch unbiased estimator of pair-wise likelihood In lines 120-122, given a minibatch of S annotations, the L_rel term is computed by reweighting a minibatch-specific sum by a scale factor N_a / |S|, so that the term has a similar magnitude to the full-dataset term.
However, the N_a term as given is incorrect. It should count the total number of non-null observations in L. Instead, as written it counts the total number of positive entries in L. ## Q3) Differences between SCDC and BayesSCDC are confusing, perhaps useful to break down more finely The two presented approximation approaches, SCDC and BayesSCDC, seem to differ on several axes, so any performance difference is hard to attribute to one change. First, SCDC assumes a more flexible q(x, z) distribution, while BayesSCDC assumes a mean-field q(x)q(z) with a recognition network for a surrogate potential for p(o|x). Second, SCDC treats the global GMM parameters as point estimates, while BayesSCDC infers a q(\mu, \Sigma) and q(\pi). I think these two axes should be explored independently. In particular, I suggest presenting 3 versions of the method: * the current "SCDC" method * the current "BayesSCDC" method * a version which does q(x)q(z) with a recognition network for a surrogate potential for p(o|x) (Eq. 10), but point-estimates the global parameters. The new 3rd version should enjoy the fast properties of BayesSCDC (each objective evaluation doesn't require marginalizing over all z values), but be more similar to SCDC. Clarity ------- The paper reads reasonably well. The biggest issue in clarity is that some key hyperparameters required to reproduce experiments are just not provided (e.g. the Dirichlet concentration for prior on \pi, the Wishart hyperparameters, etc.). These absolutely need to be in a final version. Other reproducibility concerns: * what is the procedure for model initialization? * how many initializations of each method are allowed? how do you choose the best/worst? accuracy on training?
* how many initializations of each method are allowed? how do you choose the best/worst? accuracy on training?
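A toy numerical illustration of the reweighting point in Q2 above (my own construction, not the paper's code): the minibatch estimate is unbiased for the full pairwise term only when the scale factor counts all observed (non-null) annotations, positive and negative alike.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.choice([-1, 0, 1], size=(200, 200), p=[0.05, 0.9, 0.05])  # 0 = not annotated
obs = np.argwhere(L != 0)
N_a = len(obs)                        # all non-null annotations, positive and negative
per_pair_ll = rng.normal(size=N_a)    # stand-in for per-annotation log-likelihood terms

full_sum = per_pair_ll.sum()
batch = rng.choice(N_a, size=32, replace=False)
estimate = (N_a / 32) * per_pair_ll[batch].sum()   # unbiased: E[estimate] = full_sum
print(full_sum, estimate)
```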
lja4JMesmC
ICLR_2025
The biggest issue of the paper is the lack of depth. While it ablates the impact of each of the algorithmic components, the authors spent little effort trying to understand why each of them works or to compare them against existing methods. 1. It’s not clear what makes EP successful. - I strongly suspect the performance gain is mostly due to the fine-tuning of the connector module. The critical experiment of simply having both the connector and the LLM (LoRA params) trainable is missing. - Additionally, an experiment comparing EP with prefix tuning [1] would tell whether it’s necessary to condition the prefix (additional tokens to the LLM’s embedding space) on the image at all to get good performance. Essentially, I need to see experiments showing me EP > fine-tuning the original VLM’s connector + prefix tuning to be convinced it’s novel. - I also don’t buy the claim that fine-tuning the Vision model in a VLM will distort vision-language alignment at all. If fine-tuning the Vision model is harmful, wouldn’t the trained LoRA weights be more harmful as well? A controlled experiment where the vision encoder is also trained is needed. I am confident this will make EP perform even better. - Finally, other works with the same core methodology should be discussed. For example, Graph Neural Prompting [2] builds a knowledge graph based on the prompt and multiple choice candidates and generates a conditional prefix to prompt the LLM. I think the idea is extremely similar to EP. 2. Regarding RDA: this is essentially a fancy way of saying knowledge distillation but no relevant papers are cited. Regarding implementation, the author mentions gradient detachment. If I understood it correctly, this just means the TSM, or the “teacher”, is not trained while the goal is to train the student. Shouldn’t this be the default setting anyway? 3. Contrastive Response Tuning: as part of the core methodology, the paper should compare its effectiveness against existing methods, such as contrastive decoding [3][4]. Issues mentioned above should be addressed. Otherwise this work should aim for a more application-oriented venue. Notation issues. - In equations (1), (2), (3), (5), (6), why is there a min() operator on the left hand side? The author seems to mix it up with the argmin notation. I think the author should remove the min() and avoid argmin()-like notation since not all parameters are trained. Minor grammar issues - For example, Takeaway #1: TSM features can prompts (prompt) VLMs to generate desired responses. References: [1] Li, Xiang Lisa, and Percy Liang. "Prefix-Tuning: Optimizing Continuous Prompts for Generation." Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021. [2] Tian, Yijun, et al. "Graph neural prompting with large language models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 17. 2024. [3] Leng, Sicong, et al. "Mitigating object hallucinations in large vision-language models through visual contrastive decoding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [4] Favero, Alessandro, et al. "Multi-modal hallucination control by visual information grounding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
3. Contrastive Response Tuning: as part of the core methodology, the paper should compare its effectiveness against existing methods, such as contrastive decoding [3][4]. Issues mentioned above should be addressed. Otherwise this work should aim for a more application-oriented venue. Notation issues.
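For reference on point 3, a minimal sketch of the contrastive-decoding idea in [3] (VCD-style; the exact formulation and the value of alpha vary by paper, and the arrays here are placeholders rather than real model logits):

```python
import numpy as np

def contrastive_decode(logits_with_image, logits_without_image, alpha=1.0):
    # Amplify evidence grounded in the image and suppress the language-only prior.
    contrasted = (1 + alpha) * logits_with_image - alpha * logits_without_image
    contrasted -= contrasted.max()                 # for numerical stability
    probs = np.exp(contrasted)
    return probs / probs.sum()

rng = np.random.default_rng(0)
vocab_size = 8
print(contrastive_decode(rng.normal(size=vocab_size), rng.normal(size=vocab_size)))
```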
NIPS_2022_655
NIPS_2022
1. How to get a small degree bias from a clear community structure needs more explanation. Theorems 1 and 2 prove that GCL conforms to a clearer community structure via intra-community concentration and inter-community scatter, but its relationship with degree bias is not intuitive enough. 2. There is some confusion in the theoretical analysis. Why is the supremum in Definition 1 \gamma(\frac{B}{\hat{d}_{\min}^k})^{\frac{1}{2}}? Based on this definition, how to prove that the proposed GRADE reduces this supremum? 3. There is a lack of a significance test in Table 1. Despite the weaknesses mentioned above, I believe that this paper is worth publishing. They consider an important degree-bias problem in the graph domain, given that node degrees of real-world graphs often follow a long-tailed power-law distribution. And they show an exciting finding that GCL is more stable w.r.t. the degree bias, and give a preliminary explanation for the underlying mechanism. Although the improvement does not seem significant in Table 1, they may inspire more future research on this promising solution.
1. How to get a small degree bias from a clear community structure needs more explanation. Theorems 1 and 2 prove that GCL conforms to a clearer community structure via intra-community concentration and inter-community scatter, but its relationship with degree bias is not intuitive enough.
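For context on how degree bias is usually quantified in practice, a small synthetic sketch (entirely my own construction: I assume higher-degree nodes are classified more accurately, which is the typical empirical pattern, not a result from this paper): the bias is simply the accuracy gap across degree buckets.

```python
import numpy as np

rng = np.random.default_rng(0)
degrees = rng.zipf(2.0, size=5000)                              # long-tailed, power-law-like degrees
p_correct = np.clip(0.6 + 0.05 * np.log1p(degrees), 0.0, 0.95)  # assumption: higher degree -> easier
correct = rng.random(5000) < p_correct

for lo, hi in [(1, 2), (3, 10), (11, np.inf)]:
    mask = (degrees >= lo) & (degrees <= hi)
    print(f"degree {lo}-{hi}: accuracy = {correct[mask].mean():.3f} (n = {mask.sum()})")
```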
pUOesbrlw4
ICLR_2024
1. The paper is lacking a clear and precise definition of unlearning. It is important to show the definition of unlearning that you want to achieve through your algorithm. 2. The proposed algorithm is an empirical algorithm without any theoretical guarantees. It is important for unlearning papers to provide unlearning guarantees against an adversary. 3. The approach is very similar to this method (http://proceedings.mlr.press/v130/izzo21a/izzo21a.pdf) applied on each layer, which is not cited. 4. A simple baseline is just applying all the unlearning algorithms mentioned in the paper to the last layer vs the entire model. This comparison is missing. 5. All the unlearning verifications are only shown w.r.t. the accuracy of the model or the confusion matrix; however, the information is usually contained in the weights of the model, hence other metrics like membership inference attacks or re-training time after forgetting should be considered. 6. The authors should also consider applying this method to a linear perturbation of the network, as in those settings you will be able to get theoretical guarantees in regards to the proposed method, and also get better results. 7. Since the method is applied on each layer, the authors should provide a plot of how different weights of the model move, for instance plot the relative weight change after unlearning to see which layers are affected the most after unlearning.
7. Since the method is applied on each layer, the authors should provide a plot of how different weights of the model move, for instance plot the relative weight change after unlearning to see which layers are affected the most after unlearning.
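A minimal sketch of the diagnostic asked for in point 7 (a generic PyTorch helper on a toy model, not the authors' architecture or unlearning method): compute the relative weight change per layer between the model before and after unlearning.

```python
import torch
import torch.nn as nn

def relative_layer_change(before, after):
    # Per-parameter-tensor relative change ||w_after - w_before|| / ||w_before||.
    return {
        name: (torch.norm(w1 - w0) / (torch.norm(w0) + 1e-12)).item()
        for (name, w0), (_, w1) in zip(before.named_parameters(), after.named_parameters())
    }

torch.manual_seed(0)
before = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
after = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
after.load_state_dict(before.state_dict())
with torch.no_grad():
    after[2].weight.add_(0.1 * torch.randn_like(after[2].weight))  # stand-in for an unlearning update

print(relative_layer_change(before, after))
```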
ACL_2017_128_review
ACL_2017
----- I'm not very convinced by the empirical results, mostly due to the lack of details of the baselines. Comments below are ranked by decreasing importance. - The proposed model has two main parts: sentence embedding and substructure embedding. In Table 1, the baseline models are TreeRNN and DCNN; they are originally used for sentence embedding, but one can easily take the node/substructure embedding from them too. It's not clear how they are used to compute the two parts. - The model uses two RNNs: a chain-based one and a knowledge-guided one. The only difference in the knowledge-guided RNN is the addition of a "knowledge" vector from the memory in the RNN input (Eqn 5 and 8). It seems completely unnecessary to me to have separate weights for the two RNNs. The only advantage of using two is an increase of model capacity, i.e. more parameters. Furthermore, what are the hyper-parameters / size of the baseline neural networks? They should have comparable numbers of parameters. - I also think it is reasonable to include a baseline that just inputs additional knowledge as features to the RNN, e.g. the head of each word, NER results, etc. - Any comments / results on the model's sensitivity to parser errors? Comments on the model: - After computing the substructure embeddings, it seems very natural to compute an attention over them at each word. Is there any reason to use a static attention for all words? I guess as it is, the "knowledge" is acting more like a filter to mark important words. Then it is reasonable to include the baseline suggested above, i.e. inputting additional features. - Since the weight on a word is computed by inner product of the sentence embedding and the substructure embedding, and the two embeddings are computed by the same RNN/CNN, doesn't it mean nodes / phrases similar to the whole sentence get higher weights, i.e. all leaf nodes? - The paper claims the model generalizes to different knowledge but I think the substructure has to be represented as a sequence of words, e.g. it doesn't seem straightforward for me to use a constituent parse as knowledge here. Finally, I hesitate to call it "knowledge". This is misleading as usually it is used to refer to world / external knowledge such as a knowledge base of entities, whereas here it is really just syntax, or arguably semantics if AMR parsing is used. -----General Discussion----- This paper proposes a practical model which seems to work well on one dataset, but the main ideas are not very novel (see comments in Strengths). I think as an ACL paper there should be more takeaways. More importantly, the experiments are not convincing as presented now. I will need some clarification to better judge the results. -----Post-rebuttal----- The authors did not address my main concern, which is whether the baselines (e.g. TreeRNN) are used to compute substructure embeddings independent of the sentence embedding and the joint tagger. Another major concern is the use of two separate RNNs which gives the proposed model more parameters than the baselines. Therefore I'm not changing my scores.
- The paper claims the model generalizes to different knowledge but I think the substructure has to be represented as a sequence of words, e.g. it doesn't seem straightforward for me to use a constituent parse as knowledge here. Finally, I hesitate to call it "knowledge". This is misleading as usually it is used to refer to world / external knowledge such as a knowledge base of entities, whereas here it is really just syntax, or arguably semantics if AMR parsing is used.
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be re-written. - Diversity should be toned down. - line 108, the first “f” should be “g” in “we fixed the form of ..” - extra “.” in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks has converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance.
- line 108, the first “f” should be “g” in “we fixed the form of ..” - extra “.” in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks has converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance.
NIPS_2016_478
NIPS_2016
The weakness is in the evaluation. The datasets used are very simple (whether artificial or real). Furthermore, there is no particularly convincing direct demonstration on real data (e.g. MNIST digits) that the network is actually robust to gain variation. Figure 3 shows that performance is worse without IP, but this is not quite the same thing. In addition, while GSM is discussed and stated as "mathematically distinct" (l.232), etc., it is not clear why GSM cannot be used on the same data and its results compared to the PPG model's results. Minor comments (no need for authors to respond): - The link between IP and the terms/equations could be explained more explicitly and prominently. - Please include labels for the subfigures in Figs 3 and 4, and not just state them in the captions. - Have some of the subfigures in Figs 1 and 2 been swapped by mistake?
- The link between IP and the terms/equations could be explained more explicitly and prominently. - Please include labels for the subfigures in Figs 3 and 4, and not just state them in the captions.
NIPS_2021_2247
NIPS_2021
1). Lack of speed analysis: the experiments compare the GFLOPs of different segmentation networks. However, there are no comparisons of inference speed between the proposed network and prior work. An improvement in inference speed would be more interesting than reduced FLOPs. 2). For the details of the proposed NRD, it is reasonable that the guidance maps are generated from the low-level feature maps. And the guidance maps can be predicted from either the first-stage feature maps or the second-stage feature maps. It would be better to provide an ablation study on the effect of each choice. 3). Important references are missing. GFF [1] and EfficientFCN [2] both aim to implement fast semantic segmentation in an encoder-decoder architecture. I encourage the authors to have a comprehensive comparison with these works. [1]. Gated Fully Fusion for Semantic Segmentation, AAAI'20. [2]. EfficientFCN: Holistically-guided Decoding for Semantic Segmentation, ECCV'20. See above. The societal impact is shown on the last page of the manuscript.
1). Lack of speed analysis: the experiments compare the GFLOPs of different segmentation networks. However, there are no comparisons of inference speed between the proposed network and prior work. An improvement in inference speed would be more interesting than reduced FLOPs.
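A generic sketch of the inference-speed measurement this point asks for (a throwaway stand-in module, not the NRD model; the input resolution and repetition counts are arbitrary choices):

```python
import time
import torch

model = torch.nn.Conv2d(3, 19, kernel_size=3, padding=1)  # stand-in for a segmentation network
model.eval()
x = torch.randn(1, 3, 512, 1024)                          # assumed Cityscapes-like half-resolution input

with torch.no_grad():
    for _ in range(5):                                     # warm-up runs
        model(x)
    start = time.perf_counter()
    for _ in range(20):
        model(x)
    fps = 20 / (time.perf_counter() - start)

print(f"~{fps:.1f} images/s on this machine")
```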
NIPS_2016_394
NIPS_2016
- The theoretical results don't have immediate practical implications, although this is certainly understandable given the novelty of the work. As someone who is more of an applied researcher who occasionally dabbles in theory, it would be ideal to see more take-away points for practitioners. The main take-away point that I observed is to query a cluster proportionally to the square root of its size, but it's unclear if this is a novel finding in this paper. - The proposed model produces only 1 node changing cluster per time step on average because the reassignment probability is 1/n. This allows for only very slow dynamics. Furthermore, the proposed evolution model is very simplistic in that no other edges are changed aside from edges with the (on average) 1 node changing cluster. - Motivation by the rate limits of social media APIs is a bit weak. The motivation would suggest that it examines the error given constraints on the number of queries. The paper actually examines the number of probes/queries necessary to achieve a near-optimal error, which is a related problem but not necessarily applicable to the social media API motivation. The resource-constrained sampling motivation is more general and a better fit to the problem actually considered in this paper, in my opinion. Suggestions: Please comment on optimality in the general case. From the discussion in the last paragraph in Section 4.3, it appears that the proposed queue algorithm would be a multiplicative factor of 1/beta from optimality. Is this indeed the case? Why not also show experimental results for just using the algorithm of Theorem 4 in addition to the random baselines? This would allow the reader to see how much practical benefit the queue algorithm provides. Line 308: You state that you show the average and standard deviation, but standard deviation is not visible in Figure 1. Are error bars present but just too small to be visible? If so, state that this is the case. Line 93: "asymptoticall" -> "asymptotically" Line 109: "the some relevant features" -> Remove "the" or "some" Line 182: "queries per steps" -> "queries per step" Line 196-197: "every neighbor of neighbor of v" -> "neighbor of" repeated Line 263: Reference to Appendix in supplementary material shows ?? Line 269: In the equation for \epsilon, perhaps it would help to put parentheses around log n, i.e. (log n)/n rather than log n/n. Line 276: "issues query" -> I believe this should be "issues 1 query" Line 278: "loosing" -> "losing" I have read the author rebuttal and other reviews and have decided not to change my scores.
- The proposed model produces only 1 node changing cluster per time step on average because the reassignment probability is 1/n. This allows for only very slow dynamics. Furthermore, the proposed evolution model is very simplistic in that no other edges are changed aside from edges with the (on average) 1 node changing cluster.
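The square-root allocation takeaway mentioned in the review above, as a tiny sketch (my own illustration of the rule, not the paper's algorithm): split a fixed probe budget across clusters in proportion to the square root of their sizes.

```python
import numpy as np

cluster_sizes = np.array([1000, 400, 100, 25])
budget = 200

weights = np.sqrt(cluster_sizes)
probes = np.floor(budget * weights / weights.sum()).astype(int)
for size, k in zip(cluster_sizes, probes):
    print(f"cluster of size {size}: {k} probes")
```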
NIPS_2017_585
NIPS_2017
The weakness of the paper is in the experiments: there should be more complete comparisons of computation time, and comparisons with the QMC-based methods of Yang et al. (ICML 2014). Without this the advantage of the proposed method remains unclear. - The limitation of the obtained results: The authors assume that the spectrum of a kernel is sub-gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels, such as Matern kernels, is not included, since their spectrum only decays polynomially. In this sense, the results of the paper could be restrictive.
- Eq. (3): What is $e_l$? Corollaries 1, 2 and 3 and Theorem 4: All of these results have exponential dependence on the diameter $M$ of the domain of data: a required feature size increases exponentially as $M$ grows. While this factor does not increase as a required amount of error $\varepsilon$ decreases, the dependence on $M$ affects the constant factor of the required feature size. In fact, Figure 1 shows that the performance is more quickly getting worse than standard random features. This may exhibit the weakness of the proposed approaches (or at least of the theoretical results).
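For readers less familiar with the baseline being discussed, a standard random Fourier features sketch for the Gaussian kernel (textbook construction, not the paper's quadrature-based features): the maximum approximation error shrinks as the number of features D grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 5, 200, 1.0
X = rng.normal(size=(n, d))

sqdist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sqdist / (2 * sigma**2))

for D in (10, 100, 1000):
    W = rng.normal(scale=1.0 / sigma, size=(D, d))
    b = rng.uniform(0, 2 * np.pi, size=D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)     # random Fourier feature map
    print(D, "max |K_hat - K| =", np.abs(Z @ Z.T - K_exact).max().round(3))
```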
VoI4d6uhdr
ICLR_2025
1. Although the authors present the exact formulation of the risk in the main text, it is complicated to understand the implications of those formulas. It would be helpful to include more discussion to explain each term to better understand the results. 2. The paper's main contribution is to examine the bias amplification phenomenon using the formula. However, a formal statement about how different components affect the bias amplification is lacking. I would suggest the authors write them as formal theorems. 3. It is unclear how these theoretical findings relate to real-world deep learning models; I would suggest the authors verify the conclusions about label noise and model size on MNIST with a CNN as well.
3. It is unclear how these theoretical findings relate to real-world deep learning models; I would suggest the authors verify the conclusions about label noise and model size on MNIST with a CNN as well.
7GxY4WVBzc
EMNLP_2023
* The contribution of the vector database to improving QA performance is unclear. More analysis and ablation studies are needed to determine its impact and value for the climate change QA task. * Details around the filtering process used to create the Arabic climate change QA dataset are lacking. More information on the translation and filtering methodology is needed to assess the dataset quality. * The work is focused on a narrow task (climate change QA) in a specific language (Arabic), so its broader impact may be limited. * The limitations section lacks specific references to errors and issues found through error analysis of the current model. Performing an analysis of the model's errors and limitations would make this section more insightful.
* Details around the filtering process used to create the Arabic climate change QA dataset are lacking. More information on the translation and filtering methodology is needed to assess the dataset quality.
NIPS_2021_304
NIPS_2021
Questions: I only have a few minor points: 1.) For equation (7), does treating | u − l | as the length require the bins to be equally spaced? I don't think this is stated. 2.) It may be good to briefly mention the negligible computational cost of CHR (which is in the appendix) in the main paper to help motivate the method. A rough example of some run-times in the experiments may also be useful for readers looking to apply the method. 3.) Just a few typographical/communication points: I found Section 2.2 slightly difficult to read, as the notation gets a little heavy. This may not be necessary, but the authors could consider presenting the nested intervals without randomization (e.g. after Line 119), with the randomization in the Appendix, as it is not needed in Theorem 2. This would give more room for intuitive discussions, related to my next point. It may be helpful to introduce some intuition on the conformity score in equation (12) and why we need the sets to be nested for readers unfamiliar with previous work, perhaps at the start of Section 2.3. Line 113: ϵ is mentioned here before it is defined. Line 188: 'increased' instead of 'increase'. ##################################################################### Overall: This paper is an interesting extension of previous work, and the provided asymptotic justifications of attaining oracle width and conditional coverage are useful. The method is also general and can empirically provide better average widths and conditional coverage than other methods, particularly under skewed data, making it useful in practice. ##################################################################### References: Romano, Y., Sesia, M., & Candes, E. (2020). Classification with Valid and Adaptive Coverage. Advances in Neural Information Processing Systems, 33, 3581-3591. Gupta, C., Kuchibhotla, A. K., & Ramdas, A. K. (2019). Nested conformal prediction and quantile out-of-bag ensemble methods. arXiv preprint arXiv:1910.10562. The authors have described the limitations of their method - in particular their method does not control for upper and lower miscoverage, and they provide alternative recommendations.
2.) It may be good to briefly mention the negligible computational cost of CHR (which is in the appendix) in the main paper to help motivate the method. A rough example of some run-times in the experiments may also be useful for readers looking to apply the method.
NIPS_2022_554
NIPS_2022
Several technical details are unclear: see the "Questions" section. These are not absolutely critical to the paper but I would appreciate clarification. Unclear effect of "different objective" vs "exact solution": the proposed method has 2 deviations from using the empirical mean/variance of a tree ensemble for Bayesopt: 1) making predictions with a tree kernel GP instead of empirical mean/variance, and 2) optimizing UCB exactly using a cone solver. In section 5 it was generally unclear to me how much of the difference in final performance came from using the trees to make predictions in a different way (GP instead of empirical mean/variance), and how much came from performing [nearly] exact maximization of the acquisition function. A good experiment to do would be to use the GP formulation to find a promising set of candidate points (perhaps using multiple seeds), then use the original empirical mean/variance UCB objective to select a point to query in the end. This would be combining "exact optimization" with the standard way of making predictions. The code requires a non-public library, Gurobi. This is not a deal-breaker, and I realize there are academic licenses available, but it would be really nice to have a fully open-source version for the camera-ready paper. I ask some additional questions about the choice of solver below. Suggestions Given that the paper focuses specifically on non-continuous spaces, why is x defined to be a real number when introducing Bayesopt on line 44? I think x should just be an element of a general space X. Many readers at NeurIPS may not be very familiar with cone problems / conic solvers. It would be nice to put some information on these in the background, even if it is just a short section saying that "it is a very general way of formulating certain optimization problems, with a variety of established methods for solving them". "LCB" is a fairly uncommon formulation of Bayesopt; I think many readers are accustomed to seeing problems posed as UCB maximization. Perhaps it would be slightly helpful to the reader to frame the problem as maximization and multiply everything in the paper by −1? I saw in the appendix that you use Adam to optimize GP kernel hyperparameters. One problem with Adam is that it may converge very slowly if the learning rate is too low, or not fully converge if it is too large. Especially since you are only optimizing 2 hyperparameters, I recommend using L-BFGS instead. In my personal experience this method is much more robust. The objective in equation 12 would also be another interesting baseline to see: it would show how much the GP variance actually contributes, as opposed to just finding a point with a sufficient amount of disagreement among ensemble members. Overall: Originality: seems high, but I am not very familiar with related work in this area. Quality: high, good paper, good method, good evaluation. Clarity: very high! Significance: high. The proposed method solves a real problem and is likely to be used in practice, especially if a good open-source library is published. I definitely recommend accepting this paper and my score reflects that. I would consider increasing my score slightly if my suggestions are addressed.
I could also imagine decreasing my score if: the novelty is not as high as I thought (I am not very familiar with related work in this area, so perhaps there is a paper that does something very similar which was not cited), or there were technical flaws that I did not catch (I know only the basics of convex optimization/cone problems, so this is possible). As far as I can tell, the limitations of this work are discussed adequately.
1) making predictions with a tree kernel GP instead of empirical mean/variance
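A minimal sketch of the L-BFGS suggestion above, on a toy exact GP with an RBF kernel and synthetic data (not the paper's tree-kernel model; the jitter and initial values are arbitrary): SciPy's L-BFGS-B on the negative log marginal likelihood typically converges quickly and robustly for two hyperparameters.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

def neg_log_marginal_likelihood(log_params):
    lengthscale, noise = np.exp(log_params)       # optimize in log space to keep parameters positive
    sqdist = (X - X.T) ** 2                       # (40, 40) pairwise squared distances
    K = np.exp(-0.5 * sqdist / lengthscale**2) + (noise + 1e-6) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 0.1]), method="L-BFGS-B")
print("lengthscale, noise variance:", np.exp(res.x))
```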
ICLR_2023_4411
ICLR_2023
Weakness • The reviewer thinks the authors need to elaborate on how the output labels are defined for density assessment. In Section 3. Datasets, it seems the authors give confusing definitions of density and BI-RADS findings, like “we categorized BI-RADS density scores into two separate categories: BI-RADS 2 and 3 as benign and BI-RADS 5 and 6 as malignant”. There is no description of what “Density A”, “Density B”, “Density C”, and “Density D” mean. Also, as the reviewer knows, benign or malignant classification can be confirmed with biopsy results, not BI-RADS scores. Even though the reviewer is not familiar with the two public datasets, the reviewer thinks the datasets should have biopsy information to annotate lesions as malignant or benign. • As a preprocessing step, the authors segmented and removed the region of the pectoral muscle from MLO views. However, the authors did not explain how the segmentation model was developed (they just mentioned employing the prior work), and the reviewer has a concern that important features can be removed by this preprocessing step. It might be useful to compare model performance using MLO views with and without this preprocessing step to confirm the benefit of this pectoral muscle removal. • How did you calculate precision/recall/F1-score for 4-class classification of breast density? Also, for breast cancer detection, researchers usually report AUC with sensitivity and specificity at different operating points to compare model performance. It might be more informative to provide AUC results for comparisons. • The reviewer thinks the comparison of the proposed approach with the single-view result is unfair. This is because the information that multiple views contain is 4x larger than what a single view has. So, to demonstrate the benefit of using the proposed fusion strategy, they need to report the performance of multi-view results with a simple fusion approach, like the average/maximum of the 4 view scores, or the max over the mean values of each breast. • Are the results reported in this study based on patient/study level? How did you calculate performance when using single views? Did you assume that each study has only one view? • What fusion strategy was used for the results in Table 2? Are these results based on image level?
• How did you calculate precision/recall/F1-score for 4-class classification of breast density? Also, for breast cancer detection, researchers usually report AUC with sensitivity and specificity at different operating points to compare model performance. It might be more informative to provide AUC results for comparisons.
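For the metrics question in this record, here is a minimal hedged sketch of one standard way to report them, assuming scikit-learn; the class names, label arrays, and probability outputs are placeholders, not the paper's. It prints per-class and macro precision/recall/F1 and a one-vs-rest macro AUC from predicted class probabilities.

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

classes = ["Density A", "Density B", "Density C", "Density D"]               # BI-RADS density categories
y_true = np.array([0, 1, 2, 3, 1, 2, 2, 0, 3, 1])                            # dummy ground-truth labels
y_prob = np.random.default_rng(0).dirichlet(np.ones(4), size=len(y_true))    # dummy softmax outputs
y_pred = y_prob.argmax(axis=1)

# per-class and macro-averaged precision / recall / F1
print(classification_report(y_true, y_pred, target_names=classes, zero_division=0))

# one-vs-rest macro AUC, the kind of threshold-free metric the review asks for
print("macro OvR AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))
```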
NIPS_2020_68
NIPS_2020
1. It is unclear how guaranteeing stationary points with small gradient norms translates to good generalization. The bounds merely indicate that these adaptive gradient methods reach one of the many stationary points, and do not address how reaching one of the potentially many population stationary points, especially in the non-convex regime, can translate to good generalization. A remark on this would be helpful.
2. Lines 124-125: For any fixed w, Hoeffding's bound holds as long as the samples are drawn independently, so it is always possible to show inequality (2). Stochastic algorithms moreover impose conditioning on the previous iterate, further guaranteeing that the Hoeffding inequality holds. It would be great if the authors could elaborate on this (see the generic form sketched below).
3. The bounds in Theorem 1 have a dependence on d, which the authors have discussed. However, if \mu is small, the bounds are moot; if \mu is large, the concentration guarantees are not very useful. Based on the values in Theorem 2, the latter seems to be the case.
4. It seems odd that the bounds in Theorems 2 and 4 do not depend on the initialization w_0 but on w_1.
5. For the experiments on the Penn Treebank, the algorithms do not seem stable with respect to train perplexity.
2. Lines 124-125: For any fixed w, Hoeffding's bound holds as long as the samples are drawn independently, so it is always possible to show inequality (2). Stochastic algorithms moreover impose conditioning on the previous iterate, further guaranteeing that the Hoeffding inequality holds. It would be great if the authors could elaborate on this.
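For context on point 2, a generic coordinate-wise Hoeffding bound for a fixed w is sketched below. This is an illustration of the reviewer's remark, not the paper's inequality (2); the boundedness assumption and the notation g_i, F, B are ours.

```latex
% Assume g_1(w),\dots,g_n(w) are independent, unbiased gradient estimates at a
% fixed w with |g_{i,j}(w) - \nabla_j F(w)| \le B for every coordinate j.
% Hoeffding's inequality then gives, for any t > 0,
\[
\Pr\!\left[\,\left|\frac{1}{n}\sum_{i=1}^{n} g_{i,j}(w) - \nabla_j F(w)\right| \ge t\,\right]
\;\le\; 2\exp\!\left(-\frac{n t^{2}}{2B^{2}}\right).
\]
% This holds for any w fixed in advance of drawing the samples, which is the
% reviewer's point: establishing such a bound is not by itself the hard part.
```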
ICLR_2021_2892
ICLR_2021
- Proposition 2 seems to lack an argument for why Eq. 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that's the case, why Eq. 16? As a concrete example, let \hat{h}^\theta_{lm} = 1 if l = m = 1 and 0 otherwise, so constant in \theta. The only constant associated Legendre polynomial is P^0_0, so this h is not expressible in Eq. 16. Instead, it seems that additional assumptions on the family of spherical functions h are necessary to make the decomposition in Eq. 16, and thus Proposition 2, work. Hence, it looks like Proposition 2 does not actually characterize all azimuthal correlations.
- In the discussion of SO(3)-equivariant spherical convolutions, the authors do not mention the lift to SO(3) signals, which allows for more expressive filters than the ones shown in Figure 1.
- Can the authors clarify Figure 2b? I do not understand what is shown.
- The architecture used for the experiments is not clearly explained in this paper. Instead the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained.
- The authors appear not to use a fast spherical Fourier transform. Why not? This could greatly help performance. Could the authors comment on the runtime cost of the experiments?
- Sampling the Fourier features to a spherical signal and then applying a point-wise non-linearity is not exactly equivariant (as noted by Kondor et al., 2018). Still, the authors note at the end of Sec. 6 that "This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations." Perhaps the authors can comment on that?
- The experiments are limited to MNIST and a single real-world dataset.
- Out of the many spherical CNNs currently in existence, the authors compare only to a single one. For example, comparisons to SO(3)-equivariant methods would be interesting. Furthermore, it would be interesting to compare to SO(3)-equivariant methods in which SO(3) equivariance is broken to SO(2) equivariance by adding to the spherical signal a channel that indicates the theta coordinate.
- The experimental results are presented in an unclear way. A table would be much clearer.
- An obvious approach to the problem of SO(2) equivariance of spherical signals is to project the sphere to a cylinder and apply planar 2D convolutions that are periodic in one direction and not in the other (see the sketch below). This suffers from distortion of the kernel around the poles, but perhaps this wouldn't be too harmful. An experimental comparison to this method would benefit the paper.

Recommendation: I recommend rejection of this paper. I am not convinced of the correctness of Proposition 2, and Proposition 1 is similar to equivariance arguments made in prior work. The experiments are limited in their presentation, the number of datasets, and the comparisons to prior work.

Suggestions for improvement:
- Clarify the issue around Eq. 16 and Proposition 2
- Improve the presentation of the experimental results and add experimental details
- Evaluate the model on more datasets
- Compare the model to other spherical convolutions

Minor points / suggestions:
- When talking about the Fourier modes as numbers, clarify whether these are real or complex.
- In Def. 1, it is confusing to have theta twice on the left-hand side of the equation. It would be clearer if h did not have a subscript on the left-hand side.
- The experiments are limited to MNIST and a single real-world dataset.
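The cylinder-projection baseline suggested in the review can be sketched in a few lines. This is a hypothetical illustration, not anything from the paper: the shapes and the single conv layer are made up, and it assumes an equirectangular theta-by-phi sampling, with circular padding applied only along the azimuthal axis.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 64, 128)          # (batch, channels, n_theta, n_phi) equirectangular grid
weight = torch.randn(8, 3, 3, 3)         # one illustrative 3x3 convolution layer

x = F.pad(x, (1, 1, 0, 0), mode="circular")  # wrap around in phi (longitude is periodic)
x = F.pad(x, (0, 0, 1, 1), mode="constant")  # no wrap across the poles (theta direction)
y = F.conv2d(x, weight, padding=0)
print(y.shape)                               # torch.Size([1, 8, 64, 128])
```

Near the equator this behaves like an ordinary planar convolution; the known downside the reviewer mentions is kernel distortion near the poles.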
NIPS_2021_2257
NIPS_2021
- Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn't, it is an informative baseline to show where these self-supervised methods stand compared to a fully supervised pre-trained network.
- The discussion in Section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on combined datasets. Can the SSL model make use of different kinds of data?
- The authors compared two-crop and multi-crop augmentation in Section 4, and observed that multi-crop augmentation yielded better performance. One important missing factor is the (possible) computation overhead of multi-crop strategies. My estimate is that it would increase the computational cost (i.e., slow down training). Therefore, one could argue that training the two-crop baseline for a longer period of time would yield better performance as well. To make the comparison fair, the computation overhead must be discussed. It can also be seen from Figure 7 that, for KNN-MoCo, the extra positive samples are fed into the network that takes the back-propagated gradients. This drastically increases training cost, since for these samples the network performs not only the forward pass but also the backward pass.
- Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment's policy is obtained by supervised training on ImageNet, so information leakage is likely.

Questions
- In L114 the authors conclude that for linear classification the pretraining dataset should match the target dataset in terms of being object- or scene-centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it again comes back to whether an SSL model can learn better representations by combining the two datasets.
- In L157 the authors argue that for transfer learning potentially only low- and mid-level visual features are useful. My intuition is that low- and mid-level features are rather easy to learn. How, then, does this explain the model's transferability increasing when we scale up pre-training datasets, or the recent success of CLIP? Is it possible that only MoCo learns low- and mid-level features?

Minor things that play no role in my ratings:
- "i.e." -> "i.e.,", "e.g." -> "e.g.,"
- In Eq. 1, it is better to write L_{contrastive}(x) = ... instead of L_{contrastive}. Also, should the equation be normalized by the number of positives (see the sketch below)?
- The L241 setup paragraph is overly complicated for an easy-to-explain procedure. In L245/246, the use of x+ and x is very confusing.
- It would be better to explain that "nearest neighbor mining" in the intro means mining nearest neighbors in a moving embedding space over the same dataset.

Overall, I like the objective of the paper a lot and I think the paper is trying to answer some important questions in SSL. But I have some reservations about confidently recommending acceptance due to the concerns in the "weaknesses" section, because this is an analysis paper and analysis needs to be rigorous. I will be more than happy to increase the score if those concerns are properly addressed in the feedback.

The authors did not discuss the limitations of the study.
I find no potential negative societal impact.
- Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn't, it is an informative baseline to show where these self-supervised methods stand compared to a fully supervised pre-trained network.
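On the normalization question in the minor comments, one common way to write a multi-positive contrastive loss is sketched below. This is a guess at the intended form (a SupCon-style variant), not the paper's Eq. 1; z, tau, P(x), and N(x) are our notation.

```latex
% With anchor x, embedding z(.), temperature tau, positives P(x), negatives N(x):
\[
L_{\text{contrastive}}(x) \;=\; -\,\frac{1}{|P(x)|}\sum_{x^{+}\in P(x)}
\log\frac{\exp\!\big(z(x)\cdot z(x^{+})/\tau\big)}
{\exp\!\big(z(x)\cdot z(x^{+})/\tau\big)\;+\;\sum_{x^{-}\in N(x)}\exp\!\big(z(x)\cdot z(x^{-})/\tau\big)}.
\]
% Writing the loss as a function of x and dividing by |P(x)| addresses both of
% the reviewer's notational suggestions about Eq. 1.
```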
ACL_2017_333_review
ACL_2017
There are a few details on the implementation and on the systems to which the authors compared their work that need to be better explained.

- General Discussion:

- Major review:
- I wonder if the summaries obtained using the proposed methods are indeed abstractive. I understand that the target vocabulary is built out of the words which appear in the summaries in the training data. But given the example shown in Figure 4, I have the impression that the summaries are rather extractive. The authors should choose a better example for Figure 4 and give some statistics on the number of words in the output sentences which were not present in the input sentences, for all test sets.
- Page 2, lines 266-272: I understand the mathematical difference between the vector hi and s, but I still have the feeling that there is a great overlap between them. Both "represent the meaning" (see the sketch below for one way they can be combined). Are both indeed necessary? Did you try using only one of them?
- Which neural network library did the authors use for implementing the system? There are no details on the implementation.
- Page 5, Section 4.4: Which training data was used for each of the systems that the authors compare to? Did you train any of them yourselves?

- Minor review:
- Page 1, line 44: Although the difference between abstractive and extractive summarization is described in Section 2, this could be moved to the introduction section. At this point, some readers might not be familiar with this concept.
- Page 1, lines 93-96: please provide a reference for this passage: "This approach achieves huge success in tasks like neural machine translation, where alignment between all parts of the input and output are required."
- Page 2, Section 1, last paragraph: The contribution of the work is clear, but I think the authors should emphasize that such a selective encoding model has never been proposed before (is this true?). Further, the related work section should be moved to before the methods section.
- Figure 1 vs. Table 1: the authors show two examples for abstractive summarization, but I think that just one of them is enough. Further, one is called a figure while the other a table.
- Section 3.2, lines 230-234 and 234-235: please provide references for the following two passages: "In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for encoding input sentence information and decoding the sentence representation to generate an output sentence"; "Some previous works apply this framework to summarization generation tasks."
- Figure 2: What is "MLP"? It seems not to be described in the paper.
- Page 3, lines 289-290: the sigmoid function and the element-wise multiplication are not defined for the formulas in Section 3.1.
- Page 4, first column: many elements of the formulas are not defined: b (equation 11), W (equations 12, 15, 17), U (equations 12, 15), and V (equation 15).
- Page 4, line 326: the readout state rt is not depicted in Figure 2 (workflow).
- Table 2: what does "#(ref)" mean?
- Section 4.3, model parameters and training: Explain how you arrived at the values of the many parameters: word embedding size, GRU hidden states, alpha, beta 1 and 2, epsilon, beam size.
- Page 5, line 450: remove the word "the" in this line? "SGD as our optimizing algorithms" instead of "SGD as our the optimizing algorithms."
- Page 5, beam search: please include a reference for beam search.
- Figure 4: Is there a typo in the true sentence? "council of europe again slams french prison conditions" (again or against?)
- typo "supper script" -> "superscript" (4 times)
- Section 4.3, model parameters and training: Explain how you arrived at the values of the many parameters: word embedding size, GRU hidden states, alpha, beta 1 and 2, epsilon, beam size.
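For the hi-versus-s question in the major review, here is a hypothetical sketch of how a selective gate can combine the two; the dimensions, layer names, and shapes are made up and this is not the authors' implementation. The sentence vector s modulates each encoder state h_i through a sigmoid gate.

```python
import torch
import torch.nn as nn

d = 256
W = nn.Linear(d, d, bias=False)        # transforms each encoder state h_i
U = nn.Linear(d, d, bias=True)         # transforms the sentence vector s (carries the bias term)

h = torch.randn(30, d)                 # h_i: encoder hidden states for a 30-token sentence
s = torch.randn(1, d)                  # s: whole-sentence representation

gate = torch.sigmoid(W(h) + U(s))      # element-wise selective gate in (0, 1), broadcast over tokens
h_selected = h * gate                  # gated states passed on to the attention/decoder
print(h_selected.shape)                # torch.Size([30, 256])
```

If only one of h_i or s were used, the gate would lose either token-level detail or global sentence context, which is one possible answer to the reviewer's question about whether both are necessary.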
ICLR_2022_2531
ICLR_2022
I have several concerns about the clinical utility of this task as well as the evaluation approach.
- First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than as multi-label classification or slot-filling (see the sketch below), especially given the known faithfulness issues with text generation? There are some existing approaches for automatic ECG interpretation; how does this work fit into them? A portion of the ECG reports in the PTB-XL dataset are actually automatically generated (see Data Acquisition under https://physionet.org/content/ptb-xl/1.0.1/). Do you filter out those notes during evaluation? How does your method compare to those automatically generated reports?
- A major claim in the paper is that RTLP generates more clinically accurate reports than MLM, yet the only analysis in the paper related to this is a qualitative analysis of a single report. A more systematic analysis of the quality of generation would be useful to support the claim made in the appendix. Could you ask clinicians to evaluate the utility of the generated reports, or evaluate clinical utility by using the generated reports to predict conditions identifiable from the ECG? I think it is fine that RTLP performs comparably to existing methods, but I am not sure from the current paper what the utility of using RTLP is.
- More generally, I think this paper is trying to do two things at once: present new methods for multilingual pretraining while also developing a method for ECG captioning. If the emphasis is on the former, then I would expect to see evaluation against other multilingual pretraining setups such as the Unicoder (Huang 2019a). If the core contribution is the latter, then the clinical utility of the method as well as comparison to baselines for ECG captioning (or similar methods) is especially important.
- I am a bit confused as to why the diversity of the generated reports is emphasized during evaluation. While I agree that the generated reports should be faithful to the associated ECG, diversity may not actually be a necessary metric to aim for in a medical context. For instance, if many of the reports are normal, you would want similar reports for each normal ECG (i.e., low diversity).
- My understanding is that reports are generated in other languages using Google Translate. While this makes sense for generating multilingual reports for training, it seems a bit strange to then evaluate model performance on these silver-standard, noisy reports. Do you have a held-out set of gold-standard reports in different languages for evaluation (other than German)?

Other Comments:
- Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier to generate than if all reports were included.
- You might consider changing the terminology from "cardiac arrythmia" categories to something broader, since hypertrophy (one of the categories) is not technically a cardiac arrhythmia (although it can be detected via ECG and it does predispose you to arrhythmias).
- I think it would be helpful to include an example of some of the tokens that are sampled during pretraining using your semantically similar strategy for selecting target tokens. How well does this work for languages that have very different syntactic structures from the source language?
- Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to a setting where you don't have the associated labels?
- What kind of tokenization is used in the model? Which spaCy tokenizer?
- It would be helpful to reference the appendix when describing the setup in Sections 3/5 so that the reader knows that more detailed architecture information is there.
- I would be interested to know whether other multilingual pretraining setups also struggle with Greek.
- It would be helpful to show the original ECG report with punctuation, and to make the ECG figures larger so that they are easier to read.
- Why do you think RTLP benefits from fine-tuning on multiple languages, but MARGE does not?
- Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to a setting where you don't have the associated labels?
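The multi-label alternative raised in the first bullet of this review can be sketched as follows. This is a hypothetical baseline, not anything from the paper: the encoder, its output dimension, and the label count (71 is assumed here as the PTB-XL SCP statement set, used only for illustration) are placeholders.

```python
import torch
import torch.nn as nn

n_labels = 71                                         # assumed: PTB-XL SCP statements as the label set
encoder_dim = 512                                      # assumed output size of some ECG encoder

head = nn.Linear(encoder_dim, n_labels)                # multi-label classification head
loss_fn = nn.BCEWithLogitsLoss()                       # one independent sigmoid per label

ecg_features = torch.randn(8, encoder_dim)             # batch of 8 encoded ECG segments (dummy)
targets = torch.randint(0, 2, (8, n_labels)).float()   # dummy multi-hot statement labels

logits = head(ecg_features)
loss = loss_fn(logits, targets)
loss.backward()
print(float(loss))
```

Compared with free-text generation, this framing makes faithfulness trivially checkable against the label set, which is the trade-off the reviewer is asking the authors to justify.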