paper_id: string (lengths 10-19)
venue: string (14 classes)
focused_review: string (lengths 7-8.45k)
point: string (lengths 60-643)
NIPS_2020_1592
NIPS_2020
Major concerns: 1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, MLE pretraining (according to appendix). I find it disappointing that so many tricks are needed. If you get rid of pretraining/initialization from T5/BART, would this method work? 2. This work requires MLE pretraining, while prior work "Training Language GANs from Scratch" does not. 3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum. 4. This work is claiming that it is the first time that language GANs outperform MLE, while prior works like seqGAN or scratchGAN all claim to be better than MLE. Is this argument based on the tradeoff between BLEU and self-BLEU from "language GANs falling short"? If so, Figure 2 is not making a fair comparison since this work uses T5/BART which is trained on external data, while previous works do not. What if you only use in-domain data? Would this still outperform MLE? Minor concerns: 5. This work only uses answer generation and summarization to evaluate the proposed method. While these are indeed conditional generation tasks, they are close to "open domain" generation rather than "close domain" generation such as machine translation. I think this work would be more convincing if it is also evaluated in machine translation which exhibits much lower uncertainties per word. 6. The discriminator accuracy of ~70% looks low to me, compared to "Real or Fake? Learning to Discriminate Machine from Human Generated Text" which achieves almost 90% accuracy. I wonder if the discriminator was not initialized with a pretrained LM, or is that because the discriminator used is too small? ===post-rebuttal=== The added scratch GAN+pretraining (and coldGAN-pretraining) experiments are fairer, but scratch GAN does not need MLE pretraining while this work does, and we know that MLE pretraining makes a big difference, so I am still not very convinced. My main concern is the existence of so many hyper-parameters/tricks: mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, and MLE pretraining. I think some sensitivity analysis similar to scratch GAN's would be very helpful. In addition, rebuttal Figure 2 is weird: when generating only one word, why would cold GAN already outperform MLE by 10%? To me, this seems to imply that improvement might be due to hyper-parameter tuning.
3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum.
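A minimal sketch of the post-hoc check point 3 above asks for: train a fresh classifier on samples from the final generator versus human references, rather than reusing the co-adapted GAN discriminator. The TF-IDF plus logistic regression probe below is an illustrative stand-in, not the paper's setup; `human_texts` and `model_texts` are assumed to be lists of strings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def exposure_bias_probe(human_texts, model_texts, seed=0):
    """Train a fresh classifier to separate human text from final-model samples.
    Accuracy near 0.5 suggests the generations are hard to distinguish."""
    texts = list(human_texts) + list(model_texts)
    labels = [0] * len(human_texts) + [1] * len(model_texts)
    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, labels, test_size=0.2, random_state=seed, stratify=labels)
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_tr), y_tr)
    return accuracy_score(y_te, clf.predict(vec.transform(X_te)))
```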
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-art in VQA (on the VQA dataset which has been out for a while and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper on top of which CBN is applied is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model – favorably the ones which are closer to state-of-art (and whose codes are publicly available) such as MCB (Fukui et al., EMNLP16), HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know if the proposed conditional batch norm can advance the state-of-art in VQA or not. 2. L170: it would be good to know how much of performance difference this (using different image sizes and different variations of ResNets) can lead to? 3. In table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on test-standard split because one can overfit to test-dev split by uploading multiple entries. 4. Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could authors please throw some light on this? Why do they think this might be happening? 5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a). 6. The first two bullets about contributions (at the end of the intro) can be combined together. 7. Other errors/typos: a. L14 and 15: repetition of word “imagine” b. L42: missing reference c. L56: impact -> impacts Post-rebuttal comments: The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly developed VQA models as well (the results have not been reported on state-of-art VQA model). So, I would like to recommend acceptance of the paper. However I still have few comments -- 1. It seems that there is still some confusion about test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf). 2. The reproduced performance reported on MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version if they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with 62.5% of MCB because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (table 4 in MCB paper). 3. 
The citation for the MRN model (in the rebuttal) is incorrect. It should be -- @inproceedings{kim2016multimodal, title={Multimodal residual learning for visual qa}, author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak}, booktitle={Advances in Neural Information Processing Systems}, pages={361--369}, year={2016} } 4. As AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well.
2. L170: it would be good to know how much of performance difference this (using different image sizes and different variations of ResNets) can lead to?
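For context on the review above, a minimal sketch of the general conditional batch normalization mechanism it discusses: the per-channel affine parameters of batch norm are shifted by deltas predicted from a conditioning vector such as a question embedding. Module and argument names are illustrative, not the reviewed paper's implementation.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose affine parameters are shifted by deltas predicted
    from a conditioning vector (e.g. a question embedding)."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        # Small heads predict per-channel deltas from the conditioning input.
        self.delta_gamma = nn.Linear(cond_dim, num_features)
        self.delta_beta = nn.Linear(cond_dim, num_features)

    def forward(self, x, cond):          # x: (B, C, H, W), cond: (B, cond_dim)
        out = self.bn(x)
        g = (self.gamma + self.delta_gamma(cond)).unsqueeze(-1).unsqueeze(-1)
        b = (self.beta + self.delta_beta(cond)).unsqueeze(-1).unsqueeze(-1)
        return g * out + b
```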
NIPS_2022_528
NIPS_2022
Weakness: 1. The algorithm should be presented and described in detail. 2. The background of Sharpness-Aware Minimization (SAM) should be described in detail. 1. The algorithm should be presented and described in detail, which is helpful for understanding the proposed method. 2. The background of Sharpness-Aware Minimization (SAM) should be described in detail.
1. The algorithm should be presented and described in detail, which is helpful for understanding the proposed method.
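For the background the review asks for in point 2, this is the SAM objective as it is usually stated (Foret et al., 2021); the reviewed paper's exact formulation may differ. The loss is minimized at the worst-case point within a radius-$\rho$ neighborhood, approximated by one ascent step followed by one descent step:

$$\min_w \max_{\|\epsilon\|_2 \le \rho} L(w+\epsilon), \qquad \hat{\epsilon}(w) = \rho\,\frac{\nabla_w L(w)}{\|\nabla_w L(w)\|_2}, \qquad w \leftarrow w - \eta\,\nabla_w L(w)\big|_{w+\hat{\epsilon}(w)}.$$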
NIPS_2016_321
NIPS_2016
- Since the paper mentions the possibility to use Chebyshev polynomials to achieve a speed-up, it would have been interesting to see a runtime comparison at test time.
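Assuming the point above refers to the standard Chebyshev approximation of spectral graph filters (Defferrard et al., 2016-style), the test-time speed-up comes from replacing an eigendecomposition with K sparse matrix-vector products via the recurrence:

$$g_\theta \star x \approx \sum_{k=0}^{K-1} \theta_k\, T_k(\tilde{L})\, x, \qquad \tilde{L} = \tfrac{2}{\lambda_{\max}} L - I, \qquad T_k(\tilde{L})x = 2\tilde{L}\,T_{k-1}(\tilde{L})x - T_{k-2}(\tilde{L})x,$$

with $T_0(\tilde{L})x = x$ and $T_1(\tilde{L})x = \tilde{L}x$, so a runtime comparison would mostly measure K sparse multiplications against the baseline's cost.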
NIPS_2020_11
NIPS_2020
1. The proposed method seems to work only for digit or text images, such as MNIST and SVHN. Can it be used on natural images, such as CIFAR10, which have wider applications in the real world than digit/text? 2. Are the results obtained on the Synbols dataset generalizable to large-scale datasets? For example, if you find algorithm A is better than B on the Synbols dataset, will the conclusion hold on large images (e.g., ImageNet scale) in real-world applications? This needs to be discussed in the paper.
1. The proposed method seems to work only for digit or text images, such as MNIST and SVHN. Can it be used on natural images, such as CIFAR10, which have wider applications in the real world than digit/text?
tbRPPWDy76
EMNLP_2023
There might not be enough theoretical discussion and in-depth analysis to help readers understand the prompt design. More motivation and insights are needed. The engineering part might still need refinement. * Considering that this work is all about evaluation, there might currently be a lack of experiments. It might be beneficial to conduct more evaluation experiments categorized by language type and dialog content. * Although the style design is clean, the prompts are not well organized (Tables 6, 7): all sentences are squeezed together. * The Chinese translation of the proposed prompt (Table 7) is poor.
* Although the style design is clean, the prompts are not well organized (Tables 6, 7): all sentences are squeezed together.
ICLR_2023_2622
ICLR_2023
Weakness: 1. The figures are not clear. For example, in Figure 2, the relation between the 3 sub-figures is confusing, and some modules, such as CMAF, L_BT, and VoLTA, are not labeled in the figure. 2. The experimental results are not significant. 3. Three training steps are shown for VoLTA: a) switching off CMAF, b) switching on CMAF, c) keeping CMAF and random sampling for training. An ablation study on these parts should be conducted. 4. The key point of this paper is GOT, but there is no ablation on this part. The authors are encouraged to verify which parts work.
1. The figures are not clear. For example, in Figure 2, the relation between the 3 sub-figures is confusing, and some modules, such as CMAF, L_BT, and VoLTA, are not labeled in the figure.
ARR_2022_314_review
ARR_2022
1. Although the work is important and detailed, from the novelty perspective, it is an extension of norm-based and rollout aggregation methods to another set of residual connections and norm layer in the encoder block. Not a strong weakness, as the work makes a detailed qualitative and quantitative analysis, roles of each component, which is a novelty in its own right. 2. The impact of the work would be more strengthened with the proposed approach's (local and global) applicability to tasks other than classification like question answering, textual similarity, etc. ( Like in the previous work, Kobayashi et al. (2020)) 1. For equations 12 and 13, authors assume equal contribution from the residual connection and multi-head attention. However, in previous work by Kobayashi et al. (2021), it is observed and revealed that residual connections have a huge impact compared to mixing (attention). This assumption seems to be the opposite of the observations made previously. What exactly is the reason for that, for simplicity (like assumptions made by Abnar and Zuidema (2020))? 2. At the beginning of the paper, including the abstract and list of contributions, the claim about the components involved is slightly inconsistent with the rest. For instance, the 8th line in abstract is "incorporates all components", line 73 also says the "whole encoder", but on further reading, FFN (Feed forward layers) is omitted from the framework. This needs to be framed (rephrased) better in the beginning to provide a clearer picture. 3. While FFNs are omitted because a linear decomposition cannot be obtained (as mentioned in the paper), is there existing work that offers a way around (an approximation, for instance) to compute the contribution? If not, maybe a line or two should be added that there exists no solution for this, and it is an open (hard) problem. It improves the readability and gives a clearer overall picture to the reader. 4. Will the code be made publicly available with an inference script? It's better to state it in the submission, as it helps in making an accurate judgement that the code will be useful for further research.
3. While FFNs are omitted because a linear decomposition cannot be obtained (as mentioned in the paper), is there existing work that offers a way around (an approximation, for instance) to compute the contribution? If not, maybe a line or two should be added that there exists no solution for this, and it is an open (hard) problem. It improves the readability and gives a clearer overall picture to the reader.
KE5QunlXcr
EMNLP_2023
- The PLMs used (BERT) are, by current standards, quite old and quite small. As PLMs have been scaled up to sizes orders of magnitude greater, performance on syntactic tasks has been shown to improve naturally (along with many other useful emergent forms of knowledge). Some comparison to larger models / application of this method to such models is necessary to ensure that the method has any practical purpose. - There are also no baselines from existing work. There are other forms of fine-tuning, such as adapters, which seek to add additional knowledge to PLMs with less chance of catastrophic forgetting. The authors even cite one of these papers. This is also a confusing oversight, because the many appropriate inline citations which contrast various decisions in this work to decisions in existing work demonstrate a great familiarity with the literature, so lacking any comparison to any existing methods is an odd oversight. - Some questionable design choices. Perplexity is used as a measure of the model retaining semantic information after fine-tuning, and while that does relate to the original task, there are also aspects of domain drift which are possible and separate from catastrophic forgetting. How are such factors controlled? - Related, there is questionable motivation. Often when talking about catastrophic forgetting, both the original training and the new task are relevant. This is clear in the context of robotics, where learning a new behavior should not result in hindering the robot from performing existing behaviors. Is this true in this case? PPL is almost always not a valuable end goal, and the entire PLM/LLM paradigm is built around this notion of pre-training in whatever way leads to learning useful linguistic representations, before fine-tuning, aligning, or few-shotting the model towards the task the user actually cares about. If users never care about both tasks in approximately equal measure, then what good is retaining the original model weights which were not pertinent to the target task? - There's arguably too much going on here. The focus of the paper aims to be about catastrophic forgetting, but secondary to that is also the problem of matching the right syntactic fine-tuning task to the right problem. This is not entirely known a priori, so all possible pairings are explored, but realistically a good guess can be made (as it would likely be if pursued in a more practical setting). For instance, it is no surprise that the phrase syntax task helps with key phrase identification. The disadvantage of the exhaustive approach is that it both distracts from the main takeaway points and cuts into the space available for supporting the main hypothesis. - No inclusion of baselines from existing work / SOTA on performance tables. - Key extraction F1 results are better than standard optimizers, but negligibly so. - No discussion of GC vs EWC. When, a priori, would you choose which method? If the paper included only one such method, traditional optimizers would be the best choice in most situations.
- Some questionable design choices. Perplexity is used as a measure of the model retaining semantic information after fine-tuning, and while that does relate to the original task, there are also aspects of domain drift which are possible and separate from catastrophic forgetting. How are such factors controlled?
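The review's final point contrasts GC and EWC; for reference, the standard EWC objective (Kirkpatrick et al., 2017) is shown below in generic notation, and the reviewed paper's exact variant may differ:

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{task}}(\theta) + \sum_i \frac{\lambda}{2}\, F_i\,\big(\theta_i - \theta^{*}_{A,i}\big)^2,$$

where $F_i$ is the diagonal Fisher information of the original task at its solution $\theta^{*}_{A}$ and $\lambda$ controls how strongly old-task weights are preserved.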
ICLR_2021_2208
ICLR_2021
+ Nice idea. + Consistent improvements over cross entropy for hierarchical class structures. + Improvements w.r.t. other competitors (though not consistent). + Good ablation study. - The improvements are small. - The novelty is not very significant. More comments: Figure 1: - It is not clear what distortion is at this stage. - It is not clear what perturbed MNist is, and respectively: why is the error of a 3-layer CNN so high (12-16% error is reported)? CNNs with 2-3 layers can solve MNist with accuracy higher than 99.5%. - This figure cannot be presented on page 2 without proper definitions. It should either be presented on page 5, where the experiment is defined, or be better explained. Page 4: It is said that s can be computed efficiently and that this is shown in the appendix, but the version I have does not have an appendix. Page 6: the XE+EMD method is not presented in a comprehensible manner. 1) the p_k symbols are used without definition (though I think these are the network predictions p(\hat{y}=k | I)); 2) the relation of the formula presented to the known EMD is not clear. The latter is a problem solved via linear programming or similar, not a closed-form formula; 3) it is not clear what the role of \mu is and why it can be set to 3 irrespective of the scale of the metric D. Page 7: The experiments show small but consistent improvements of the suggested method over standard cross entropy, and improvements versus most competitors in most cases. I have read the reviews of others and the authors' response. My main impression of the work remains as it was: it is a nice idea with small but significant empirical success. However, my acquaintance with the previous literature on this subject is partial compared to that of the other reviewers, so it may well be possible that they are in a better position than me to see the incremental nature of the proposed work. I therefore reduce the rating a bit, to become closer to the consensus.
1) the p_k symbols are used without definition (though I think these are the network predictions p(\hat{y}=k | I)); 2) the relation of the formula presented to the known EMD is not clear. The latter is a problem solved via linear programming or similar, not a closed-form formula.
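For item 2) above, the "known EMD" the reviewer contrasts with a closed-form expression is the optimal-transport linear program (generic notation, not the reviewed paper's):

$$\mathrm{EMD}(p, q) = \min_{T \ge 0} \sum_{i,j} T_{ij} D_{ij} \quad \text{s.t.} \quad \sum_j T_{ij} = p_i,\; \sum_i T_{ij} = q_j,$$

where $D$ is the ground metric between classes; closed-form solutions exist only in special cases (e.g., one-dimensional histograms, where the EMD reduces to the L1 distance between cumulative distributions).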
NIPS_2021_953
NIPS_2021
Although the paper gives detailed theoretical proofs, the experiments are somewhat weak. I still have some concerns: 1) The most related works, SwAV and Barlow Twins, outperform the proposed method in some experimental results, as shown in Tables 1, 2, and 5. What are the main advantages of this method compared with SwAV and Barlow Twins? 2) HSIC(Z, Y) can be seen as a distance metric in the kernel space, where the cluster structure is defined by the identity. Although this paper maps identity labels into the kernel space, the information in a one-hot label is somewhat limited compared with the view embeddings in Barlow Twins. 3) Since the cluster structure is defined by the identity, how does the number of images impact the model performance? Do more training images make the performance worse or better? BYOL should be spelled out at its first appearance in the abstract.
3) Since the cluster structure is defined by the identity, how does the number of images impact the model performance? Do more training images make the performance worse or better? BYOL should be spelled out at its first appearance in the abstract.
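For concern 2) in the review above, the (biased) empirical HSIC estimator usually used in this line of work is, in generic notation:

$$\widehat{\mathrm{HSIC}}(Z, Y) = \frac{1}{(n-1)^2}\operatorname{tr}(KHLH), \qquad K_{ij} = k(z_i, z_j),\; L_{ij} = l(y_i, y_j),\; H = I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top},$$

so with one-hot identity labels the label kernel $L$ only encodes "same instance or not", which is the limited information the reviewer points to relative to view embeddings.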
NIPS_2020_1821
NIPS_2020
- To my understanding, the experimental section only compares results generated for this paper. This is good because it keeps comparisons apples-to-apples; however, it is suspicious since the task is not novel. Some comparison with results from other works (or a justification of why this is not possible/suitable) would be welcome. For example, [2, table 3] seems to have directly comparable results, yet these are nowhere mentioned in this paper. - Although the observed effects are strong, it remains unclear “why does the method work?”, in particular regarding the L_pixel component. Providing stronger arguments or intuitions for why these particular losses are “bound to help” would be welcome.
- Although the observed effects are strong, it remains unclear “why does the method work?”, in particular regarding the L_pixel component. Providing stronger arguments or intuitions for why these particular losses are “bound to help” would be welcome.
NJUzUq2OIi
ICLR_2025
I found the proposed idea, experiments, and analyses conducted by the authors to be valuable, especially in terms of their potential impact on low-resource scenarios. However, for the paper to fully meet the ICLR standards, there are still areas that need additional work and detail. Below, I outline several key points for improvement. I would be pleased to substantially raise my scores if the authors address these suggestions and enhance the paper accordingly. **General Feedback** - I noticed that the title of the paper does not match the one listed on OpenReview. - The main text should indicate when additional detailed discussions are deferred to the Appendix for better reader guidance. **Introduction** - The Introduction lacks foundational references to support key claims. Both the second and third paragraphs would benefit from citations to strengthen the arguments. For instance, the statement: "This method eliminates the need for document chunking, *a common limitation in current retrieval systems that often results in loss of context and reduced accuracy*" needs a supporting citation to substantiate this point. - The sentence: "Second, to be competitive with embedding approaches, a retrieval language model needs to be small" requires further justification. The authors should include in the paper a complexity analysis comparison discussing time and GPU memory consumption to support this assertion. **Related Work** - The sentence "Large Language Models are found to be inefficient processing long-context documents" should be rewritten for clarity, for example: "Large Language Models are inefficient when processing long-context documents." - The statements "Transformer models suffer from quadratic computation during training and linear computation during inference" and "However, transformer-based models are infeasible to process extremely long documents due to their linear inference time" are incorrect. Transformers, as presented in "Attention is All You Need," scale quadratically in both training and inference. - The statement regarding State Space Models (SSMs) having "linear scaling during training and constant scaling during inference" is inaccurate. SSMs have linear complexity for both training and inference. The term "constant scaling" implies no dependence on sequence length, which is incorrect. - The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memory-enhanced segmentation strategies [6], and recursive methods [7] for handling very long documents. - Similarly, the paragraph on Retrieval-Augmented Generation should specify how prior works addressed different long document tasks. Examples include successful applications of RAG in long-document summarization [8, 9] and query-focused multi-document summarization [10, 11], which are closely aligned with the present work. **Figures** - Figures 1 and 2 are clear but need aesthetic improvements to meet the conference's standard presentation quality. **Model Architecture** - The description "a subset of tokens are specially designated, and the classification head is applied to these tokens. In the current work, the classification head is applied to the last token of each sentence, giving sentence-level resolution" is ambiguous. 
Clarify whether new tokens are added to the sequence or if existing tokens (e.g., periods) are used to represent sentence ends. **Synthetic Data Generation** - The "lost in the middle" problem when processing long documents [12] is not explicitly discussed. Have the authors considered the position of chunks during synthetic data generation? Ablation studies varying the position and distance between linked chunks would provide valuable insights into Mamba’s effectiveness in addressing this issue. - More details are needed regarding the data decontamination pipeline, chunk size, and the relative computational cost of the link-based method versus other strategies. - The authors claim that synthetic data generation is computationally expensive but provide no supporting quantitative evidence. Information such as time estimates and GPU demand would strengthen this argument and assess feasibility. - There is no detailed evaluation of the synthetic data’s quality. An analysis of correctness and answer factuality would help validate the impact on retrieval performance beyond benchmark metrics. **Training** - This section is too brief. Consider merging it with Section 3, "Model Architecture," for a more cohesive presentation. - What was the training time for the 130M model? **Experimental Method** - Fix minor formatting issues, such as adding a space after the comma in ",LVeval." - Specify in Table 1 which datasets use free-form versus multiple-choice answers, including the number of answers and average answer lengths. - Consider experimenting with GPT-4 as a retriever. - Expand on "The accuracy of freeform answers is judged using GPT-4." - Elaborate on the validation of the scoring pipeline, particularly regarding "0.942 macro F1." Clarify the data and method used for validation. - Justify the selection of "50 sentences" for Mamba retrievers and explain chunk creation methods for embedding models. Did the chunks consist of 300 fixed-length segments, or was semantic chunking employed [3, 5]? Sentence-level embedding-based retrieval could be explored to align better with the Mamba setting. - The assertion that "embedding models were allowed to retrieve more information than Mamba" implies an unfair comparison, but more context can sometimes degrade performance [12]. - Clarify the use of the sliding window approach for documents longer than 128k tokens, especially given the claim that Mamba could process up to 256K tokens directly. **Results** - Remove redundancy in Section 7.1.2, such as restating the synthetic data generation strategies. - Expand the ablation studies to cover different input sequence lengths during training and varying the number of retrieved sentences to explore robustness to configuration changes. - Highlight that using fewer training examples (500K vs. 1M) achieved comparable accuracy (i.e., 59.4 vs. 60.0, respectively). - Why not train both the 130M and 1.3B models on a dataset size of 500K examples, but compare using 1M and 400K examples, respectively? **Limitations** - The high cost of generating synthetic training data is mentioned but lacks quantification. How computationally expensive is it in terms of time or resources? **Appendix** - Note that all figures in Appendices B and C are the same, suggesting an error that needs correcting. **Missing References** [1] Longformer: The Long-Document Transformer. arXiv 2020. [2] LongT5: Efficient Text-To-Text Transformer for Long Sequences. NAACL 2022. 
[3] Semantic Self-Segmentation for Abstractive Summarization of Long Documents in Low-Resource Regimes. AAAI 2022. [4] Summ^n: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents. ACL 2022. [5] Align-Then-Abstract Representation Learning for Low-Resource Summarization. Neurocomputing 2023. [6] Efficient Memory-Enhanced Transformer for Long-Document Summarization in Low-Resource Regimes. Sensors 2023. [7] Recursively Summarizing Books with Human Feedback. arXiv 2021. [8] DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization. ACL 2022. [9] Towards a Robust Retrieval-Based Summarization System. arXiv 2024. [10] Discriminative Marginalized Probabilistic Neural Method for Multi-Document Summarization of Medical Literature. ACL 2022. [11] Retrieve-and-Rank End-to-End Summarization of Biomedical Studies. SISAP 2023. [12] Lost in the Middle: How Language Models Use Long Contexts. TACL 2024.
- The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memory-enhanced segmentation strategies [6], and recursive methods [7] for handling very long documents.
eCXfUq3RDf
EMNLP_2023
1. Very limited reproducibility - Unless the authors release their training code, dialogue dataset, as well as model checkpoints, I find it very challenging to reproduce any of the claims in this paper. I encourage the authors to attach their code and datasets via anonymous repositories in the paper submission so that reviewers may verify the claims and try out the model for themselves. 2. Very high model complexity - The proposed paper employs a mathematically and computationally complex approach as compared to the textual input method. Does the proposed method outperform sending textual inputs to a large foundation model such as LLAMA or ChatGPT? The training complexity seems too high for any practical deployment of this model. 3. Dependent on the training data - I'm unsure if 44k dialogues is sufficient to capture a wide range of user traits and personalities across different content topics. LLMs are typically trained on trillions of tokens, I do not see how 44k dialogues can capture the combinations of personalities and topics. In theory, this dataset also needs to be massive to cover varied domains. 4. The paper is hard to read and often unintuitive. The mathematical complexity must be simplified and replaced with more intuitive design and modeling choice explanations so that readers may grasp core ideas faster.
3. Dependent on the training data - I'm unsure if 44k dialogues is sufficient to capture a wide range of user traits and personalities across different content topics. LLMs are typically trained on trillions of tokens, I do not see how 44k dialogues can capture the combinations of personalities and topics. In theory, this dataset also needs to be massive to cover varied domains.
Uj2Wjv0pMY
ICLR_2024
• Compared to Assembly 101 (error detection), this seems like an inferior / less complicated dataset. Claims like a higher ratio of error to normal videos need to be validated. • Compared to other datasets, this dataset prides itself on adding different modalities, especially the depth channel (RGB-D). The paper fails to validate the necessity of such a modality. One crucial difference from the Assembly dataset is the use of depth values. What role does it play in training the baseline models? Does it boost the models’ performance compared to when it isn’t present? In the current deep learning era, depth channels should reasonably be producible with the help of existing models. • I’m not convinced that binary classification is a justifiable baseline metric. While I agree that the TAL task is really important here and a good problem to solve, I’m not sure how coarse-grained binary classification can assess a model's understanding of fine-grained errors like technique errors. • Timing errors (duration of an activity) and temperature-based errors: do these really need ML-based solutions? In sensitive tasks, a simple sensor reading can indicate an error. I’m not sure testing computer vision models on such tasks is justifiable; these require more heuristics-based methods, working with if-else statements. • Procedure Learning: it is very vaguely defined, mostly left unexplained, and seems like an afterthought. I recommend the authors devote a passage to the methods “M1 (Dwibedi et al., 2019)” and “M2 (Bansal, Siddhant et al., 2022)”. In Table 5, the value of lambda is not mentioned. • The authors are dealing with a degree of subjectivity in terms of the severity of errors. It would have been greatly beneficial if the errors could be finely measured. For example, if the person uses a tablespoon instead of a teaspoon, is it still an error? Some errors are more grave than others; is there a weighted score? Is there a way to measure the level of deviation for each type of error, or the timestamp of the occurrence of an error? Is one recipe more difficult than another?
• I’m not convinced that binary classification is a justifiable baseline metric. While I agree that the TAL task is really important here and a good problem to solve, I’m not sure how coarse-grained binary classification can assess a model's understanding of fine-grained errors like technique errors.
NIPS_2019_653
NIPS_2019
of the method. Clarity: The paper has been written in a manner that is straightforward to read and follow. Significance: There are two factors which dent the significance of this work. 1. The work uses only binary features. Real-world data is usually a mix of binary, real, and categorical features. It is not clear if the method is applicable to real and categorical features too. 2. The method does not seem to be scalable unless a distributed version of it is developed. It is not reasonable to expect that a single instance can hold all the training data that real-world datasets usually contain.
1. The work uses only binary features. Real-world data is usually a mix of binary, real, and categorical features. It is not clear if the method is applicable to real and categorical features too.
NIPS_2018_857
NIPS_2018
Weakness: - Long-range contexts may be helpful for object detection, as shown in [a, b]. For example, the sofa in Figure 1 may help detect the monitor. But in SNIPER, images are cropped into chips, which means the detector cannot benefit from long-range contexts. Is there any idea to address this? - The writing should be improved. Some points in the paper are unclear to me. 1. In line 121, the authors said partially overlapped ground-truth instances are cropped. But is there any threshold for the partial overlap? In the lower left figure of the Figure 1 right side, there is a sofa whose bounding box is partially overlapped with the chip, but it is not shown in a red rectangle. 2. In line 165, the authors claimed that a large object may generate a valid small proposal after being cropped. This is a follow-up question to the previous one. In the upper left figure of the Figure 1 right side, I would imagine the corner of the sofa would make some very small proposals valid and labelled as sofa. Does that distract the training process, since there may be too little information to classify the little proposal as sofa? 3. Are the negative chips fixed after being generated from the lightweight RPN? Or will they be updated while the RPN is trained in the later stage? Would this (alternating between generating negative chips and training the network) help the performance? 4. What are the r^i_{min}'s, r^i_{max}'s and n in line 112? 5. In the last line of Table 3, the AP50 is claimed to be 48.5. Is it a typo? [a] Wang et al. Non-local neural networks. In CVPR 2018. [b] Hu et al. Relation Networks for Object Detection. In CVPR 2018. ----- The authors' response addressed most of my questions. After reading the response, I'd like to maintain my overall score. I think the proposed method is useful in object detection by enabling BN and improving the speed, and I vote for acceptance. The writing issues should be fixed in later versions.
- The writing should be improved. Some points in the paper are unclear to me.
9Ax0pyaLgh
EMNLP_2023
1. The authors are suggested to use other metrics to evaluate the results (e.g., BERTScore). 2. Often it is not sufficient to show only automatic evaluation results. The authors do not show any human evaluation results and do not even perform a case study or proper error analysis. This does not reflect well on the qualitative aspects of the proposed model. 3. It is difficult to understand the methodology without Figure 1. Parts of Section 2 should be written in alignment with Figure 1, and the authors are expected to follow a step-by-step description of the proposed method. (See questions to authors.)
1. The authors are suggested to use other metrics to evaluate the results (e.g., BERTScore).
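A minimal usage example of the BERTScore metric suggested in point 1, using the public bert-score package (assumed installed; the candidate and reference strings are placeholders, not the reviewed paper's data):

```python
# pip install bert-score
from bert_score import score

cands = ["the model generates a fluent summary of the document"]
refs = ["the model produces a fluent summary of the document"]

# Returns per-example precision, recall, and F1 as tensors.
P, R, F1 = score(cands, refs, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```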
6iM2asNCjK
ICLR_2024
1. My primary concern is with the limited scope of the paper. The paper primarily considers only evaluating sentence embeddings from LLMs, which, while important, is a small part of the overall evaluation landscape of LLMs. Consequently, the title "Robustness-Accuracy characterization of Large Language Models using synthetic datasets" is somewhat misleading. Furthermore, the generation methodology for the synthetic tasks using SentiWordNet for polarity detection does seem somewhat restrictive. For sentence embedding evaluation, it does seem to be a good methodology, but it is not clear how well it would generalize to any generative tasks (e.g., question answering, summarization, etc.). Whether this metric can be leveraged for other tasks (especially for a different class of tasks) needs to be demonstrated in my opinion. 2. While the proposed methodology of using a ratio of positive / negative to neutral sentiment words is a good way of defining difficulty, it does seem somewhat restrictive given the contextual nature of languages. Interesting linguistic phenomena such as sarcasm, irony, etc. are not captured by the proposed methodology, which arguably account for a large part of the difficulty in language understanding, especially for such large LLMs. While the authors briefly touch upon the issue of negation, negation in natural language is not limited to structured rules, and any methodology testing the robustness of LLMs should provide a way of capturing this, given that LLMs generally have a poor understanding of negations ([1]). 3. The baseline metrics are still computed on the synthetic dataset. For generative LLM training, for example, this potentially results in bad sentence embeddings, which subsequently may result in bad task performance. This is especially problematic when done for a single dataset (as is the case for all the baseline metrics). In contrast, the proposed SynTextBench benefits from aggregating across different difficulty levels, and is somewhat more robust to this issue compared to the baseline metrics. A better way of considering the baselines might be to treat them in the same way as SynTextBench is treated (aggregated across different difficulty levels, thresholded for some value of the metric, and then computing the area under the curve). 4. Additionally, there has been a large amount of work on LLM evaluation [2]. While some of the metrics there do not satisfy the proposed desiderata, it would still be good to see how the SynTextBench metric compares to the other metrics proposed in the literature. Concretely, from the paper, it is hard to understand under what conditions one should use SynTextBench over other metrics (e.g., MMLU / Big Bench for language generation).
4. Additionally, there has been a large amount of work on LLM evaluation [2]. While some of the metrics there do not satisfy the proposed desiderata, it would still be good to see how the SynTextBench metric compares to the other metrics proposed in the literature. Concretely, from the paper, it is hard to understand under what conditions one should use SynTextBench over other metrics (e.g., MMLU / Big Bench for language generation).
ICLR_2022_2425
ICLR_2022
1) Less Novelty: The algorithm for the construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. 2) Clarity: Since the coreset construction algorithm is built upon previous works, a reader without a background in the coreset literature would find it hard to understand why the particular sampling probabilities are chosen and why they give the particular guarantees. It would be useful to rewrite the algorithm preview and to give at least a bit of intuition on how the importance sampling scores are chosen and how they yield the coreset guarantees. Suggestions: In the experiment section, other than uniform sampling, it would be interesting to use some other classical k-means coresets as baselines for comparison. Please highlight the technical challenges and contributions clearly compared to coresets for classical k-means.
1) Less Novelty: The algorithm for the construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting.
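To illustrate the kind of importance-sampling scores the review asks intuition for, here is a lightweight-coreset-style sampler for classical k-means (in the spirit of Bachem et al., 2018). It is a generic illustration of how sampling probabilities and weights are typically chosen, not the reviewed paper's kernelized construction:

```python
import numpy as np

def lightweight_coreset(X, m, seed=0):
    """Sample an m-point weighted coreset for k-means on X (n x d)."""
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    d2 = ((X - mu) ** 2).sum(axis=1)
    # Mix uniform mass with mass proportional to squared distance from the mean:
    # far-away points are more "sensitive" and get sampled more often.
    q = 0.5 / len(X) + 0.5 * d2 / d2.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=q)
    w = 1.0 / (m * q[idx])  # reweight so coreset costs are unbiased
    return X[idx], w
```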
NIPS_2019_203
NIPS_2019
* Technical innovation is fairly limited. The bLVNet is a straightforward extension of bLNet (an image model) to video. The TAM involves the use of 1D temporal convolution and depthwise convolution, both mechanisms that have been widely leveraged before. On the other hand, the paper does not make bold novelty claims and recognizes the contribution as being more empirical than technical. The TAM shares many similarities with Timeception [Hussein et al., CVPR 19], which was not yet published at the time of this submission and thus does not diminish the value of this work. Nevertheless, given the many analogies between these concurrent approaches, it'd be advisable to discuss their relations in future versions (or the camera-ready version) of the paper. * While the memory/efficiency gains are convincingly demonstrated, they are not substantial enough to be a game-changer in the practice of training video understanding models. Due to the overhead of setting up the proposed framework (even though quite simple), adoption by the community may be fairly limited. Final rating: - After having read the other reviews and the author responses, I decide to maintain my initial rating (6). The contribution of this work is mostly empirical. The stronger results compared to more complex models and the promise to release the code imply that this work deserves to be known, even if fairly incremental.
- After having read the other reviews and the author responses, I decide to maintain my initial rating (6). The contribution of this work is mostly empirical. The stronger results compared to more complex models and the promise to release the code imply that this work deserves to be known, even if fairly incremental.
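To make the building blocks named in the review concrete, a minimal depthwise 1D temporal convolution in PyTorch; this is a generic sketch of the mechanism described, not the paper's exact TAM:

```python
import torch
import torch.nn as nn

class DepthwiseTemporalConv(nn.Module):
    """Per-channel 1D convolution over the time axis of a (B, C, T) tensor."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # groups=channels makes the convolution depthwise (one filter per channel).
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels)

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(x)

# Example: aggregate 8 frames of 256-d features.
feats = torch.randn(4, 256, 8)
out = DepthwiseTemporalConv(256)(feats)  # same shape: (4, 256, 8)
```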
NIPS_2018_245
NIPS_2018
[Weakness] 1: The writing is poor and the annotations are a little hard to follow. 2: Although applying a GCN to FVQA is interesting, the technical novelty of this paper is limited. 3: The motivation is to handle cases where the question doesn't focus on the most obvious visual concept, when there are synonyms and homographs. However, from the experiments, it's hard to see whether this specific problem is solved or not. Although the numbers are better than the previous method, it would be great if the authors could produce more experiments to show more about the question/motivation raised in the introduction. 4: Following 3, applying an MLP after a GCN is very common, and I'm not surprised that the performance drops without the MLP. The authors should show more ablation studies on performance when varying the number of retrieved facts, and what happens with a different number of GCN layers?
1: The writing is poor and the annotations are a little hard to follow.
WC9yjSosSA
EMNLP_2023
- The reported experimental results cannot strongly demonstrate the effectiveness of the proposed method. - In Table 1, for the proposed method, only 6 of the total 14 evaluation metrics achieve SOTA performance. - In Table 2, for the proposed method, only 8 of the total 14 evaluation metrics achieve SOTA performance. In addition, under the setting of "Twitter-2017 $\rightarrow$ Twitter-2015", why does the proposed method achieve the best overall F1 while not achieving the best F1 on all single types? - In Table 3, for the proposed method, 9 of the total 14 evaluation metrics achieve SOTA performance, which means that when ablating some modules, the performance of the proposed method improves. Furthermore, the performance improvement that adding a certain module can bring is not obvious. - In line 284, a transformer layer with self-attention is used to capture the intra-modality relation for the text modality. However, there are a lot of self-attention transformer layers in BERT. Why not use the attention scores of the last self-attention transformer layer? - In line 322, softmmax -> softmax. - Will the coordinates of $b_d$ exceed the scope of the patches?
- In Table 2, for the proposed method, only 8 of the total 14 evaluation metrics achieve SOTA performance. In addition, under the setting of "Twitter-2017 $\rightarrow$ Twitter-2015", why does the proposed method achieve the best overall F1 while not achieving the best F1 on all single types?
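Regarding the question above about reusing BERT's own attention, the last layer's self-attention scores can be read off directly with the Hugging Face transformers API; the model name and example sentence are placeholders, not the reviewed paper's setup:

```python
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tok("RT @user : Rocky the dog meets the mayor", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Tuple of per-layer attentions; each is (batch, heads, seq_len, seq_len).
last_layer_attention = out.attentions[-1]
intra_text_scores = last_layer_attention.mean(dim=1)  # average over heads
```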
ICLR_2022_2531
ICLR_2022
I have several concerns about the clinical utility of this task as well as the evaluation approach. - First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling, especially given the known faithfulness issues with text generation? There are some existing approaches for automatic ECG interpretation. How does this work fit into the existing approaches? A portion of the ECG reports from the PTB-XL dataset are actually automatically generated (See Data Acquisition under https://physionet.org/content/ptb-xl/1.0.1/). Do you filter out those notes during evaluation? How does your method compare to those automatically generated reports? - A major claim in the paper is that RTLP generates more clinically accurate reports than MLM, yet the only analysis in the paper related to this is a qualitative analysis of a single report. A more systematic analysis of the quality of generation would be useful to support the claim made in the appendix. Can you ask clinicians to evaluate the utility of the generated reports or evaluate clinical utility by using the generated reports to predict conditions identifiable from the ECG? I think that it’s fine that the RTLP method performs comparable to existing methods, but I am not sure from the current paper what the utility of using RTLP is. - More generally, I think that this paper is trying to do two things at once – present new methods for multilingual pretraining while also developing a method of ECG captioning. If the emphasis is on the former, then I would expect to see evaluation against other multilingual pretraining setups such as the Unicoder (Huang 2019a). If the core contribution is the latter, then clinical utility of the method as well as comparison to baselines for ECG captioning (or similar methods) is especially important. - I’m a bit confused as to why the diversity of the generated reports is emphasized during evaluation. While I agree that the generated reports should be faithful to the associated ECG, diversity may not actually be necessary metric to aim for in a medical context. For instance, if many of the reports are normal, you would want similar reports for each normal ECG (i.e. low diversity). - My understanding is that reports are generated in other languages using Google Translate. While this makes sense to generate multilingual reports for training, it seems a bit strange to then evaluate your model performance on these silver-standard noisy reports. Do you have a held out set of gold standard reports in different languages for evaluation (other than German)? Other Comments: - Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier than including all reports. - You might consider changing the terminology from “cardiac arrythmia” categories to something broader since hypertrophy (one of the categories) is not technically a cardiac arrythmia (although it can be detected via ECG & it does predispose you to them) - I think it’d be helpful to include an example of some of the tokens that are sampled during pretraining using your semantically similar strategy for selecting target tokens. How well does this work in languages that have very different syntactic structures compared to the source language? 
- Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to setting where you don’t have the associated labels? - What kind of tokenization is used in the model? Which Spacy tokenizer? - It’d be helpful to reference the appendix when describing the setup in section 3/5 so that the reader knows that more detailed architecture information is there. - I’d be interested to know if other multilingual pretraining setups also struggle with Greek. - It’d be helpful to show the original ECG report with punctuation + make the ECG larger so that they are easier to read - Why do you think RTLP benefits from fine-tuning on multiple languages, but MARGE does not?
- Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier than including all reports.
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be rewritten. - Diversity should be toned down. - Line 108: the first “f” should be “g” in “we fixed the form of ..”. - Extra “.” in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks has converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance.
- The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions:
pZk9cUu8p6
ICLR_2025
1. Limited Discussion of Scalability Bounds: The paper doesn't thoroughly explore the upper limits of FedDES's scalability; no clear discussion of memory requirements or computational complexity. 2. Validation Scope: Evaluation focuses mainly on Vision Transformer with CIFAR-10; could benefit from testing with more diverse models and datasets; limited exploration of edge cases or failure scenarios. 3. Network Modeling: While network delays are considered, there's limited discussion of complex network topologies or dynamic network conditions; the paper could benefit from more detailed analysis of how network conditions affect simulation accuracy.
1. Limited Discussion of Scalability Bounds: The paper doesn't thoroughly explore the upper limits of FedDES's scalability; no clear discussion of memory requirements or computational complexity.
fsDZwS49uY
ICLR_2025
- The authors may want to generate instances with more constraints and variables, as few instances in the paper have more than 7 variables. Thus, this raises my concern about LLMs' ability to model problems with large instance sizes. - Given that a single optimization problem can have multiple valid formulations, it would be beneficial for the authors to verify the accuracy and equivalence of these formulations with ground-truth ones. - There are questions regarding the solving efficiency of the generated codes. It would be valuable to assess whether the code produced by LLMs can outperform human-designed formulations and codes.
- The authors may want to generate instances with more constraints and variables, as few instances in the paper have more than 7 variables. Thus, this raises my concern about LLMs' ability to model problems with large instance sizes.
RsnWEcuymH
ICLR_2024
- My main concern is that the performance improvement, though generally positive, is not particularly significant, not to mention that proxy-based methods also achieve pretty good IM results while using only a negligible amount of time compared to BOIM (or other simulation-based methods in general). - Other choices of graph kernel, such as the random walk or Graphlet kernel, are not considered or experimented with. There are probably easy tricks to turn them into valid GP kernels. - Despite the time reduction introduced by BOIM, proxy-based methods are still substantially cheaper. Would it be possible to use proxy-based methods as heuristics to seed BOIM or another zero-order optimization method (e.g., CMA-ES)? - While the authors have shown that GSS has theoretically lower variance, it’d be nice to compare against random sampling and check empirically how well it performs. - Results presentation can be improved. For example, in Figures 2 and 3, the y-axis is labeled as “performance”, which is ambiguous, and the runtime is not represented in those figures. A scatter plot with x/y axes being runtime/performance could help the reader better understand and interpret the results. Best results in tables could also be highlighted. Minor: - Typo in Section 2: “Mockus (1998) and has since become…” → “Mockus (1998) has since become…”
- Results presentation can be improved. For example, in Figures 2 and 3, the y-axis is labeled as “performance”, which is ambiguous, and the runtime is not represented in those figures. A scatter plot with x/y axes being runtime/performance could help the reader better understand and interpret the results. Best results in tables could also be highlighted. Minor:
ICLR_2023_3063
ICLR_2023
The novelty and technical contribution are limited. The deformable graph attention module is unclear. It is unclear why the proposed method has lower computational complexity. Detailed comments: What is the motivation for choosing the personalized PageRank score, BFS, and feature similarity as sorting criteria? For NodeSort, 1) how is the base node chosen, or is every node a base node? 2) “NodeSort differentially sorts nodes depending on the base node.” Does this mean that the base node affects the ordering, affects the key nodes for attention, and further affects the model performance? 3) After getting the sorted node sequence, how are the key nodes sampled for each node? How many key nodes are sampled? Is the number of key nodes a hyper-parameter? 4) What are the value nodes used in the Transformer in this paper? 5) How are node representations generated by attention for different ranking criteria fused? Intuitively, the design of deformable graph attention is complicated, and the Katz positional encoding involves the exponentiation of the adjacency matrix, so is the computational complexity really reduced? Can the reduction in complexity be explained by the proposed method itself compared to baselines, or does it come just from the sparse implementation?
2) “NodeSort differentially sorts nodes depending on the base node.” Does this mean that the base node affects the ordering, affects the key nodes for attention, and further affects the model performance?
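For the complexity question about the Katz positional encoding raised in the review, the standard Katz index is (generic form; the paper may truncate the series):

$$\mathrm{Katz} = \sum_{i=1}^{\infty} \beta^{i} A^{i} = (I - \beta A)^{-1} - I, \qquad 0 < \beta < \frac{1}{\lambda_{\max}(A)},$$

so an exact computation needs a matrix inverse or one linear solve per node, while a truncated sum of sparse powers is the usual cheaper approximation; either cost belongs in the complexity comparison the reviewer asks about.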
ICLR_2022_176
ICLR_2022
There are two main (and easily fixable) weaknesses. a) I think the role of the normalizing flow is underexplained. It is stated multiple times that the normalizing flow provides the evidence updates and its purpose is to estimate epistemic uncertainty. The remaining questions for me are 1. From which space to which does the NF map the latent variable z? 2. Why is the arrow in Figure 2 from a Gaussian space into the latent space, rather than from the latent space to n^(i)? I thought the main purpose was to influence n^(i)? 3. Which experiments show that the normalizing flow contributes meaningfully to the epistemic uncertainty (see b))? b) Figure 1 does a good job of showing the intuition behind NatPNs but it lacks some components and a discussion in the text. The authors choose to show aleatoric (un-)certainty and predictive certainty respectively but don’t show epistemic (un-)certainty. Technically, you could deduce epistemic uncertainty from aleatoric and predictive uncertainty but it would be easier to compare and follow your argument if it was made explicit. Furthermore, I would like to see an explicit discussion of the results. Why, for example, is the difference between aleatoric and predictive uncertainty so low? Is there no or little epistemic uncertainty in this setting? There are two things that would convince me more regarding this problem: a) an additional toy experiment similar to Figure 1 which includes more epistemic uncertainty, e.g. with fewer data points. This could show that the epistemic uncertainty is well-calibrated. b) An argument for why the epistemic uncertainty is (presumably) so low in your setting. a) and b) are not mutually exclusive, doing both would convince me more. There are a couple of minor improvements: Figure 1 is not referenced in the main text. I find it hard to spot the difference w.r.t the symbols in Figure 1. Maybe just making it less crowded would already improve visibility. In the last paragraph of 3.1, you mention “warm-up” and “fine-tuning”. It would be helpful to explain these concepts briefly in one additional sentence or provide references. What would raise my score? I would raise my score by 1 or 2 points if my main weaknesses are well addressed or if evidence is provided that my criticism is the consequence of a misunderstanding. I would raise my score even further if I’m convinced of the high significance of this work. This will be mostly dependent on the estimate of more expert reviewers but I’m also open to arguments by the authors.
2. Why is the arrow in Figure 2 from a Gaussian space into the latent space, rather than from the latent space to n^(i)? I thought the main purpose was to influence n^(i)?
ICLR_2023_1946
ICLR_2023
Weakness: 1. This work raises an essential issue in partial domain adaptation and evaluates many PDA algorithms and model selection strategies. However, it does not present any solution to this problem. 2. The findings of the experiments are a bit trivial. Having no target labels for model selection strategies will hurt performance, and the random seed influences performance. These are common sense in domain adaptation, and even in deep learning generally. 3. The writing needs to improve. The tables are referenced but always placed on different pages. For example, Table 2 is referred to on page 4 but placed on page 3, making it hard to read. The paper also has many typos, e.g., ‘that’ instead of ‘than’ in section 3. 4. Many abbreviations lack definition and cause confusion. ‘AR’ in Table 5 stands for domain adaptation tasks and algorithms. 5. In section 4.2, the heuristic strategies for hyper-parameter tuning are not clearly described. The authors also say, “we only consider the model at the end of training”, but should we use the model selection strategies? 6. In section 5.2, part of Model Selection Strategies, the authors give a conclusion that seems to be wrong: “only the JUMBOT and SND pair performed reasonably well with respect to the JUMBOT and ORACLE pair on both datasets.” In Table 6, the JUMBOT and SND pair performs worse than the JUMBOT and ORACLE pair by a large margin. For instance, on OFFICE-HOME, the JUMBOT and SND pair reaches an accuracy of 72.29, while the JUMBOT and ORACLE pair achieves 77.15.
4. Many abbreviations lack definition and cause confusion. ‘AR’ in Table 5 stands for domain adaptation tasks and algorithms.
NIPS_2020_1477
NIPS_2020
1. I think it is a bit overstated (Line 10 and Line 673) to use the term \epsilon-approximate stationary point of J -- there is still function approximation error as in Theorem 4.5. I think the existence of this function approximation error should be explicitly acknowledged whenever the conclusion about sample complexity is stated. Otherwise readers may have the impression that compatible features (Konda, 2002, Sutton et al 2000) are used to deal with these errors, which is not the case. 2. As shown by Konda (2002) and Sutton et al (2000), compatible features are useful tools to address the function approximation error of the critic. I'm wondering if it's possible to introduce compatible features and a TD(1) critic in the finite sample complexity analysis in this paper to eliminate the \epsilon_app term. 3. I feel the analysis in the paper depends heavily on the property of the stationary distribution (e.g., Line 757). I'm wondering if it's possible to conduct a similar analysis for the discounted setting (instead of the average reward setting). Although a discounted problem can be solved by methods for the average reward problem (e.g., discarding each transition w.p. 1 - \gamma, see Konda 2002), solving the discounted problem directly is more common in the RL community. It would be beneficial to have a discussion w.r.t. the discounted objective. 4. Although using the advantage instead of the q value is more common in practice, I'm wondering if there is any other technical consideration for conducting the analysis with the advantage instead of the q value. 5. The assumption about maximum eigenvalues in Line 215 seems artificial. I can understand that this assumption, as well as the projection in Line 8 of Algorithm 1, is mainly used to ensure the boundedness of the critic. However, as in Line 219, R_w indeed depends on \lambda, which we do not know in practice. So it means we cannot implement the exact Algorithm 1 in practice. Instead of using this assumption and the projection, is it possible to use regularization (e.g., ridge) for the critic to ensure it's bounded, as done in the asymptotic analysis in Zhang et al. (2020)? Also, Line 216 is a bit misleading. Only the first half (negative definiteness) is used to ensure solvability. But as far as I know, in the policy evaluation setting, we do not need the second half (maximum eigenvalue). 6. Some typos: Line 463 should include \epsilon_app and replace the first + with \leq. \epsilon_app (the last term of Line 587) is missing in Lines 585 and 586. There shouldn't be (1 - \gamma) in Line 589. In Line 618, there should be no need to introduce the summation from k=0 to t - \tau_t, as the summation from k=\tau_t to t is still used in Line 624. In Line 625, it should be \tau_t instead of \tau. In Line 640, I personally think it's not proper to cite [25] (the S & B book) -- that book covers too much. Referring to the definition of w^* would be easier to follow. In Line 658, it should be ||z_k||^2. In Line 672, \epsilon_app is missing. In Line 692, it should be E[....] = 0. In Line 708, there shouldn't be \theta_1, \theta_2, \eta_1, \eta_2. In Line 774, I think the expectation is missing on the LHS. References: Konda, V. R. Actor-critic algorithms. PhD thesis, Massachusetts Institute of Technology, 2002. Zhang, S., Liu, B., Yao, H., & Whiteson, S. Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation. ICML 2020.
4. Although using the advantage instead of the q value is more common in practice, I'm wondering if there is any other technical consideration for conducting the analysis with the advantage instead of the q value.
jPrl18r4RA
EMNLP_2023
1. The setting of Unsupervised Online Adaptation is a little bit strange. As described in Sec 3.1, the model requires a training set, including documents, queries and labels. It seems that the adaptation process is NOT "Unsupervised" because the training set also requires annotations. 2. The problem that this paper focuses on may be unrealistic. Figure 8 shows that adapting to test documents leads to performance degradation on unrelated queries. In practice, we expect to update the knowledge of Large Language Models without affecting the performance on general tasks. Besides, existing large-scale QA systems, e.g. GPT-4, show strong In-Context Learning abilities. In other words, the model can reason about new documents and answer questions that have never been seen before.
1. The setting of Unsupervised Online Adaptation is a little bit strange. As described in Sec 3.1, the model requires a training set, including documents, queries and labels. It seems that the adaptation process is NOT "Unsupervised" because the training set also requires annotations.
ICLR_2022_1923
ICLR_2022
Weakness: 1. The novelty of this paper is limited. First, the analysis of the vertex-level imbalance problem is not new; it is a reformulation of the observations in previous works [Rendle and Freudenthaler, 2014; Ding et al., 2019]. Second, the designed negative sampler uses rejection sampling to increase the chance of popular items, which is similar to the one proposed in PRIS [Lian et al., 2020]. 2. The paper overclaims its ability to debias sampling. The “debiased” term in the paper title is confusing. 3. The methodology detail is unclear in Sec. 4.2. The proposed design that improves sampling efficiency seems interesting, but the corresponding description is hard to follow given the limited space. 4. The space complexity of the proposed VINS should also be analyzed and compared in the empirical studies, given that each (u, i) corresponds to a buffer_{ui}. 5. The experimental results are not convincing enough to demonstrate the superiority of VINS in effectiveness and efficiency. - For effectiveness, the performance comparison in Table 1 is unfair. VINS sets different sample weights W_{ui} in the training process, while most compared baselines like DNS, AOBPR, SA, PRIS set all sample weights to 1. - For efficiency, Table 2 should also include the theoretical analysis for comparison.
- For effectiveness, the performance comparison in Table 1 is unfair. VINS sets different sample weights W_{ui} in the training process, while most compared baselines like DNS, AOBPR, SA, PRIS set all sample weights to 1.
NIPS_2019_1366
NIPS_2019
Weakness: - Although the method discussed in the paper can be applied to general MDPs, the paper is limited to navigation problems. Combining RL and planning has already been discussed in PRM-RL [1]. It would be interesting to see whether such algorithms can be applied to more general tasks. - The paper has shown that a pure RL algorithm (HER) fails to generalize to distant goals, but it doesn't discuss why HER fails and why planning can solve the problem that HER can't. Ideally, if the neural networks are large enough and are trained long enough, Q-learning should converge to a not-so-bad policy. It would be better if the authors could discuss the advantages of planning over pure Q-learning. - The time complexity will be too high if the replay buffer is too large.
- The time complexity will be too high if the replay buffer is too large. [1] PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning
NIPS_2022_1807
NIPS_2022
Weakness: 1. The authors should provide more description of the wavelet transforms in this paper. It is hard for me to understand the major idea of this paper before learning some necessary background on wavelet whitening, wavelet coefficients, and so on. 2. It would be better for the authors to show the performance of accelerating SGMs by including some other baselines from a different perspective, such as those “optimizing the discretization schedule or by modifying the original SGM formulation” [16, 15, 23, 46, 36, 31, 37, 20, 10, 25, 35, 45].
2. It would be better for the authors to show the performance of accelerating SGMs by including some other baselines from a different perspective, such as those “optimizing the discretization schedule or by modifying the original SGM formulation” [16, 15, 23, 46, 36, 31, 37, 20, 10, 25, 35, 45].
ICLR_2021_243
ICLR_2021
Weakness: 1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments on these tricks to further validate the model performance. 2. The model involves many hyperparameters. Thus, the selection of the hyperparameters in the paper needs further explanation. 3. A brief conclusion of the article and a summary of this paper's contributions need to be provided. 4. Approaches that leverage noisy-label regularization and multi-label co-regularization were not reviewed or compared in this paper.
3. A brief conclusion of the article and a summary of this paper's contributions need to be provided.
ICLR_2022_1838
ICLR_2022
1. When introducing the theoretical results, the authors should make a detailed comparison with the existing cross-entropy loss results. The current presentation does not reflect the advantages of the square loss. 2. The synthetic experiment in the non-separable case seems problematic. Considering the nonlinear expressive ability of neural networks, how can the data distribution illustrated in Figure 1 be non-separable for the network model? 3. This paper claims that loss functions like the hinge loss don’t provide reliable information on the prediction confidence. In this regard, there is a lack of references to some relevant literature. [Gao, 2013] gives a detailed analysis of the advantages and disadvantages of the entire margin distribution versus the minimum margin. Based on this, [Lyu, 2018] designed a square-type margin distribution loss to improve the generalization ability of DNNs. [Gao, 2013] W. Gao and Z.-H. Zhou. On the doubt about margin explanation of boosting. Artificial Intelligence 203:1-18, 2013. [Lyu, 2018] Shen-Huan Lyu, Lu Wang, and Zhi-Hua Zhou. Improving Generalization of Neural Networks by Leveraging Margin Distribution. http://arxiv.org/abs/1812.10761
2. The synthetic experiment in the non-separable case seems problematic. Considering the nonlinear expressive ability of neural networks, how can the data distribution illustrated in Figure 1 be non-separable for the network model?
ICLR_2021_2717
ICLR_2021
1: The writing could be further improved, e.g., “via being matched to” should be “via matching to” in the Abstract. 2: The “Def-adv” needs to be clarified. 3: The accuracies of the target model using different defenses against the FGSM attack are not shown in Figure 1. Hence, the difference between the known attacks and the unknown attacks is unclear. 4: Even though the authors compare their framework with an advanced defense, APE-GAN, they could further compare the proposed framework with a method that is designed to defend against multiple attacks (maybe research on defending against multiple attacks is relatively rare). The results would be more meaningful if the authors could present this comparison in their paper. Overall, the paper presents an interesting study that would be useful for defending against the growing threat of malicious perturbations.
4: Even though the authors compare their framework with an advanced defense, APE-GAN, they could further compare the proposed framework with a method that is designed to defend against multiple attacks (maybe research on defending against multiple attacks is relatively rare). The results would be more meaningful if the authors could present this comparison in their paper. Overall, the paper presents an interesting study that would be useful for defending against the growing threat of malicious perturbations.
82VzAtBZGk
ICLR_2025
The problem formulation is incomplete. The paper does not define the safety properties expected from the RL agent. - Lack of theoretical results. This paper provides only empirical results to support its claims. - The results are presented in a convoluted way. In particular, the results disregard the safety violations of the agent in the first 1000 episodes. The reason for presenting the results in this way is unclear. - The presentation of the DDPG-Lag as a constrained RL algorithm is imprecise, as it uses a fixed weight for the costs, which works as simple reward engineering. In general, with a Lagrangian relaxation, this weight should be adjusted online to ensure the accumulated cost stays below a predefined threshold [1]. - The evaluation in CMDPs is inconsistent. These approaches solve different problems where a predefined accumulated cost is allowed. - Weak baseline. From the results in Figure 10, it is clear that Tabular Shield does not recognize any unsafe state-action pairs, making it an unsuitable baseline. This is not surprising considering how the state-action space is discretized. Perhaps it is necessary to finetune the discretization of this baseline. Alternatively, it would be more suitable to consider stronger baselines, such as the accumulating safety rules [2] **references** - [1] Ray, A., Achiam, J., and Amodei, D. (2019). *Benchmarking safe exploration in deep reinforcement learning*. <https://github.com/openai/safety-gym> - [2] Shperberg, S. S., Liu, B., Allievi, A., and Stone, P. (2022). A rule-based shield: Accumulating safety rules from catastrophic action effects. *CoLLAs*, 231–242. <https://proceedings.mlr.press/v199/shperberg22a.html>
- The results are presented in a convoluted way. In particular, the results disregard the safety violations of the agent in the first 1000 episodes. The reason for presenting the results in this way is unclear.
NIPS_2017_110
NIPS_2017
of this work include that it is a not-too-distant variation of prior work (see Schiratti et al, NIPS 2015), the search for hyperparameters for the prior distributions and sampling method does not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to the model's application demonstrate the greatest relative error, and the experiments are not described with adequate detail. This last issue is particularly important as the rupture time is what clinicians would be using to determine treatment choices. In the experiments with real data, a fully Bayesian approach would have been helpful to assess the uncertainty associated with the rupture times. Particularly, a probabilistic evaluation of the prospective performance is warranted if that is the setting in which the authors imagine it to be most useful. Lastly, the details of the experiment are lacking. In particular, the RECIST score is a categorical score, but the authors evaluate a numerical score, the time scale is not defined in Figure 3a, and no overall statistics are reported in the evaluation, only figures with a select set of examples, and there was no mention of out-of-sample evaluation. Specific comments: - l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters). - l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear. - l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function. - Throughout, the authors use the term constrains and should change to constraints. - l124: What is meant by the (*)? - l134: Do the authors mean m=2? - l148: known, instead of know - l156: please define \gamma_0^{***} - Figure 1: Please specify the meaning of the colors in the caption as well as the text. - l280: "Then we made it explicit" instead of "Then we have explicit it"
- l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function.
ICLR_2023_1980
ICLR_2023
Motivated by the fact that local learning can limit memory when training the network, and by the adaptive nature of each individual block, the paper extends local learning to ResNet-50 to handle large datasets. However, it seems that the results of the paper do not demonstrate the benefits of doing so. The detailed weaknesses are as follows: 1) The method proposed in the paper essentially differs very little from the traditional BP method. The main contribution of the paper is adding the stop-gradient operation between blocks, which appears to offer little innovation. 2) The local learning strategy is not superior to the BP optimization method. In addition, the model is more sensitive to each block after the model is split into blocks, especially the first block. Additional corrections are needed to improve the performance and robustness of the model, although the results are still lower than BP's. 3) Experimental results show that simultaneous blockwise training is better than sequential blockwise training. But the simultaneous blockwise training strategy cannot limit memory. 4) The blockwise training strategy relies on a special network structure like the block structure of the ResNet-50 model. 5) There are some writing errors in the paper, such as "informative informative" on page 5 and "performance" on page 1, which lacks a title.
5)There are some writing errors in the paper, such as "informative informative" on page 5 and "performance" on page 1, which lacks a title.
NIPS_2019_1377
NIPS_2019
- The proof works only under the assumption that the corresponding RNN is contractive, i.e. has no diverging directions in its eigenspace. As the authors point out (line #127), for an expansive RNN there will usually be no corresponding URNN. While this is true, I think it still imposes a strong a priori limitation on the classes of problems that could be computed by a URNN. For instance, chaotic attractors with at least one diverging eigendirection are ruled out to begin with. I think this needs further discussion. For instance, could URNNs / contractive RNNs still *efficiently* solve some of the classical long-term RNN benchmarks, like the multiplication problem? Minor stuff: - Statement on line 134: Only true for the standard sigmoid [1+exp(-x)]^-1; it depends on the maximum slope. - Theorem 4.1: It would be useful to elaborate a bit more in the main text on why this holds (intuitively, since the RNN, unlike the URNN, will converge to the nearest FP). - line 199: The difference is not fundamental but only holds for the specific class of smooth (sigmoid) and non-smooth (ReLU) activation functions considered, I think? Moreover: Is smoothness the crucial difference at all, or rather the fact that the sigmoid is truly contractive while ReLU is just non-expansive? - line 223-245: Are URNNs at all practical given the costly requirement of enforcing the unitarity of the matrix after each iteration?
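To make the maximum-slope remark concrete, here is the standard calculation for the logistic sigmoid (a generic fact stated for context; the constant changes for other sigmoidal nonlinearities):

\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,(1 - \sigma(x)) \le \frac{1}{4}, \quad \text{with equality at } x = 0.

Consequently a recurrent update of the form h_{t+1} = \sigma(W h_t + b) is a contraction whenever \frac{1}{4}\,\|W\|_2 < 1, which is why the statement only holds for the standard sigmoid and depends on the maximum slope of the particular nonlinearity used.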
- Statement on line 134: Only true for the standard sigmoid [1+exp(-x)]^-1; it depends on the maximum slope. - Theorem 4.1: It would be useful to elaborate a bit more in the main text on why this holds (intuitively, since the RNN, unlike the URNN, will converge to the nearest FP).
NIPS_2021_2304
NIPS_2021
There are four limitations: 1. In this experiment, single-dataset training and single-dataset testing cannot verify the generalization ability of the models; experiments on large-scale datasets should be conducted. 2. The efficiency of such pairwise matching is very low, making it difficult to use in practical application systems. 3. I hope to see a comparison of your model with ResNet-IBN / ResNet from FastReID, which is practical work on the person re-ID task. 4. I think the authors only use the transformer to achieve local matching; therefore, the contribution is limited.
2. The efficiency of such pairwise matching is very low, making it difficult to use in practical application systems.
rYhDcQudVI
ICLR_2024
The methodology appears incremental, building marginally upon JEM's foundation of interpreting classifiers as time-dependent EBMs. The newly introduced self-calibration loss primarily enhances this by applying a standard DSM technique to train the internal score function, thus lacking substantial novelty. The authors have judiciously selected a range of baseline candidates for semi-supervised learning and reported performance results. However, the focus on datasets like CIFAR-10 and CIFAR-100 limits the assessment of the methodology's generalizability to more diverse or complex data scenarios. * Minor weaknesses: The allocation of Figure 1 is too naive. Overall, you could have used the space of the main paper more wisely.
* Minor weaknesses: The allocation of Figure 1 is too naive. Overall, you could have used the space of the main paper more wisely.
xNn2nq5kiy
ICLR_2024
* The plan-based method requires manually designing a plan based on the ground truth in advance, which is unrealistic in real-world scenarios. The learned plan methods are not comparable to the methods with pre-defined plans based on Table 2. It indicates that the proposed method may be difficult to generalize to a new dataset without the ground truth summary. * The novelty of the proposed method is limited. The most effective part is the manually designed plan. Given that, the authors should also discuss some plan-based / outline-based prompting studies, such as [1-3]. * Some experimental details are missing. For example: * The detailed information about the proposed new datasets, e.g., the size, average document length, average summary length, average citation number, training/validation/testing split, and so on. * How are X (sentence number) and Y (words) chosen in the plan-based methods? * What generation configuration is used for LLaMA-2, ChatGPT-3.5, and GPT-4? For example, greedy decoding or sampling with temperature. * How was the human evaluation (in Section 5) conducted? The number of annotators, the inter-annotator agreement, the average compensation, the working hours, and the annotation procedure should be described in detail. * How are Avg Rating, Avg Win Rate, and Coverage in Table 8 calculated? [1] Re3: Generating Longer Stories With Recursive Reprompting and Revision [2] Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models [3] Self-planning Code Generation with Large Language Models
* The plan-based method requires manually designing a plan based on the ground truth in advance, which is unrealistic in real-world scenarios. The learned plan methods are not comparable to the methods with pre-defined plans based on Table 2. It indicates that the proposed method may be difficult to generalize to a new dataset without the ground truth summary.
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title), but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be re-written. - Diversity should be toned down. - line 108, the first “f” should be “g” in “we fixed the form of ..” - extra “.” in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance.
- The first sentence of the abstract needs to be re-written.
NIPS_2019_1312
NIPS_2019
weakness of this paper is its isolation to the plain GP regression setting. Although this is expected given the methodology used to enable tractability, I would have appreciated at least some discussion into whether any of the material presented here can be extended to the classification setting. Of course, one could argue that using Laplace or EP already implicitly takes away from the ‘exactness’ of a GP, but I think there is still scope for having an interesting discussion here (possibly akin to that provided in Cutajar et al, 2015). Likewise, any intuition of how/whether this can be extended to more advanced GP set-ups, such as multi-task, convolutional, and recurrent variations (among many others) would also be useful. -- Technical Quality/Evaluation -- The technical contributions and implementation details are easy to follow, and I did not find any faults in that respect. The experimental evaluation is also varied and convincingly shows that exact GP inference widely outperforms standard approximations. Nonetheless I have a few concerns listed below; - It appears that in nearly all experiments, the results are reported for a single held-out test set. Standard practice in most papers on GPs involves using a number of train/test splits or folds which give a more accurate illustration of the method’s performance. While I imagine that the size of the datasets considered in this work entail that this can take quite a long time to complete, I highly encourage the authors to carry out this exercise; - If I understood correctly, a kernel with a shared length scale is used in all experiments, which does not conform to the ARD kernels one would typically use in a practical setting. While several papers presenting approximate GPs have also made this assumption in the past (e.g. Hensman et al (2013)), more recent work such as the AutoGP by Krauth et al. (2017) emphasise why ARD should more consistently be used, and demonstrate how automatic differentiation frameworks minimise the performance penalty introduced by using such schemes. I believe this has to be addressed more clearly in the paper, and would also give more meaning to the commentary provided in L243-248, which otherwise feels spurious. I consider this to be crucial for painting a more complete picture in the Results section. - For a GP paper, I also find it strange that results for negative log likelihood are not reported here. While these are expected to follow a similar trend to RMSE, I would definitely include such results in a future version of the manuscript since this also has implications on uncertainty calibration. On a related note, I was surprised this paper did not have any supplementary material attached, because further experiments and more results would definitely be welcome. -- Overall recommendation -- This paper does not introduce any major theoretical elements, but the technical contributions featured here, along with the associated practical considerations, are timely in showing how modern GPU architectures can be exploited for carrying out exact GP training and inference to an extent which had not previously been considered. The paper is well written and some of the results should indeed stimulate interesting discussions on whether standard GP approximations are still worthwhile for plain regression tasks. Unfortunately, given how this paper’s worth relies heavily on the quality of the experimental results, there are a few technical issues in the paper which I believe should be addressed in a published version. 
I also think that several of the discussions featured in the paper can be expanded further - the authors should not refrain from including their own intuition on the broader implications of this work, which I feel is currently missing. -- Post-rebuttal update -- I thank the authors for their rebuttal. I had a positive opinion of the paper in my initial review, and most of my principal concerns were sufficiently addressed in the rebuttal. After reading the other reviews, I do believe that there is a common interest in having more experiments included. Coupled with my other suggested changes to the current experimental set-up, I think there is still some work to be done in this respect. This also coincides with my wish to see more of your own insights included in the paper, which I think will steer and hopefully also encourage more discussion on this interesting dilemma on where to invest computational resources. Nevertheless, I do not expect such additional experiments to majorly alter the primary ‘storyline’ of the paper, which is why I’m not lowering my current evaluation of the paper. Due to the limited novelty of the technical aspects, I am likewise not inclined to raise my score either, but I think this is a good paper nonetheless. Irrespective of whether this paper is ultimately accepted or not, I definitely hope to see an updated version containing an extended experimental evaluation and discussion.
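To make the ARD remark concrete, the two kernel choices being contrasted are the standard ones (generic definitions, not taken from the paper): a shared-lengthscale RBF kernel

k(x, x') = \sigma_f^2 \exp\left(-\frac{1}{2\ell^2} \sum_{d=1}^{D} (x_d - x'_d)^2\right),

versus the ARD variant with one lengthscale per input dimension,

k(x, x') = \sigma_f^2 \exp\left(-\frac{1}{2} \sum_{d=1}^{D} \frac{(x_d - x'_d)^2}{\ell_d^2}\right).

The ARD form adds only D - 1 extra hyperparameters but lets irrelevant dimensions be effectively switched off by large \ell_d, which is why the requested comparison matters for how representative the reported results are.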
- It appears that in nearly all experiments, the results are reported for a single held-out test set. Standard practice in most papers on GPs involves using a number of train/test splits or folds which give a more accurate illustration of the method’s performance. While I imagine that the size of the datasets considered in this work entail that this can take quite a long time to complete, I highly encourage the authors to carry out this exercise;
NIPS_2020_335
NIPS_2020
- The paper reads too much like LTF-V1++, and at some points assumes too much familiarity of the reader with LTF-V1. Since this method is not well known, I wish the paper were a bit more pedagogical/self-contained. - The method seems more involved than it needs to be. One would suspect that there is an underlying, simpler principle that is propelling the quality gains.
- The method seems more involved than it needs to be. One would suspect that there is an underlying, simpler principle that is propelling the quality gains.
ICLR_2022_3330
ICLR_2022
1) One very serious problem is that this paper is full of grammatical errors. There are too many, and many of them could be detected and corrected by a grammar checker. I only list some of them here to justify my observations, not all of them, because I don’t want to proofread the authors’ paper. Page 1, learned,, Page 2 and Kurakin et al. (2018) starts Page 2, MI-FGSM, which integrate Page 2, several run-on sentences in the section Gradient Optimization Based Attack Page 2, which is divide Page 2 can removes Page 3 is extend Page 4 mollifer Page 5 hyper-parameters .. is Page 8 with SP-MI-FGSM should be SE-MI-FGSM? Page 8 via SP-MI-FGSM should be SE-MI-FGSM? Page 9, overcomes these two feedbacks. I would like to emphasize once again that this list is far from complete. 2) Although this paper has only 6 equations and several of them are copied from previous papers, the authors made a mistake. In Eq. 4, when i=N-1, alpha = l+(N-1)*(r-l), which is not r and differs from what is described above Eq. 4. 3) The organization of this paper is also problematic. The authors reviewed other input transformation methods in the method section. They should be put in Section 2. 4) In the comparison, the authors limit their baselines to input transformation methods and only compare with VR, DI and SI. However, they intentionally/unintentionally ignore the state-of-the-art LinBP and some other methods. As far as I know, LinBP is currently the best transfer method. 5) The authors only evaluate their method with epsilon=16. Although they follow Wang & He (2021), it does not give a complete picture. More epsilon values are expected. 6) Adding a method on top of other methods to improve transferability is good but cannot be considered a significant contribution. 7) In addition to untargeted attacks, targeted attacks, which are more challenging, should be considered. 8) One problem in the experiments is that some images are incorrectly classified by the models. However, the authors do not say this clearly. They only say that the images are “mostly classified correctly by the evaluation models”. How do these images impact the experimental results? They will naturally provide better numbers for all methods. It means that the attack success accuracy should be lower than the one reported.
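To spell out the arithmetic behind point 2): the exact statement of the paper's Eq. 4 is not reproduced in this review, but if the N coefficients are meant to sweep from l to r, the usual uniform grid is

\alpha_i = l + \frac{i}{N-1}(r - l), \quad i = 0, \dots, N-1, \qquad \text{so that } \alpha_{N-1} = r,

whereas dropping the 1/(N-1) factor gives \alpha_{N-1} = l + (N-1)(r - l), which equals r only when N = 2. This is the inconsistency the comment above points to.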
6) Adding a method on top of other methods to improve transferability is good but cannot be considered a significant contribution.
NIPS_2018_15
NIPS_2018
- The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: Overall, I think this is a really good paper. While some parts could be done a bit more principled and perhaps simpler, I think the paper makes a good contribution as it stands and may inspire a lot of interesting future work. My main concern is the comparison with state-of-the-art deep architectures, where I would like the authors to perform a better control (see below), the results of which may undermine their main claim to some extent. Details: - The comparison with state-of-the-art deep architectures seems a bit unfair. These architectures are designed for dealing with natural images and therefore have an order of magnitude more feature maps per layer, which are probably not necessary for the simple image statistics in the Pathfinder challenge. However, this difference alone increases the number of parameters by two orders of magnitude compared with hGRU or smaller CNNs. I suspect that using the same architectures with smaller number of feature maps per layer would bring the number of parameters much closer to the hGRU model without sacrificing performance on the Pathfinder task. In the author response, I would like to see the numbers for this control at least on the ResNet-152 or one of the image-to-image models. The hGRU architecture seems very ad-hoc. - It is not quite clear to me what is the feature that makes the difference between GRU and hGRU. Is it the two steps, the sharing of the weights W, the additional constants that are introduced everywhere and in each iteration (eta_t). I would have hoped for a more systematic exploration of these features. - Why are the gain and mix where they are? E.g. why is there no gain going from H^(1) to \tilde H^(2)? - I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that? - Why are both H^(1) and C^(2) multiplied by kappa in Eq. (10)? - Are alpha, mu, beta, kappa, omega constrained to be positive? Otherwise the minus and plus signs in Eqs. (7) and (10) are arbitrary, since some of these parameters could be negative and invert the sign. - The interpretation of excitatory and inhibitory horizontal connections is a bit odd. The same kernel (W) is applied twice (but on different hidden states). Once the result is subtracted and once it's added (but see the question above whether this interpretation even makes sense). Can the authors explain the logic behind this approach? Wouldn't it be much cleaner and make more sense to learn both an excitatory and an inhibitory kernel and enforce positive and negative weights, respectively? - The claim that the non-linear horizontal interactions are necessary does not appear to be supported by the experimental results: the nonlinear lesion performs only marginally worse than the full model. - I do not understand what insights the eigenconnectivity analysis provides. It shows a different model (trained on BSDS500 rather than Pathfinder) for which we have no clue how it performs on the task and the authors do not comment on what's the interpretation of the model trained on Pathfinder not showing these same patterns. 
Also, it's not clear to me where the authors see the "association field, with collinear excitation and orthogonal suppression." For that, we would have to know the preferred orientation of a feature and then look at its incoming horizontal weights. If that is what Fig. 4a shows, it needs to be explained better.
- The hGRU architecture seems pretty ad-hoc and not very well motivated.
ICLR_2023_1214
ICLR_2023
As the authors note, it seems the method still requires a few tweaks to work well empirically. For example, we need to omit the log of the true rewards and scale the KL term in the policy objective to 0.1. While the authors provide a brief intuition on why those modifications are needed, I think the authors should provide a more concrete analysis (e.g., empirical results) of what problems the original formulation has and how the modifications fix them. Also, it would be better if the authors provided ablation results on those modifications. For example, does the performance drop drastically if the scale of the KL term changes (to 0.05, 0.2, 0.5, ...)? The compute comparison vs. REDQ in Figure 3 seems misleading. First, less runtime does not necessarily mean less computational cost. Second, if the authors used the official implementation of REDQ [1], it should be emphasized that this implementation is very inefficient in terms of runtime. In detail, the implementation feeds forward through each Q-network one at a time, while this feed-forward process is embarrassingly parallelizable. The runtime of REDQ will drop significantly if this process is parallelized. The ablation experiments seem to show that the value term for the encoder is not necessary. It would be better to provide an explanation of this result. Some of the equations seem to have typos. 1) In equation (4), the first product and the second product have exactly the same form. 2) In Algorithm 1, Line 8, shouldn't we use s_n instead of s_t? Questions: I am curious about the asymptotic performance of the proposed method. If possible, can the authors provide average-return results with more env steps? [1] https://github.com/watchernyu/REDQ
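To illustrate the parallelization remark about the REDQ ensemble, here is a minimal PyTorch sketch of the general pattern; the shapes, layer sizes, and variable names are illustrative only, and this is not REDQ's actual code:

```python
import torch

# Illustrative sizes: N ensemble members, batch B, input dim D, hidden width H.
N, B, D, H = 10, 256, 20, 64
nets = [torch.nn.Sequential(torch.nn.Linear(D, H), torch.nn.ReLU(), torch.nn.Linear(H, 1))
        for _ in range(N)]
x = torch.randn(B, D)

# Sequential pattern: one Python-level forward pass per Q-network.
q_seq = torch.stack([net(x) for net in nets], dim=0)                 # (N, B, 1)

# Batched pattern: stack the per-member weights once, then evaluate all members
# with a single batched contraction per layer.
W1 = torch.stack([net[0].weight for net in nets])                    # (N, H, D)
b1 = torch.stack([net[0].bias for net in nets])                      # (N, H)
W2 = torch.stack([net[2].weight for net in nets])                    # (N, 1, H)
b2 = torch.stack([net[2].bias for net in nets])                      # (N, 1)

h = torch.relu(torch.einsum('nhd,bd->nbh', W1, x) + b1[:, None, :])  # (N, B, H)
q_bat = torch.einsum('noh,nbh->nbo', W2, h) + b2[:, None, :]         # (N, B, 1)

assert torch.allclose(q_seq, q_bat, atol=1e-5)
```

The two paths compute the same values; the batched one simply replaces the Python loop over members with one kernel launch per layer, which is the kind of rewrite that would make the runtime comparison in Figure 3 more informative.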
2) In Algorithm 1, Line 8, shouldn't we use s_n instead of s_t? Questions: I am curious about the asymptotic performance of the proposed method. If possible, can the authors provide average-return results with more env steps? [1] https://github.com/watchernyu/REDQ
ICLR_2023_3705
ICLR_2023
1) The main assumption is borrowed from other works but is actually rarely used in the optimization field. Moreover, the benefits of this assumption are not well investigated. For example, a) why is it more reasonable than the previous one? b) why can it add the gradient-norm term L_1 ||\nabla f(w_1)|| in Eqn (3), and why do we not add other terms? It should be mentioned that a milder condition does not mean it is better, since it may not reflect the truth. For me, problem b) is especially important in this work, since the authors do not explain and investigate it well. 2) Results in Theorem 1 show that Adam actually does not converge, since there is a constant term O(D_0^{0.5}\delta) in Eqn. (5). This is not intuitive; the authors claim it is because the learning rate may not diminish. But many previous works, e.g. [ref 1], can prove that Adam-type algorithms converge even when using a constant learning rate. Of course, they use the standard smoothness condition. But the (L0,L1)-smoothness condition should not cause this kind of convergence issue, since for nonconvex problems, in most cases, we only need the learning rate to be small and do not care whether it diminishes to zero. [ref 1] Dongruo Zhou, Jinghui Chen, et al. On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization 3) It is not clear what the challenges are when the authors analyze Adam under the (L0,L1)-smoothness condition. It seems one can directly apply the standard analysis under the (L0,L1)-smoothness condition. So it is better to explain the challenges, especially the difference between this analysis and that of Zhang et al. 4) Under the same assumption, the authors use examples to show the advantage of Adam over GD and SGD. This is good. But one issue is whether the example is reasonable, i.e., whether it shares similar properties with practical problems, especially for networks. This is important since both SGD and Adam are widely used in the deep learning field. 5) In the work, when comparing SGD and Adam, the authors explain that the advantage of Adam comes from the cases where the local smoothness varies drastically across the domain. It is not very clear to me why Adam could better handle this case. Maybe one intuitive example could help. 6) The most important problem is that this work does not provide new insights, since it is well known that the second-order moment could help the convergence of Adam. This work does not provide any insights beyond this point and also does not give any practical solution for further improvement.
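For context on the assumption being questioned, the (L0, L1)-smoothness condition as it is usually stated (following Zhang et al., 2020, on gradient clipping) is

\|\nabla^2 f(w)\| \le L_0 + L_1 \|\nabla f(w)\|,

which recovers standard L-smoothness when L_1 = 0. The reviewed paper's Eqn (3) appears to use a gradient-difference form of this bound involving L_1 ||\nabla f(w_1)||; that exact form is not reproduced here.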
3) It is not clear what the challenges are when the authors analyze Adam under the (L0,L1)-smoothness condition. It seems one can directly apply the standard analysis under the (L0,L1)-smoothness condition. So it is better to explain the challenges, especially the difference between this analysis and that of Zhang et al.
NIPS_2022_285
NIPS_2022
Terminology: Introduction l. 24-26 "pixels near the peripheral of the object of interest can generally be challenging, but not relevant to topology." I think this statement is problematic. Consider the inverse, e.g. in the case of a surface or vessel, where a foreground pixel changes to background. Such a scenario would immediately lead to a topology mismatch (Betti error 1). Terminology: "topologically critical location" --> I find this terminology to be not optimally chosen. I agree that the warping concept appears to help with identifying pixels which may close loops or fill holes. However, considering the warping, I do not see a guarantee that such "locations" (as in the exact location), which I understand to refer to individual pixels or groups of pixels, are indeed part of the real foreground, nor are these locations unique. A slightly varying warping may propose a set of different pixels. The identified locations are more likely to be relevant to topological errors. --> This statement should be statistically supported. Compared to what exactly? Does this rely on the point estimate for any pixel? Or is it given a particularly trained network? Theorem 1: The presentation of a well-known definition from Kong et al. is trivial and could be presented in a different way. Experimentation, lack of implementation details: In Table 2 and a dedicated section, the authors show an ablation study on the influence of lambda on the results. Lambda is a linear parameter, weighting the contribution of the new loss to the overall loss. Similarly, the studied baseline methods, e.g. TopoNet [24], DMT [25], and clDice [42], have a loss weighting parameter. It would be important to understand how and whether the parameters of the baselines were chosen and experimented with. (I understand that the authors cannot run ablation studies for all baselines etc.) However, this is important information for understanding the results in Table 1. Terminology: l. 34 "to force the neural network to memorize them" --> I would tone down this statement; in my understanding, the neural network does not memorize an exact "critical point" as such in TopoNet [24]. Minor: I find the method section to be a bit wordy; it could be compressed to the essential definitions. There are several grammatical errors; please double-check these with a focus on plurals and articles. E.g. l. 271 "This lemma is naturally generalized to 3D case." l. 52 "language of topology" I find this to be an imprecise definition or formulation. Note: After rebuttal and discussion I increased the rating to 5.
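For reference on the terminology used here (standard definitions from this literature, not specific to the paper): \beta_0 counts connected components, \beta_1 counts independent loops, and \beta_2 counts cavities in 3D; the Betti error between a prediction P and ground truth G is then typically reported as \sum_k |\beta_k(P) - \beta_k(G)|, often averaged over patches. Flipping a single foreground pixel to background on a thin vessel can split one component or open a loop, changing \beta_0 or \beta_1 by one, which is the "Betti error 1" referred to above.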
34 "to force the neural network to memorize them" --> I would tone down this statement, in my understanding, the neural network does not memorize an exact "critical point" as such in TopoNet [24]. Minor: I find the method section to be a bit wordy, it could be compressed on the essential definitions. There exist several grammatical errors, please double-check these with a focus on plurals and articles. E.g. l.
ICLR_2022_2791
ICLR_2022
The technical contribution of this paper is limited and is far from that of a decent ICLR paper. In particular, all kinds of evaluations, i.e., the single-dataset setting (most existing person re-ID methods), the cross-dataset setting [1, 2, 3] and the live re-ID setting [4], have been discussed in previous works. This paper simply makes a systematic discussion. For the cross-dataset setting, this paper only evaluates standard person re-ID methods that train on one dataset and evaluate on another, but fails to evaluate the typical cross-dataset person re-ID methods, e.g., [1, 2, 3]. For the live re-ID setting, this paper does not compare with the particular live re-ID baseline [4]. Though some conclusions are drawn from the experiments, the novelty is limited. For example, 1) most person re-ID methods build on the basis of a pedestrian detector (two-step methods), and there are also end-to-end methods that combine detection and re-ID [5]; 2) It is common that distribution bias exists between datasets. It is hard to find a standard re-ID approach and a training dataset to address the problem unless the dataset is large enough to cover sufficiently many scenes. 3) Cross-dataset methods try to mitigate the generalization problem. [1] Hu, Yang, Dong Yi, Shengcai Liao, Zhen Lei, and Stan Z. Li. "Cross dataset person re-identification." In Asian Conference on Computer Vision, pp. 650-664. Springer, Cham, 2014. [2] Lv, Jianming, Weihang Chen, Qing Li, and Can Yang. "Unsupervised cross-dataset person re-identification by transfer learning of spatial-temporal patterns." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7948-7956. 2018. [3] Li, Yu-Jhe, Ci-Siang Lin, Yan-Bo Lin, and Yu-Chiang Frank Wang. "Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7919-7929. 2019. [4] Sumari, Felix O., Luigy Machaca, Jose Huaman, Esteban WG Clua, and Joris Guérin. "Towards practical implementations of person re-identification from full video frames." Pattern Recognition Letters 138 (2020): 513-519. [5] Xiao, Tong, Shuang Li, Bochao Wang, Liang Lin, and Xiaogang Wang. "Joint detection and identification feature learning for person search." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3415-3424. 2017.
1) most person re-ID methods build on the basis of a pedestrian detector (two-step methods), and there are also end-to-end methods that combine detection and re-ID [5];
ICLR_2021_1740
ICLR_2021
are in its clarity and the experimental part. Strong points Novelty: The paper provides a novel approach for estimating the likelihood p(class | image), by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn’t verify the details of the proofs, the approach seems technically correct. Note that I was not convinced that s->y (see weakness). Weak points Experiments and Reproducibility: The experiments show some signal, but are not thorough enough: • shifted-MNIST: it is not clear why shift=0 is much better than shift ~ N(0, σ^2), since both cases incorporate a domain shift. • It would be useful to show the performance of the model and baselines on test samples from the observational (in-)distribution. • Missing details about the evaluation split for shifted-MNIST: Did the experiments use a validation set for hyper-param search with shifted-MNIST and ImageCLEF? Was it based on in-distribution data or OOD data? • It would be useful to provide an ablation study, since the approach has a lot of "moving parts". • It would be useful to have an experiment on an additional dataset, maybe more controlled than ImageCLEF, but less artificial than shifted-MNIST. • What were the ranges used for hyper-param search? What was the search protocol? Clarity: • The parts describing the method are hard to follow; it would be useful to improve their clarity. • It would be beneficial to explicitly state which are the learned parametrized distributions, and how inference is applied with them. • What makes the VAE inference mappings (x->s,v) stable to domain shift? E.g. [1] showed that correlated latent properties in VAEs are not robust to such domain shifts. • What makes v distinct from s? Is it because y only depends on s? • Does the approach use any information on the labels of the domain? Correctness: I was not convinced about the causal relation s->y, i.e., that the semantic concept causes the label, independently of the image. I do agree that there is a semantic concept (e.g. s) that causes the image. But then, as explained by [Arjovsky 2019], the labelling process is caused by the image, i.e., s->image->y, and not as argued by the paper. The way I see it, it is like a communication channel: y_tx -> s -> image -> y_rx. Could the authors elaborate on how the model would change if s->y is replaced by y_tx->s? Other comments: • I suggest discussing [2,3,4], which learned similar stable mechanisms in images. • I am not sure about the statement that this work is the "first to identify the semantic factor and leverage causal invariance for OOD prediction", e.g., see [3, 4]. • The title may be confusing. OOD usually refers to anomaly detection, while this paper relates to domain generalization and domain adaptation. • It would be useful to clarify that the approach doesn't use any external semantic knowledge. • Section 3.2 - I suggest adding a first sentence to introduce what this section is about. • About the remark on page 6: (1) what is a deterministic s-v relation? (2) chairs can also appear in a workspace, and it may help to disentangle the desks from workspaces. [1] Suter et al. 2018, Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness [2] Besserve et al. 2020, Counterfactuals uncover the modular structure of deep generative models [3] Heinze-Deml et al. 2017, Conditional Variance Penalties and Domain Shift Robustness [4] Atzmon et al.
2020, A causal view of compositional zero-shot recognition EDIT: Post rebuttal I thank the authors for their reply. Although the authors answered most of my questions, I decided to keep the score as is, because I share similar concerns with R2 about the presentation, and because experiments are still lacking. Additionally, I am concerned with one of the author's replies saying All methods achieve accuracy 1 ... on the training distribution, because usually there is a trade-off between accuracy on the observational distribution versus the shifted distribution (discussed by Rothenhäusler, 2018 [Anchor regression]): Achieving perfect accuracy on the observational distribution, usually means relying on the spurious correlations. And under domain-shift scenarios, this would hinder the performance on the shifted-distribution.
• Section 3.2 - I suggest adding a first sentence to introduce what this section is about.
NIPS_2021_1604
NIPS_2021
). Weaknesses - Some parts of the paper are difficult to follow, see also Typos etc. below. - Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6]. After the Authors' Response: My weakness points have been addressed in the authors' response. Consequently, I raised my score. All unclear parts have been answered. The authors explained why the chosen baseline makes the most sense. It would be great if this is added to the final version of the paper. Questions - Do you think there is a way to test beforehand whether I(X_1, Y_1) would be lowered more than I(X_2, Y_1)? - Out of curiosity, did you consider first using Aug and then CF.CDA? Especially for the correlated palate result, it could be interesting to see if CF.CDA can now improve. - Did both CDA and MMI have the same lambda_RL (Eq 9) value? From Figure 6 it seems the biggest difference between CDA and MMI is that MMI has more discontinuous phrases/tokens. Typos, representation, etc. - Line 69: Is X_2 defined as all features of X not in X_1? Stating this explicitly would be great. - Line 88: What ideas exactly do you take from [19] and how does your approach differ? - Eq 2: Does this mean Y is a value in [0, 1] for two possible labels? Can this be extended to more labels? This should be clarified. - 262: What are the possible Y values for TripAdvisor’s location aspect? - The definitions and usage of the various variables are sometimes difficult to follow. E.g. What exactly is the definition of X_2? (see also first point above). When does X_M become X_1? Sometimes the augmented data has a superscript, sometimes it does not. In line 131 the meanings of x_1 and x_2 are reversed, which can get confusing - maybe x’_1 and x’_2 would make it easier to follow, together with a table that explains the meaning of different variables? - Section 2.3: Before line 116 mentions the change when adding the counterfactual example, it would be helpful to first state what I(X_2, Y_1) and I(X_1, Y_1) are without it. Minor points - Line 29: How is the desired relationship between input text and target labels defined? - Line 44: What is meant by the statement that the initial rationale selector is perfect? It seems that if it were perfect, no additional work would need to be done. - Line 14, 47: A brief explanation of “multi-aspect” would be helpful. - Figure 1: Subscripts s and t should be 1 and 2? - 184: Delete “the”. There is a broader impact section which discusses the limitations and dangers adequately.
- Line 44: What is meant by the statement that the initial rationale selector is perfect? It seems that if it were perfect, no additional work would need to be done.
ICLR_2021_2047
ICLR_2021
As noted below, I have concerns around the experimental results. More specifically, I feel that there is a relative lack of discussion around the (somewhat surprising) outperformance of baselines that VPBNN is aiming to approximate, and I feel that the experiments are missing what I see as key VPBNN results that otherwise leave the reader with questions. Additionally, I think the current paper would benefit from including measurements and discussion around the specifics of computational and memory costs of their method. Recommendation In general, I think this could be a great paper. However, given the above concerns, I'm currently inclined to suggest rejection of the paper in its current state. I would highly recommend that the authors push further on the noted areas! Additional comments p. 1: "The uncertainty is defined based on the posterior distribution." For more clarity it could be helpful to update this to say that the epistemic model uncertainty is represented in the prior distribution, and upon observing data, those beliefs can be updated in the form of a posterior distribution, which yields model uncertainty conditioned on observed data. p. 2: "The MC dropout requires a number of repeated feed-forward calculations with randomly sampled weight parameters in order to obtain the predictive distribution." This should be updated to indicate that in MC dropout, dropout is used (in an otherwise deterministic model) at test time with "a number of repeated feed-forward calculations" to effectively sample from the approximate posterior, but not directly via different weight samples (as in a variational BNN). With variational dropout, this ends up having a nice interpretation as a variational Bayes method, though no weight distributions are typically directly used with direct MC dropout. p. 2: Lakshminarayanan et al. (2017) presented random seed ensembles, not bootstrap ensembles (see p. 4 of their work for more info). They used the full dataset, and trained M ensemble members with different random seeds, rather than resampled data. p. 4: For variance propagation in a dropout layer with stochastic input, it's not exactly clear from the text how variance from the inputs and dropout is being combined into an output Gaussian. I believe using a Gaussian is an approximation, and while that would be fine, I think it would be informative to indicate that. The same issue comes up with local reparameterization for BNNs with parameter distributions, where they can be reparameterized exactly as output distributions (for, say, mean-field Gaussian weight dists) so long as the inputs are deterministic. Otherwise, the product of, say, two Gaussian RVs is non-Gaussian. p. 7: Figure 1 is too small. p. 7: "Estimation of ρ is possible by observing the outputs of middle layers several times under the approximate predictive distribution. The additional computation cost is still kept quite small compared to MC dropout." How exactly is ρ estimated? Is it a one-time cost regardless of data that can then be used for all predictions from the trained model? Without details, this seems like a key component that can yield arbitrary amounts of uncertainty. p. 7, 8: For the language modeling experiment, why do you think VPBNN was able to achieve lower perplexity values than MC dropout? The text generally focuses on VPBNN as an approximation to MC dropout, and yet it outperforms it. The text would greatly benefit from more discussion around this point.
p. 8: For the OOD detection experiment, I'm surprised that ρ = 0 was the only VPBNN model used, since Section 5.1 and Figure 1 indicated that it led to overconfident models. Can you include results with other settings of ρ? Moreover, from Figure 1 we see that (for that model) VPBNN with ρ = 0 qualitatively yielded the same amount of predictive variance as the Taylor approximation. However, in Table 2, we see VPBNN with ρ = 0 outperform MC dropout (with 100 or 2000 samples) and the Taylor approximation. Why do you think this is the case, particularly if the standard deviation was used as the uncertainty signal for the OOD decision? I see that "This is because the approximation accuracy of the Taylor approximation is not necessarily high as shown in Section B", but I did not find Section B or Figure 3 to be clear. I think the text would benefit from more discussion here, and from the additional experiments for ρ. Can you include a discussion of, and measurements for, FLOPs and memory usage for VPBNN? Specifically, given the discussion around efficiency and the implementation that doubles the dimensionality of the intermediates throughout the model, I believe it would be informative to have theoretical and possibly runtime measurements. Minor: p. 1: s/using the dropout/using dropout/ p. 1: s/of the language modeling/of language modeling/ p. 2: s/is the representative of/is representative of/ p. 2: s/In the deep learning/In deep learning/ p. 2: s/This relations/This relation/ p. 5: Need to define s as the sigmoid function in the LSTM cell equations.
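The variance-propagation question raised in the p. 4 comment above can be made concrete with a minimal NumPy sketch of one plausible moment-matching rule for an inverted-dropout layer applied to an independent Gaussian input. This is only an illustration of the kind of approximation the reviewer is asking the authors to spell out, not a claim about what VPBNN actually computes; the function name `dropout_moments` and the chosen constants are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_moments(mu, var, keep_prob):
    """Moment-matched mean/variance of inverted dropout, y = m * x / keep_prob,
    with m ~ Bernoulli(keep_prob) independent of a Gaussian input x ~ N(mu, var).

    E[y]   = mu
    Var[y] = var / keep_prob + (1 - keep_prob) / keep_prob * mu**2
    The matched Gaussian N(E[y], Var[y]) is only an approximation: the true
    output is a mixture of a point mass at 0 and a scaled Gaussian.
    """
    return mu, var / keep_prob + (1.0 - keep_prob) / keep_prob * mu**2

# Monte Carlo sanity check of the analytic moments.
mu, var, keep_prob = 1.5, 0.25, 0.8
x = rng.normal(mu, np.sqrt(var), size=200_000)
m = rng.binomial(1, keep_prob, size=200_000)
y = m * x / keep_prob
print(dropout_moments(mu, var, keep_prob))   # analytic:  (1.5, 0.875)
print(y.mean(), y.var())                     # empirical: approximately 1.5, 0.875
```

The matched Gaussian reproduces the first two moments, but the true output distribution is non-Gaussian, which is precisely why presenting the propagated result as a Gaussian deserves an explicit caveat in the paper.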
1: "The uncertainty is defined based on the posterior distribution." For more clarity it could be helpful to update this to say that the epistemic model uncertainty is represented in the prior distribution, and upon observing data, those beliefs can be updated in the form of a posterior distribution, which yields model uncertainty conditioned on observed data. p.
ICLR_2022_1935
ICLR_2022
Weakness: A semi-supervised feature learning baseline is missing. This is my main concern about the paper. The key argument in the paper is that feature learning and classifier learning should 1) be decoupled, 2) use random sampling and class-balanced sampling, respectively, and 3) train on all labels and on only ground-truth labels, respectively. The authors, therefore, propose a carefully designed alternate sampling strategy. However, a more straightforward strategy could be to 1) train a feature extractor (f in the paper) and a classifier (g′ in the paper) using random sampling and any semi-supervised learning method on all data, then 2) freeze the feature extractor (f) and train a new classifier (g in the paper) using class-balanced sampling on data with ground-truth labels. Compared with the alternate sampling strategy proposed in this paper, the semi-supervised feature learning baseline takes less implementation effort and is easier to combine with any semi-supervised learning method. This baseline seems to be missing from the paper. Although the naive baseline may not give the best performance, it should be compared against in order to justify the more elaborate alternate sampling strategy. References are not up-to-date. All references in the paper are from 2020 or earlier. In fact, much research progress has been made since then. For example, some recent works [1, 2] also study class-imbalanced semi-supervised learning, and a discussion of these methods is necessary. A recent survey on long-tailed learning [3] could be a useful resource to help update the related work in the paper. Minor issues: "The model is the fine-tuned on the combination of ..." -> "The model is then fine-tuned on the combination of ..." [1] Su et al., A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification, CVPR 2021 [2] Wei et al., CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning, CVPR 2021 [3] Deep Long-Tailed Learning: A Survey, arXiv 2021
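For concreteness, here is a minimal PyTorch sketch of the two-stage baseline described in the review above. The toy tensors, layer sizes, class count, and the naive pseudo-labelling loss (standing in for "any semi-supervised learning method") are illustrative assumptions, not details taken from the reviewed paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Toy stand-ins for a class-imbalanced semi-supervised dataset (sizes are illustrative).
x_lab   = torch.randn(64, 32)
y_lab   = torch.randint(0, 4, (64,))
x_unlab = torch.randn(256, 32)

feature_extractor = nn.Sequential(nn.Linear(32, 16), nn.ReLU())   # f in the review's notation
aux_classifier    = nn.Linear(16, 4)                               # g'

# Stage 1: random sampling over all data; a naive pseudo-label loss stands in for
# whichever semi-supervised objective one prefers.
opt1 = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(aux_classifier.parameters()), lr=0.1)
for _ in range(50):
    logits_lab   = aux_classifier(feature_extractor(x_lab))
    logits_unlab = aux_classifier(feature_extractor(x_unlab))
    pseudo = logits_unlab.detach().argmax(dim=1)
    loss = F.cross_entropy(logits_lab, y_lab) + 0.5 * F.cross_entropy(logits_unlab, pseudo)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: freeze f, train a fresh classifier g with class-balanced sampling
# on ground-truth labels only.
for p in feature_extractor.parameters():
    p.requires_grad_(False)
classifier = nn.Linear(16, 4)                                       # g
opt2 = torch.optim.SGD(classifier.parameters(), lr=0.1)
class_weights = 1.0 / torch.bincount(y_lab, minlength=4).float()
sampler = torch.utils.data.WeightedRandomSampler(class_weights[y_lab], num_samples=64)
loader  = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x_lab, y_lab), batch_size=16, sampler=sampler)
for _ in range(20):
    for xb, yb in loader:
        with torch.no_grad():
            feats = feature_extractor(xb)
        loss = F.cross_entropy(classifier(feats), yb)
        opt2.zero_grad(); loss.backward(); opt2.step()
```

The point of the sketch is only the structure: stage 1 trains f and g′ jointly with random sampling on all data, and stage 2 freezes f and retrains a fresh g with class-balanced sampling on ground-truth labels.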
1) train a feature extractor (f in the paper) and a classifier (g′ in the paper) using random sampling and any semi-supervised learning method on all data, then
ARR_2022_40_review
ARR_2022
- Although the authors state that components can be replaced by other models for flexibility, they did not try any change or alternative in the paper to prove the robustness of the proposed framework. - Did the authors try using BlenderBot 2.0 with incorporated knowledge instead? It would be very interesting to see how the dialogues can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear whether using more specific TOD chatbots can provide better results. - Lines 159-162: The authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear whether the authors also experimented with using domain ontologies to avoid generating placeholders in the evaluated responses. - Line 211: How many questions were created for this zero-shot intent classifier, and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rating? - Line 237: How critical was the finetuning process over the SQuAD and CommonsenseQA models? - Lines 254-257: How many templates were manually created? - Line 265: How are the future utterances used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughts) to learn the generation of the transition sentence? And is the transition sentence one taken from the list of manual templates? (In general, Section 2.2.2 is the one I found least clear.) - Merge SGD: Did the authors select the TOD dialogue randomly from those containing the same intent/topic? Did you try computing a dialogue embedding for the ODD part and selecting a TOD dialogue with a similar dialogue embedding? If not, this could be an idea to improve the quality of the dataset; it could also allow using the lexicalized version of SGD and avoid the generation of placeholders in the responses. - Line 324: How are the repeated dialogues detected? - Line 356: How, and how many, sentences are finally selected from the 120 generated sentences? - Lines 402-404: How are the additional transitions generated? Using the T5 model? How many times were the manual sentences selected vs. the paraphrased ones? - The paper "Fusing Task-Oriented and Open-Domain Dialogues in Conversational Agents" is not included in the background section, and it is important in the context of similar datasets. - The word "salesman" is probably misleading: from reading some of the generated dialogues in the appendices, it is not clear that the salesman agent is in fact selling anything. It sometimes seems that they are still doing chitchat, but on a particular topic, or asking for some action to be done (such as one performed by a smart speaker).
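To illustrate the embedding-based pairing suggested in the Merge SGD comment above, here is a minimal sketch that matches an ODD to the most similar TOD dialogue using TF-IDF vectors and cosine similarity. The toy dialogues and the choice of TF-IDF (rather than a neural dialogue embedding) are assumptions made purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each dialogue is flattened to a single string (toy examples, not from the dataset).
odd_dialogue = "i love hiking near the national park and camping on the weekend"
tod_dialogues = [
    "i need to book a hotel room in seattle for two nights",
    "can you find me a campsite near the national park for next weekend",
    "please schedule a dentist appointment for tuesday morning",
]

# Fit one vocabulary over all dialogues, then pick the TOD dialogue whose
# TF-IDF vector is closest to the ODD's vector.
vectorizer = TfidfVectorizer().fit([odd_dialogue] + tod_dialogues)
similarities = cosine_similarity(
    vectorizer.transform([odd_dialogue]), vectorizer.transform(tod_dialogues))[0]
best = int(similarities.argmax())
print(f"most similar TOD dialogue: index {best}, score {similarities[best]:.3f}")
```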
- It is not clear whether the authors also experimented with using domain ontologies to avoid generating placeholders in the evaluated responses. - Line 211: How many questions were created for this zero-shot intent classifier, and what is the accuracy of this system?