paper_id (stringlengths 10–19) | venue (stringclasses 14 values) | focused_review (stringlengths 7–8.09k) | point (stringlengths 54–690)
---|---|---|---|
NIPS_2020_897 | NIPS_2020 | 1. Not clear how this method can be applied outside of fully cooperative settings, as the authors claim. The authors should justify this claim theoretically or empirically, or else remove it. 2. Missing some citations to set this in context of other MARL work e.g. recent papers on self-play and population-play with respect to exploration and coordination (such as https://arxiv.org/abs/1806.10071, https://arxiv.org/abs/1812.07019). 3. The analysis is somewhat "circumstantial", need more detailed experiments to be a convincing argument in this section. For example the claim in lines 235 - 236 seems to require further evidence to be completely convincing. 4. The link with self-play could be more clearly drawn out. As far as I can tell, the advantage of this over self-play is precisely the different initialization of the separate agents. It is surprising and important that this has such a significant effect, and could potentially spur a meta-learning investigation into optimal initialization for SEAC in future work. | 2. Missing some citations to set this in context of other MARL work e.g. recent papers on self-play and population-play with respect to exploration and coordination (such as https://arxiv.org/abs/1806.10071, https://arxiv.org/abs/1812.07019). |
ICLR_2023_1645 | ICLR_2023 | 1. Can this method be used on both SEEG and EEG simultaneously? 2. It would be better to compare with other self-supervised learning methods that are not based on contrastive learning. | 2. It would be better to compare with other self-supervised learning methods that are not based on contrastive learning. |
t8cBsT9mcg | ICLR_2024 | 1. The abstract should be expanded to encompass key concepts that effectively summarize the paper's contributions. In the introduction, the authors emphasize the significance of interpretability and the challenges it poses in achieving high accuracy. By including these vital points in the abstract, the paper can provide a more comprehensive overview of its content and contributions.
2. Regarding the abstention process, it appears to be based on a prediction probability threshold: if the probability is lower than the threshold, the prediction is abstained. How does this differ from a decision threshold used by the models? Can the authors clarify that?
3. In the results and discussion section, there's limited exploration and commentary on the impact of the solution on system accuracy, as seen in Table 2. Notably, the confirmation budget appears to have a limited effect on datasets like "noisyconcepts25" and "warbler" compared to others. The paper can delve into the reasons behind this discrepancy.
4. In real-world applications of this solution, questions about the ease of concept approval and handling conflicting user feedback arise. While these aspects may be considered out of scope, addressing them would be beneficial for evaluating the practicality of implementing this approach in real-world scenarios. This is particularly important when considering the potential challenges of user feedback and conflicting inputs in such applications.
Minor things:
Page 4, confirm. we —> replace . with comma
Section 4.2, Table Table 2 —> Table 2
Shouldn’t Table 2 rather be labelled as Figure 2? | 2. Regarding the abstention process, it appears to be based on a prediction probability threshold: if the probability is lower than the threshold, the prediction is abstained. How does this differ from a decision threshold used by the models? Can the authors clarify that? |
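A minimal sketch of the confidence-threshold abstention rule the point above asks about. The function name and the threshold value are hypothetical, and the paper's actual abstention mechanism (and how it relates to an ordinary decision threshold) may differ:

```python
import numpy as np

def predict_or_abstain(probs: np.ndarray, tau: float = 0.7):
    """Return the argmax class if its probability clears tau; otherwise abstain (return None)."""
    top = int(np.argmax(probs))
    return top if probs[top] >= tau else None

print(predict_or_abstain(np.array([0.10, 0.85, 0.05])))  # -> 1 (confident enough to predict)
print(predict_or_abstain(np.array([0.40, 0.35, 0.25])))  # -> None (abstains)
```

One way to read the distinction being questioned: a decision threshold chooses between classes, whereas an abstention threshold additionally allows returning no prediction at all.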
NIPS_2021_1743 | NIPS_2021 | 1. While the paper claims the importance of the language modeling capability of pre-trained models, the authors did not conduct experiments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SQuAD in section 5.3 cannot really reflect the capability of language modeling. The authors may consider including tasks like language modeling, machine translation, or text summarization to strengthen this part, as this is one of the main motivations of COCO-LM. 2. The analysis of SCL in section 5.2 regarding few-shot ability is not convincing. The paper claims that a more regularized representation space induced by SCL may result in better generalization ability in few-shot scenarios. However, the results in Figure 7(c) and (d) do not meet the expectation that COCO-LM would achieve larger improvements with fewer labels and that the improvements would gradually disappear with more labels. Besides, the authors may check whether COCO-LM brings benefits to sentence retrieval tasks with the learned anisotropic text representations. 3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to other approaches, for example RoBERTa, ELECTRA, and DeBERTa, which have similar sizes to COCO-LM. If the authors claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above related works.
Questions for the Authors 1. In the experimental setup, why did the authors switch the types of BPE vocabulary, i.e., uncased and cased? Will the change of BPE cause variance in performance? 2. In Table 2, it looks like COCO-LM especially affects the performance on CoLA and RTE and hence the final performance. Can the authors provide some explanation of how the proposed pre-training tasks affect these two different GLUE tasks? 3. In section 5.1, the authors say that the benefits of the stop-gradient operation are more on stability. What stability, the training process? If so, are there any learning curves of COCO-LM with and without stop gradient during pre-training to support this claim? 4. In section 5.2, the term “Data Argumentation” seems wrong. Did the authors mean data augmentation?
Typos 1. Check the term “Argumentation” in line 164, 252, and 314. 2. Line 283, “a unbalanced task”, should be “an unbalanced task”. 3. Line 326, “contrast pairs”, should be “contrastive pairs” to be consistent throughout the paper? | 3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to other approaches, for example RoBERTa, ELECTRA, and DeBERTa, which have similar sizes to COCO-LM. If the authors claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above related works. Questions for the Authors 1. In the experimental setup, why did the authors switch the types of BPE vocabulary, i.e., uncased and cased? Will the change of BPE cause variance in performance? |
NIPS_2018_865 | NIPS_2018 | weakness of this paper are listed: 1) The proposed method is very similar to Squeeze-and-Excitation Networks [1], but there is no comparison to the related work quantitatively. 2) There is only the results on image classification task. However, one of success for deep learning is that it allows people leverage pretrained representation. To show the effectiveness of this approach that learns better representation, more tasks are needed, such as semantic segmentation. Especially, the key idea of this method is on the context propagation, and context information plays an important role in semantic segmentation, and thus it is important to know. 3) GS module is used to propagate the context information over different spatial locations. Is the effective receptive field improved, which can be computed from [2]? It is interesting to know how the effective receptive field changed after applying GS module. 4) The analysis from line 128 to 149 is not convincing enough. From the histogram as shown in Fig 3, the GS-P-50 model has smaller class selectivity score, which means GS-P-50 shares more features and ResNet-50 learns more class specific features. And authors hypothesize that additional context may allow the network to reduce its dependency. What is the reason such an observation can indicate GS-P-50 learns better representation? Reference: [1] J. Hu, L. Shen and G. Sun, Squeeze-and-Excitation Networks, CVPR, 2018. [2] W. Luo et al., Understanding the Effective Receptive Field in Deep Convolutional Neural Networks, NIPS, 2016. | 4) The analysis from line 128 to 149 is not convincing enough. From the histogram as shown in Fig 3, the GS-P-50 model has smaller class selectivity score, which means GS-P-50 shares more features and ResNet-50 learns more class specific features. And authors hypothesize that additional context may allow the network to reduce its dependency. What is the reason such an observation can indicate GS-P-50 learns better representation? Reference: [1] J. Hu, L. Shen and G. Sun, Squeeze-and-Excitation Networks, CVPR, 2018. [2] W. Luo et al., Understanding the Effective Receptive Field in Deep Convolutional Neural Networks, NIPS, 2016. |
kfFmqu3zQm | ICLR_2025 | 1. Some conclusions are not convincing. For example, the paper contends that *We believe that continuous learning with unlabeled data accumulates noise, which is detrimental to representation quality.* The results might come from the limited exploration of combination methods. In rehearsal-free continual learning, feature-replay methods have shown great potential, like [R1] in continual learning and [R2] (FRoST) in CCD. A more recent work [R3] also employs feature replay to continually adjust the feature space, which also obtains remarkable performance for continual category discovery.
2. The proposed method is naïve and the novelty is relatively limited. The method includes basic clustering and number estimation. I’m afraid that this method could not provide so many insights to the community.
3. The feature space (i.e., the backbone) is tuned using only the labeled known classes; could this lead to overfitting, given that the data is purely labeled but limited in number?
4. The class number estimation algorithm requires a pre-defined threshold $d_{min}$, which is intractable to define in advance and could largely impact the results. Some experiments and ablations should be included.
5. Detailed results of each continual session (at least for one or two datasets) should also be presented to show the performance. References:
[R1]. Prototype Augmentation and Self-Supervision for Incremental Learning. CVPR 2021.
[R2]. Class-incremental Novel Class Discovery. ECCV 2022.
[R3]. Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. NeurIPS 2024. arXiv:2410.0653. | 1. Some conclusions are not convincing. For example, the paper contends that *We believe that continuous learning with unlabeled data accumulates noise, which is detrimental to representation quality.* The results might come from the limited exploration of combination methods. In rehearsal-free continual learning, feature-replay methods have shown great potential, like [R1] in continual learning and [R2] (FRoST) in CCD. A more recent work [R3] also employs feature replay to continually adjust the feature space, which also obtains remarkable performance for continual category discovery. |
TY9mstpD02 | ICLR_2025 | - **generalizability to other models**: the proposed framework is validated using gpt-4-turbo, a costly language model, which may compromise the applicability of the framework at scale. The paper could be further improved by showing how running the experiments using a cheaper model (e.g., gpt-4o) and/or open source models (e.g., Llama 3.1) would affect the obtained results.
- **generalizability of the results**: the conducted experiments are either too simple (simple synthetic regression setting with 4 variables) or include few data-model pairs (6 in Section 4.2, 36 in Section 4.3, and 10 for the human studies), raising questions about the generalizability of the proposed framework to more complex datasets.
- **lack of meaningful baselines**: despite mentioning various model criticism techniques in Section 2, the authors limit their comparisons to simple naive baselines. For example, the authors could compare with a chain-of-thought prompting approach.
- **few insights about the generation and correctness of summary statistics**: while the authors provide one example in Section 4.3, the paper could be further improved by adding additional insights and contrasting the proposed discrepancies with commonly discussed discrepancies in the literature (e.g., do these resemble the ones commonly found by humans?) | - **lack of meaningful baselines**: despite mentioning various model criticism techniques in Section 2, the authors limit their comparisons to simple naive baselines. For example, the authors could compare with a chain-of-thought prompting approach. |
ICLR_2022_2531 | ICLR_2022 | I have several concerns about the clinical utility of this task as well as the evaluation approach.
- First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling, especially given the known faithfulness issues with text generation? There are some existing approaches for automatic ECG interpretation. How does this work fit into the existing approaches? A portion of the ECG reports from the PTB-XL dataset are actually automatically generated (See Data Acquisition under https://physionet.org/content/ptb-xl/1.0.1/). Do you filter out those notes during evaluation? How does your method compare to those automatically generated reports? - A major claim in the paper is that RTLP generates more clinically accurate reports than MLM, yet the only analysis in the paper related to this is a qualitative analysis of a single report. A more systematic analysis of the quality of generation would be useful to support the claim made in the appendix. Can you ask clinicians to evaluate the utility of the generated reports or evaluate clinical utility by using the generated reports to predict conditions identifiable from the ECG? I think that it’s fine that the RTLP method performs comparable to existing methods, but I am not sure from the current paper what the utility of using RTLP is. - More generally, I think that this paper is trying to do two things at once – present new methods for multilingual pretraining while also developing a method of ECG captioning. If the emphasis is on the former, then I would expect to see evaluation against other multilingual pretraining setups such as the Unicoder (Huang 2019a). If the core contribution is the latter, then clinical utility of the method as well as comparison to baselines for ECG captioning (or similar methods) is especially important. - I’m a bit confused as to why the diversity of the generated reports is emphasized during evaluation. While I agree that the generated reports should be faithful to the associated ECG, diversity may not actually be necessary metric to aim for in a medical context. For instance, if many of the reports are normal, you would want similar reports for each normal ECG (i.e. low diversity). - My understanding is that reports are generated in other languages using Google Translate. While this makes sense to generate multilingual reports for training, it seems a bit strange to then evaluate your model performance on these silver-standard noisy reports. Do you have a held out set of gold standard reports in different languages for evaluation (other than German)?
Other Comments: - Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier than including all reports. - You might consider changing the terminology from “cardiac arrythmia” categories to something broader since hypertrophy (one of the categories) is not technically a cardiac arrythmia (although it can be detected via ECG & it does predispose you to them) - I think it’d be helpful to include an example of some of the tokens that are sampled during pretraining using your semantically similar strategy for selecting target tokens. How well does this work in languages that have very different syntactic structures compared to the source language? - Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to setting where you don’t have the associated labels? - What kind of tokenization is used in the model? Which Spacy tokenizer? - It’d be helpful to reference the appendix when describing the setup in section 3/5 so that the reader knows that more detailed architecture information is there. - I’d be interested to know if other multilingual pretraining setups also struggle with Greek. - It’d be helpful to show the original ECG report with punctuation + make the ECG larger so that they are easier to read - Why do you think RTLP benefits from fine-tuning on multiple languages, but MARGE does not? | - Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to setting where you don’t have the associated labels? |
NIPS_2022_2152 | NIPS_2022 | The authors clearly addressed some potential limitations of the work: 1) Some observations and subsequent design decisions might be hardware and software dependent; 2) The NAS procedure, specifically the latency-driven slimming procedure is less involved and could be a direction for future exploration. | 1) Some observations and subsequent design decisions might be hardware and software dependent; |
ICLR_2021_147 | ICLR_2021 | The empirical validation is weak; therefore, comparisons with more recent models are needed. For more details, please refer to “Reasons for reject”
Reasons for accept: 1. The structure of this paper is clear and easy to read. Specifically, the motivation of this paper is clear and the structure is well organized; the related work is elaborated in detail; the experimental setup is complete. 2. Based on the use of replay to solve catastrophic forgetting, the current popular graph structure is introduced to capture the similarities between samples. Combined with the proposed Graph Regularization, this paper provides a new perspective for solving catastrophic forgetting. 3. The experimental results given in the paper can basically show that the proposed method is effective. The ablation study also verified the effectiveness of each component.
Reasons for reject: 1. The lack of comparison of experimental effects after replacing Graph Regularization with other regularization methods mentioned in this paper, or other distance measurement methods, eg., L2.
This paper compares relatively few baselines, especially recent studies. I hope to see the comparison results of some papers in the list below. The latest papers on the three types of methods (regularization, expansion, and rehearsal) for solving catastrophic forgetting are included. Therefore, if it can be compared with some of these models, it will be beneficial to the evaluation of GCL.
[1] Ostapenko O , Puscas M , Klein T , et al. Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning. ICML 2019 [2] Y Wu, Y Chen, et al. Large Scale Incremental Learning. CVPR 2019 [3] Liu Y , Liu A A , Su Y , et al. Mnemonics training: Multi-class incremental learning without forgetting. CVPR 2020 [4] Zhang J , Zhang J , Ghosh S , et al. Class-incremental learning via deep model consolidation. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) [5] Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Continuous learning of context-dependent processing in neural networks. Nature Machine Intelligence, 2019. [6] Wenpeng Hu, Zhou Lin, et al. Overcoming catastrophic forgetting for continual learning via model adaptation. ICLR 2019 [7] Rao D , Visin F , Rusu A A , et al. Continual Unsupervised Representation Learning. NeurIPS 2019 | 1. The structure of this paper is clear and easy to read. Specifically, the motivation of this paper is clear and the structure is well organized; the related work is elaborated in detail; the experimental setup is complete. |
NIPS_2022_738 | NIPS_2022 | W1) The paper states that "In order to introduce epipolar constraints into attention-based feature matching while maintaining robustness to camera pose and calibration inaccuracies, we develop a Window-based Epipolar Transformer (WET), which matches reference pixels and source windows near the epipolar lines." It claims that it introduces "a window-based epipolar Transformer (WET) for enhancing patch-to-patch matching between the reference feature and corresponding windows near epipolar lines in source features". To me, taking a window around the epipolar line into account seems like an approximation to estimating the uncertainty region around the epipolar lines caused by inaccuracies in calibration and camera pose and then searching within this region (see [Förstner & Wrobel, Photogrammetric Computer Vision, Springer 2016] for a detailed derivation of how to estimate uncertainties). Is it really valid to claim this part of the proposed approach as novel?
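For reference on W1: the classical way to absorb pose and calibration uncertainty is to search within a band of width $\tau$ around the epipolar line. In standard two-view notation (not the paper's), with corresponding homogeneous pixels $x$, $x'$ and fundamental matrix $F$, the point-to-epipolar-line distance test reads:

```latex
% Standard notation, not taken from the paper under review:
% l' = Fx is the epipolar line in the source image; (Fx)_1, (Fx)_2 are its first two components.
\begin{equation}
  d\bigl(x', Fx\bigr) \;=\; \frac{\lvert x'^{\top} F x \rvert}{\sqrt{(Fx)_1^2 + (Fx)_2^2}} \;\le\; \tau .
\end{equation}
```

Matching a reference pixel against a window of source positions near the line plays a similar role, which is why the reviewer reads the WET module as an approximation of this uncertainty band.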
W2) I am not sure how significant the results on the DTU dataset are: a) The difference with respect to the best performing methods is less than 0.1 mm (see Tab. 1). Is the ground truth sufficiently accurate enough that such a small difference is actually noticeable / measurable or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab. 4. Does the claim "It can be seen from the table that our proposed modules improve in both accuracy and completeness" really hold? Why not use another dataset for the ablation study, e.g., the training set of Tanks & Temples or ETH3D?
W3) I am not sure what is novel about the "novel geometric consistency loss (Geo Loss)". Looking at Eq. 10, it seems to simply combine a standard reprojection error in an image with a loss on the depth difference. I don't see how Eq. 10 provides a combination of both losses.
W4) While the paper discusses prior work in Sec. 2, there is mostly no mentioning on how the paper under review is related to these existing works. In my opinion, a related work section should explain the relation of prior work to the proposed approach. This is missing.
W5) There are multiple parts in the paper that are unclear to me: a) What is C in line 106? The term does not seem to be introduced. b) How are the hyperparameters in Sec. 4.1 chosen? Is their choice critical? c) Why not include UniMVSNet in Fig. 5, given that UniMVSNet also claims to generate denser point clouds (as does the paper under review)? d) Why use only N=5 images for DTU and not all available ones? e) Why is Eq. 9 a reprojection error? Eq. 9 measures the depth difference as a scalar and no projection into the image is involved. I don't see how any projection is involved in this loss.
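To make sub-item (e) concrete: in standard multi-view notation (not necessarily that of Eq. 9 in the paper), a depth-difference loss and an image-space reprojection error are different quantities:

```latex
% \hat{d}_i / d_i: predicted and reference depth; x_i: observed pixel; X_i: 3D point;
% K: intrinsics; (R, t): camera pose; \pi: perspective projection (divide by the third coordinate).
\begin{align}
  L_{\text{depth}}  &= \bigl|\hat{d}_i - d_i\bigr|, \\
  L_{\text{reproj}} &= \bigl\lVert \pi\!\bigl(K (R X_i + t)\bigr) - x_i \bigr\rVert_2 .
\end{align}
```

Only the second involves projecting a 3D point into an image, which is the reviewer's reason for questioning the "reprojection error" label.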
Overall, I think this is a solid paper that presents a well-engineered pipeline that represents the current state-of-the-art on a challenging benchmark. While I raised multiple concerns, most of them should be easy to address. E.g., I don't think that removing the novelty claim from W1 would make the paper weaker. The main exception is the ablation study, where I believe that the DTU dataset is too easy to provide meaningful comparisons (the relatively small differences might be explained by randomness in the training process).
The following minor comments did not affect my recommendation:
References are missing for Pytorch and the Adam optimizer.
Post-rebuttal comments
Thank you for the detailed answers. Here are my comments to the last reply:
Q: Relationship to prior work.
Thank you very much, this addresses my concern.
A: Fig. 5 is not used to claim our method achieves the best performance among all the methods in terms of completeness, it actually indicates that our proposed method could help reconstruct complete results while keeping high accuracy (Tab. 1) compared with our baseline network [7] and the most relevant method [3]. In that context, we not only consider the quality of completeness but also the relevance to our method to perform comparison in Fig. 5.
As I understand lines 228-236 in the paper, in particular "The quantitative results of DTU evaluation set are summarized in Tab. 1, where Accuracy and Completeness are a pair of official evaluation metrics. Accuracy is the percentage of generated point clouds matched in the ground truth point clouds, while Completeness measures the opposite. Overall is the mean of Accuracy and Completeness. Compared with the other methods, our proposed method shows its capability for generating denser and more complete point clouds on textureless regions, which is visualized in Fig. 5.", the paper seems to claim that the proposed method generates denser point clouds. Maybe this could be clarified?
A: As a) nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation, b) the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets), c) the final results are the average across 22 test scans, we think that fewer errors could indicate better performance. However, your point about the accuracy of DTU GT is enlightening, and we think it's valuable future work.
This still does not address my concern. My question is whether the ground truth is accurate enough that we can be sure that the small differences between the different components really comes from improvements provided by adding components. In this context, stating that "the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets)" does not answer this question as, even though DTU has the most accurate GT, it might not be accurate enough to measure differences at this level of accuracy (0.05 mm difference). If the GT is not accurate enough to differentiate in the 0.05 mm range, then averaging over different test scans will not really help. That "nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation" does also not address this question. Since the paper claims improvements when using the different components and uses the results to validate the components, I do not think that answering the question whether the ground truth is accurate enough to make these claims in future work is really an option. I think it would be better to run the ablation study on a dataset where improvements can be measured more clearly.
Final rating
I am inclined to keep my original rating ("6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations."). I still like the good results on the Tanks & Temples dataset and believe that the proposed approach is technically sound. However, I do not find the authors' rebuttals particularly convincing and thus do not want to increase my rating. In particular, I still have concerns about the ablation study as I am not sure whether the ground truth of the DTU dataset is accurate enough that it makes sense to claim improvements if the difference is 0.05 mm or smaller. Since this only impacts the ablation study, it is also not a reason to decrease my rating. | 1). Is the ground truth sufficiently accurate enough that such a small difference is actually noticeable / measurable or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab. |
39n570rxyO | ICLR_2025 | This paper has weaknesses to address:
* The major weakness of this paper is the extremely limited experiments section. There are many experiments, yet almost no explanation of how they're run or interpretation of the results. Most of the results are written like an advertisement, mostly just stating the method outperforms others. This leaves the reader unclear why the performance gains happen. Ultimately it's not clear when/why the findings would generalize. The result is that some claims appear to be quite overstated. For example, L423-L424 states *"embeddings of domains with shared high-level semantics cluster together, as depicted in Appendix E.1. For example, embeddings of mono and stereo audio group closely, as do those of banking and economics."* But this is cherry-picked---Temperature is way closer to Mono and Stereo Audio than Banking is to Economics.
* Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained.
* It's unclear how many variables actually overlap between training/testing, which seems to be a key element to make the model outperform others. Yet this isn't analyzed. Showing that others fail by ignoring other variables should be a key element of the experiments. | * Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained. |
NIPS_2017_236 | NIPS_2017 | Weakness:
1. The real applications that the proposed method can be applied to seem to be rather restricted. It seems the proposed algorithm can only be used as a fast evaluation of residual error for 'guessing' or 'predetermining' the range of Tucker ranks, not the real ranks.
2. Since the sampling size 's' depends on the exponential term 2^[1/(e^2K-2)] (in Theorem 3.4), it could be very large if one requires the error tolerance 'e' to be relatively small and the order of the tensor 'K' to be high. In that situation, there won't be much benefit in using this algorithm. Question:
1. In Fig. 1, why does the blue curve with the large sample size 's=80' achieve the worst error compared with the red curve with the small sample size 's=20'?
Overall, although the proposed algorithm is theoretically sound, it appears to be limited for practical purposes. | 2. Since the sampling size 's' depends on the exponential term 2^[1/(e^2K-2)] (in Theorem 3.4), it could be very large if one requires the error tolerance 'e' to be relatively small and the order of the tensor 'K' to be high. In that situation, there won't be much benefit in using this algorithm. Question: |
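A worked example of the growth the reviewer describes, under one plausible reading of the transcribed bound as $s \gtrsim 2^{1/\varepsilon^{2K-2}}$ (the exact form in Theorem 3.4 may differ): with a moderate tolerance $\varepsilon = 0.5$ and tensor order $K = 4$,

```latex
\begin{equation}
  \varepsilon^{2K-2} = 0.5^{6} = \tfrac{1}{64},
  \qquad
  s \;\gtrsim\; 2^{64} \;\approx\; 1.8 \times 10^{19},
\end{equation}
```

so even mild accuracy requirements on a fourth-order tensor would already demand an impractical number of samples under this reading.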
UaZe4SwQF2 | EMNLP_2023 | - This paper is a bit difficult to follow. There are some unclear statements, such as motivation.
- In the introduction, the summarized highlights need to be adequately elaborated, and the relevant research content of this paper needs to be detailed.
- No new evaluation metrics are proposed. Only existing evaluation metrics are linearly combined. In the experimental analysis section, there needed to be an in-depth exploration of the reasons for these experimental results.
- A case study should be added.
- What are the advantages of your method compared to other evaluation metrics? This needs to be emphasized in the motivation.
- How do you evaluate the significance of model structure or metrics on the gender bias encoding of the model? Because you only conduct experiments in the FIBER model. Furthermore, you should conduct generalization experiments on the CLIP model or other models.
- The citation format is chaotic in the paper.
- There are some grammar mistakes in this paper, which could be found in “Typos, Grammar, Style, and Presentation Improvements”. | - No new evaluation metrics are proposed. Only existing evaluation metrics are linearly combined. In the experimental analysis section, there needed to be an in-depth exploration of the reasons for these experimental results. |
NIPS_2021_1759 | NIPS_2021 | The extension from the EH model is natural. In addition, there has been literature that proves the power of FNN from a theoretical point of view, whereas this paper fails to make a review in this regard. Among other works, Schmidt-Hieber (2020) gave an exact upper bound of the approximation error for FNNs involving the least-square loss. Since the DeepEH optimizes a likelihood-based loss, this paper builds up its asymptotic properties by following assumptions and proofs of Theorems 1 and 2 in Schmidt-Hieber (2020) as well as theories on empirical processes.
Additional Feedback: 1) In the manuscript, $P$ mostly represents a probability but is sometimes used for a cumulative distribution function (e.g., Eqs. (3) and (4) and L44, all in Appendix), which leads to confusion. 2). The notation $K$ is abused too: it is used both for a known kernel function (e.g., L166) and the number of layers (e.g., L176). 3). What is $K_b$ in estimating the baseline hazard (L172)? | 2). The notation $K$ is abused too: it is used both for a known kernel function (e.g., L166) and the number of layers (e.g., L176). |
NIPS_2022_2005 | NIPS_2022 | Originality: Main Result 1 relies on known formulas for low-rank matrix factorization. It is not clearly explained what are the major technical challenges, if any, in obtaining this result.
Clarity: The community labels in (3) and the model (4) are such that $\mathbb{E}X$ does not have sparse columns if $k$ is small. For this reason, I feel the paper is more about the large-$k$ version of the sparse clustering problem.
Edit 08/19: After discussion with authors, the previous point is resolved.
Clarity: From the main text alone, it is unclear how the information-theoretic threshold is obtained. The formula of the MSE is difficult to interpret, so I can't see if the threshold is a consequence of this. Some further explanation is needed here.
Quality/Clarity: It is difficult to assess the rigour of both main results, especially Main Result 2. This is because the appendix is not organized in a conventional way with a clearly demarcated proof of Main Results 1 and 2. For Main Result 2, I do not see in the Appendix any explanation of how the asymptotic algorithmic MSE is computed. I only see plots rather than arguments. I also do not see any derivation of the Bayes-optimal MSE or reference to known (rigorous) formulas.
Minor issues
Line 39: Equation (2) defines vectors that are (i) standard Gaussian with probability $\rho$ OR (ii) the zero vector with probability $1-\rho$. I think what is meant is for there to be a random subset of zero entries, with the rest of the entries being Gaussian.
Main Theorem 1: Please proofread this carefully. Here $v^*$, $u^*$, and $w$ have not been defined in (10). Also $Z_u$ does not appear in (10).
Consider making the boundaries bolder in Figure 1. Also I found the color of $\lambda_i^t$ to be hard on the eyes.
Line 70: Typo "statitiscal"
Line 85: Typo "analyis"
Line 91: Typo "Statistics", change to "statistics"
Line 111: Change to "In particular, [10] conjectured and [11] proved..."`
Line 143-144: This comment is hard to understand because (38) is in the Appendix, and then I'm having trouble seeing the connection to (6).
Line 195: Typo "Invextigate"
Line 261: Replace "Despite of this fact" with "Despite this fact"
Summary of score
My score is due to concerns mostly about the rigor and partially about the novelty of this submission. I also feel there is a lack of clarity in explaining how the main results are obtained.
Update of score 08/19
The authors' rebuttal addressed my concerns about rigor and somewhat about novelty. I agree with other reviewers that the strengths and technical challenges of this paper are not highlighted enough in the main text. I also think further clarity is needed on the level of rigor and the asymptotic regime to which the results apply (which seems to be for k growing large and rho going to 0). I have raised my overall score from 3 to 4 because further serious revision is needed. I have also upgraded the soundness from 1 to 2.
1. I agree with the authors' assessment that there are no apparent potential negative societal impacts.
2. The weak recovery problem studied here is primarily of theoretical interest, and it is not clear if the AMP algorithm is useful for non-Gaussian problems. So practical impact may be limited. | 2. The weak recovery problem studied here is primarily of theoretical interest, and it is not clear if the AMP algorithm is useful for non-Gaussian problems. So practical impact may be limited. |
NIPS_2017_349 | NIPS_2017 | - The paper is not self contained
Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility.
I also hereby request the authors to release the source code of their experiments to allow reproduction of their results.
- Use of deep-reinforcement learning is not well motivated
The problem domain seems simple enough that a linear approximation would have likely sufficed? The network is fairly small and isn't "deep" either.
- > We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores
This is probably the most unclear part. It's not clear to me why the paper considers one to be more realistic than the other, rather than just modeling different incentives. There is probably not enough space in the paper, but an actual comparison of learning dynamics when the opportunity costs are modeled as penalties instead would be useful. As economists say: incentives matter. However, if the intention was to explicitly avoid such explicit incentives, as they _would_ affect the model-free reinforcement learning algorithm, then those reasons should be clearly stated.
- Unclear whether bringing connections to human cognition makes sense
As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated".
* Minor comments
** Line 16:
> [18] found them...
Consider using \citeauthor{} ?
** Line 167:
> be the N -th agent’s
should be i-th agent?
** Figure 3:
Clarify what the `fillcolor` implies and how many runs were the results averaged over?
** Figure 4:
Is not self contained and refers to Fig. 6 which is in the supplementary. The figure is understandably large and hard to fit in the main paper, but at least consider clarifying that it's in the supplementary (as you have clarified for other figures from the supplementary mentioned in the main paper).
** Figure 5:
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
- Increase space between the main caption and sub-caption.
** Line 299:
From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. | - Unclear whether bringing connections to human cognition makes sense As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated". |
EODzbQ2Gy4 | ICLR_2024 | - Wording is overly exaggerated in the conclusion: " ... our pioneering contributions herald a new era in robotic adaptability ... ". Word choice is a bit flamboyant in multiple places in the writing.
- This paper seems to tackle only in-distribution task transfer, whereas transfer is typically thought of as learning task A in a way that helps with a completely different task B.
- Additionally, object shape transfer is mentioned as one of the applications, but only object pose transfer is considered in the experiments.
- The reward function seems to be very hand-engineered. How many data points are required to fit the Q-network with the pretraining dataset? Is this dataset hard to collect?
- Is there any comparison with other works that use differentiable physics for task transfer?
- One of the claimed novelty in this work is the path planning algorithm for sampling new subtasks. Can you include more comparisons against other path planning algorithms in classical literature like RRT, A*, sampling-based methods, etc?
Typo and writing comments:
Figure 1: Sub-Task Accomplishment ...
Section 5.2 MAML: repeated the word "application"
Confusing last sentence in Section 5.1.4.
Section 5.3, why is this transfer task considered "innovative"? | - Wording is overly exaggerated in the conclusion: " ... our pioneering contributions herald a new era in robotic adaptability ... ". Word choice is a bit flamboyant in multiple places in the writing. |
Va4t6R8cGG | ICLR_2024 | - This paper does not seem to be the first work on fully end-to-end spatio-temporal localization, since TubeR already proposed to directly detect an action tubelet in a video by simultaneously performing action localization and recognition. This weakens the novelty of this paper. The authors claim differences from TubeR, but the most significant difference is that the proposed method is much less complex.
- The symbols in this paper are inconsistent, e.g., b.
- The authors need to perform ablation experiments to compare the proposed method with other methods (e.g., TubeR) in terms of the number of learnable parameters and GFLOPs. | - The authors need to perform ablation experiments to compare the proposed method with other methods (e.g., TubeR) in terms of the number of learnable parameters and GFLOPs. |
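A minimal sketch of the parameter comparison requested above, assuming PyTorch models; the model constructors are placeholders, not the papers' actual code:

```python
import torch.nn as nn

def count_trainable_params(model: nn.Module) -> int:
    """Number of learnable (requires_grad) parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical usage (build_proposed / build_tuber are placeholders, not real functions):
# proposed, tuber = build_proposed(), build_tuber()
# print(count_trainable_params(proposed), count_trainable_params(tuber))

# GFLOPs are usually reported with a separate FLOP counter (e.g. fvcore's FlopCountAnalysis);
# its exact usage is omitted here rather than assumed.
```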
ICLR_2021_2804 | ICLR_2021 | are listed as follows. Strengths:
The paper is easy to read, and the proposed idea is also easy to follow. Figure 1 can help the understanding of the proposed model.
The proposed model does not need the manual labeled relationship between semantic knowledge and target categories, and this may further reduce the supervision for knowledge graph construction. Weaknesses:
Incomplete comparison with existing works: the authors should list a more complete comparison table to compare the results of existing works. Some existing works can perform competitive or even superior results compared with the proposed model under the same feature backbone (ResNet-12).
| Method | mini-ImageNet 1-shot | mini-ImageNet 5-shot | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot |
|---|---|---|---|---|
| Proposed | 63.29±0.71% | 80.12±0.22% | 66.69±0.75% | 83.04±0.61% |
| CAN [1] | 63.85±0.48% | 79.44±0.34% | 69.89±0.51% | 84.23±0.37% |
| CAN+T [1] | 67.19±0.55% | 80.64±0.35% | 73.21±0.58% | 84.93±0.38% |
Comparison with unitary modality methods is misleading, as these baseline methods do not use any external information for the training and inference. The proposed method should consider other cross-modal few-shot classification works as baseline methods.
As the proposed model contains several modules, such as knowledge graph construction, a graph neural network, and a few other linear layers between components, it is hard for the reader to understand which module is crucial. It would be great if the authors could provide an ablation study to verify the effectiveness of each module in the next version of this paper. For example, the authors could provide studies such as 1) what the performance is if the semantic information is removed from the knowledge graph, or 2) how the number of graph neural layers affects the overall performance.
Overall, the paper is easy to read. The idea of integrating semantic information in few-shot classification is interesting while it has been widely explored in existing works. Given the reported results of the proposed model and lack of analyses in the proposed model, I am inclined to the score "Ok but not good enough - rejection".
[1] “Cross Attention Network for Few-shot Classification”, Hou et al., NeurIPS ‘19 | 2) how the number of graph neural layers affects the overall performance. Overall, the paper is easy to read. The idea of integrating semantic information in few-shot classification is interesting while it has been widely explored in existing works. Given the reported results of the proposed model and lack of analyses in the proposed model, I am inclined to the score "Ok but not good enough - rejection". [1] “Cross Attention Network for Few-shot Classification”, Hou et al., NeurIPS ‘19 |
NIPS_2018_125 | NIPS_2018 | - Some missing references and somewhat weak baseline comparisons (see below) - Writing style needs some improvement, although, it is overall well written and easy to understand. Technical comments and questions: - The idea of active feature acquisition, especially in the medical domain was studied early on by Ashish Kapoor and Eric Horvitz. See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/NIPS2009.pdf There is also a number of missing citations to work on using MDPs for acquiring information from external sources. Kanani et al, WSDM 2012, Narsimhan et al, "Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning", and others. - section 3, line 131: "hyperparameter balancing the relative importances of two terms is absorbed in the predefined cost". How is this done? The predefined cost could be externally defined, so it's not clear how these two things interact. - section 3.1, line 143" "Then the state changes and environment gives a reward". This is not true of standard MDP formulations. You may not get a reward after each action, but this makes it sound like that. Also, line 154, it's not clear if each action is a single feature or the power set. Maybe make the description more clear. - The biggest weakness of the paper is that it does not compare to simple feature acquisition baselines like expected utility or some such measure to prove the effectiveness of the proposed approach. Writing style and other issues: - Line 207: I didn't find the pseudo code in the supplementary material - The results are somewhat difficult to read. It would be nice to have a more cleaner representation of results in figures 1 and 2. - Line 289: You should still include results of DWSC if it's a reasonable baseline - Line 319: your dollar numbers in the table don't match! - The paper will become more readable by fixing simple style issues like excessive use of "the" (I personally still struggle with this problem), or other grammar issues. I'll try and list most of the fixes here. 4: joint 29: only noise 47: It is worth noting that 48: pre-training is unrealistic 50: optimal learning policy 69: we cannot guarantee 70: manners meaning that => manner, that is, 86: work 123: for all data points 145: we construct an MDP (hopefully, it will be proper, so no need to mention that) 154: we assume that 174: learning is a value-based 175: from experience. To handle continuous state space, we use deep-Q learning (remove three the's) 176: has shown 180: instead of basic Q-learning 184: understood as multi-task learning 186: aim to optimize a single 208: We follow the n-step 231: versatility (?), we perform extensive 233: we use Adam optimizer 242: We assume uniform acquisition cost 245: LSTM 289: not only feature acquisition but also classification. 310: datasets 316: examination cost? | - The biggest weakness of the paper is that it does not compare to simple feature acquisition baselines like expected utility or some such measure to prove the effectiveness of the proposed approach. Writing style and other issues: |
NIPS_2021_2123 | NIPS_2021 | This paper still has some problems that I hope the authors could explain more clearly.
The authors argued that they are the first to directly train deep SNNs with more than 100 layers. I don’t think this is the core contribution of this paper: because of the residual block, the spiking network can be made deeper. In my opinion, the SEW structure is the most important point in this paper, and directly training a 50-layer and 100-layer SNN is not a huge breakthrough. It would be more convincing if they could give a more detailed analysis of why other methods cannot train a 100-layer SNN beyond section 3.2.
Why can the RBA block be seen as a special case of the SEW block? I mean that SEW is another kind of RBA with binary input and output.
Equ. 11 is wonderful, how about other bit operations? (See the sketch after this review.)
Fig. 5 a seems strange, please give more explanations.
When the input is aer format, how did you deal with DVS input?
If you can analyze the energy consumption as reference[15] did, this paper would be more solid. | 11 is wonderful, how about other bit operations? Fig. 5 a seems strange, please give more explanations. When the input is aer format, how did you deal with DVS input? If you can analyze the energy consumption as reference[15] did, this paper would be more solid. |
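For context on the "other bit operations" question: a SEW-style residual block combines two binary spike tensors with an element-wise function $g$, and the choices usually discussed are ADD, AND, and IAND. Since Eq. 11 is not reproduced here, this is only a generic NumPy illustration with hypothetical shapes and generically labeled branches:

```python
import numpy as np

rng = np.random.default_rng(0)
# Binary spike tensors from the two branches of a spike-element-wise (SEW) residual block;
# the shapes here are hypothetical.
s_a = rng.integers(0, 2, size=(4, 8)).astype(np.float32)
s_b = rng.integers(0, 2, size=(4, 8)).astype(np.float32)

add_out  = s_a + s_b              # ADD: values in {0, 1, 2}, so not strictly binary
and_out  = s_a * s_b              # AND: a spike only where both branches spike
iand_out = (1.0 - s_a) * s_b      # IAND: a spike where s_b spikes and s_a does not
```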
ICLR_2022_21 | ICLR_2022 | However, some key architectural details can be clarified further for full reproducibility and analysis. Specifically: 1. How are historical observations combined with inputs known over all time given differences in sequence lengths (L vs L+M)? The text mentions separate embedding and addition with positional encoding, but clarifications on how the embeddings are combined and fed into the CSCM are needed. 2. Can each node attend to its own lower-level representation? From equation 2, it seems to be that only neighbouring nodes are attended to, based on the description of N_l^(s). 3. Do the authors have any guidelines on how to select S/A/C (and consequently N) for a given receptive field L?
In addition, while the ablation analysis tests the impact of changing CSCM architectures, it would be good to evaluate the base performance without the PAM to determine the value added by attention. This would also provide a simple comparison vs dilated CNNs which have been used successfully in time series forecasting applications (e.g. WaveNet).
Finally, could I double check which dataset was used for the ablation analysis as well? I seem to be having some difficulty lining the numbers in Tables 4-6 up with Table 3. | 1. How are historical observations combined with inputs known over all time given differences in sequence lengths (L vs L+M)? The text mentions separate embedding and addition with positional encoding, but clarifications on how the embeddings are combined and fed into the CSCM are needed. |
50RNY6uM2Q | ICLR_2025 | 1. As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object detection only do some information enhancement on the input side, while many MLMs can already accomplish the object detection task by themselves nowadays.
2. The scores achieved on both the MMBench as well as SEEDBench datasets, while respectable, are not compared to some of the more competitive models. I identified MMB as version 1 and SEEDBench as Avg based on the scores of Qwen-VL and MiniCPM-V2, and there are a number of scores on both leaderboards that are higher than the scores of MG-LLaVA work, eg. Honeybee (Cha et al., 2024), AllSeeing-v2 (Wang et al. 2024) based on Vicuna-13b at MMB-test. and then you can also find a lot of similar models with higher scores on the same substrate.
3. In addition to the perception benchmarks, this problem can also be found in Visual QA and Video QA, such as on the MSRVTT-QA dataset, where there are already many models with very high scores in 2024. Some of them also use methods to improve the model's ability on fine-grained tasks, e.g., Flash-VStream (Zhang et al. 2024) and Monkey (Li et al. 2023). The article does not seem to compare against these new 2024 models.
To summarize, I think the approach proposed in the article is valid, but MG-LLaVA does not do the job of making a difference, either from an innovation perspective or from a performance perspective.
[1] Cha, Junbum, et al. "Honeybee: Locality-enhanced projector for multimodal llm." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.
[2] Wang, Weiyun, et al. "The all-seeing project v2: Towards general relation comprehension of the open world." *arXiv preprint arXiv:2402.19474* (2024).
[3] Zhang, Haoji, et al. "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams." *arXiv preprint arXiv:2406.08085* (2024).
[4] Li, Zhang, et al. "Monkey: Image resolution and text label are important things for large multi-modal models." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024. | 1. As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object detection only do some information enhancement on the input side, while many MLMs can already accomplish the object detection task by themselves nowadays. |
ICLR_2023_2286 | ICLR_2023 | 1. The paper is poorly organized. It is hard to quickly get the motivations and main ideas of the proposed methods.
2. The thermal sensor and environment setting for data collection are not described in detail. From Figure 2, why is the quality of the thermal images significantly higher than in RegDB and SYSU-MM01? Is this caused by the thermal sensor or the capturing time?
3. The paper presents a transformer-based network as the backbone; what are the benefits over the CNN-based backbones of traditional methods? The reason for using such a transformer-based method is not clearly discussed.
4. The proposed multi-task triplet loss is not clearly explained. It is strongly suggested to reorganize this part and proofread it. In addition, it seems there is a mistake in Eq. (1): I suppose a max(x, 0) is missing from both terms in $L_{mtri}$ (see the sketch after this review).
5. The sensitivity of hyper-parameters such as $m_1$, $m_2$, $\lambda$ is not discussed. In particular, their values are not specified in the paper.
6. There are lots of grammar mistakes, typos, and unclear descriptions that make the paper hard to follow. It is strongly suggested to have some experts proofread it. | 5. The sensitivity of hyper-parameters such as $m_1$, $m_2$, $\lambda$ is not discussed. In particular, their values are not specified in the paper. |
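Regarding point 4 above: a generic hinge-form triplet term, written in standard notation rather than the paper's, is shown below. The reviewer presumably expects each of the two terms of $L_{mtri}$ in Eq. (1), with their margins $m_1$ and $m_2$, to be wrapped in such a $\max(\cdot, 0)$:

```latex
% a, p, n: anchor, positive, and negative samples; f: the embedding; d(\cdot,\cdot): a distance; m: a margin.
\begin{equation}
  L_{\text{tri}} = \max\bigl(d(f_a, f_p) - d(f_a, f_n) + m,\; 0\bigr).
\end{equation}
```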
NIPS_2021_2338 | NIPS_2021 | Weakness: 1. Regarding the adaptive masking part, the authors' work is incremental, and there have been many papers on how to do feature augmentation, such as GraphCL[1], GCA[2]. The authors do not experiment with widely used datasets such as Cora, Citeseer, ArXiv, etc. And they did not compare with better baselines for node classification, such as GRACE[3], GCA[2], MVGRL[4], etc. I think this part of the work is shallow and not enough to constitute a contribution. The authors should focus on the main contribution, i.e., graph-level contrastive learning, and need to improve the node-level augmentation scheme. 2. In the graph classification task, the compared baseline is not sufficient, such as MVGRL[4], gpt-gnn[5] are missing. I hope the authors could add more baselines of graph contrastive learning and test them on some common datasets. 3. I am concerned whether the similarity-aware positive sample selection will accelerate GNN-based encoder over-smoothing, i.e., similar nodes or graphs will be trained with features that converge excessively and discard their own unique features. In addition, whether selecting positive samples in the same dataset without introducing some perturbation noise would lead to lower generalization performance. The authors experimented with the transfer performance of the model on the graph classification task, though it still did not allay my concerns about the model generalization. I hope there will be more experiments on different downstream tasks and across different domains. Remarks: 1. The authors seem to have over-compressed the line spacing and abused vspace. 2. Table 5 is collapsed.
[1] Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen, “Graph contrastive learning with augmentations,” Advances in Neural Information Processing Systems, vol. 33, 2020. [2] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Graph contrastive learning with adaptive augmentation,” arXiv preprint arXiv:2010.14945, 2020. [3] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Deep graph contrastive representation learning,” arXiv preprint arXiv:2006.04131, 2020. [4] Hassani, Kaveh, and Amir Hosein Khasahmadi. "Contrastive multi-view representation learning on graphs." International Conference on Machine Learning. PMLR, 2020. [5] Hu, Ziniu, et al. "Gpt-gnn: Generative pre-training of graph neural networks." Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020. | 3. I am concerned whether the similarity-aware positive sample selection will accelerate GNN-based encoder over-smoothing, i.e., similar nodes or graphs will be trained with features that converge excessively and discard their own unique features. In addition, whether selecting positive samples in the same dataset without introducing some perturbation noise would lead to lower generalization performance. The authors experimented with the transfer performance of the model on the graph classification task, though it still did not allay my concerns about the model generalization. I hope there will be more experiments on different downstream tasks and across different domains. Remarks: |
NIPS_2020_1185 | NIPS_2020 | - The theory (thm 2, cor 1) on the representational power of SMPs is only for simple unlabeled graphs. Is there any similar result for graphs with node and/or edge features? - The experiments are quite limited. I wish to have seen SMPs in the context of graphs with node and edge features and on standard benchmarks used by GCNs/GIN/GAT models. - How good is SMP at counting cycles? - Is fast SMP less expressive than SMP? I wish to have seen more discussion on the power of different architectures. - In the limit of using sufficiently many layers, do all embeddings of node j at each node i become equal? Can this be formally proved? If this is not true, then isn't it a weakness that each node can learn a potentially different representation of all the nodes in the graph? - How well is fast SMP performing on the cycle detection task? ====================== Later edit: I agree with other reviewers' comments on the lack of powerful equivariant baselines and I do believe that the current experimental setup is limited. I am lowering my score as a consequence. | - Is fast SMP less expressive than SMP? I wish to have seen more discussion on the power of different architectures.
ICLR_2023_2163 | ICLR_2023 | Fully training group labels baselines could include more recent methods such as SGDRO (non-flat version of GDRO proposed in Goel et al.). There is a misleading sentence on page 3 when describing the Waterbirds dataset: “The bird images are then modified with either a water or land background.” There is no “modification” involved, in the dataset the birds are already placed either on water or land background (this is a minor note, but should be revisited). The main weaknesses are all in the “Experiments” section. 1) Although the hyperparameters are fixed, some ablation is still required (for example the regularisation in the second stage for the CivilComments dataset). 2) In Tables 1 and 2 it is not clear what the results are, is it correct to assume they are the mean and std. performances over three different initialization seeds? 3) The results in Tables 1 and 2 are not correct, CROIS is using the validation set for training in the second stage, not only for model selection. For the datasets considered, the validation set has the same distribution as the test set, it does not seem correct to talk about “group shift” anymore. 4) Table 2 should also include the average test accuracy to show the trade-off between average and worst-case performances when varying the size of the validation set. Also, why is the std. not reported here? 5) Table 3 shows promising results, but the best fraction “p” is a hyperparameter that should be tuned using the worst-group performance on the validation set, like the regularization term for GDRO. Its behaviour is not linear (e.g. the more the better) nor consistent across datasets. For example, in both text datasets, only one fraction produces better results than the GDRO baseline. Here, I disagree with the statement “In practice, p is not a parameter to choose (there’s no reason to throw away group labels) but rather is limited by the resources available to obtain group labels”: using p=0.5 is often worse than using p=0.3. 6) To obtain robust results, it would have been better to evaluate the methods across different splits of train-val-test, not simply different initialisation seeds. | 6) To obtain robust results, it would have been better to evaluate the methods across different splits of train-val-test, not simply different initialisation seeds. |
NIPS_2017_631 | NIPS_2017 | 1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state of the art in VQA (on the VQA dataset, which has been out for a while and on which a lot of advancement has been made), perhaps because the VQA model used in the paper, on top of which CBN is applied, is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model, preferably the ones which are closer to the state of the art (and whose code is publicly available), such as MCB (Fukui et al., EMNLP16) and HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know whether the proposed conditional batch norm can advance the state of the art in VQA or not.
2. L170: it would be good to know how much of a performance difference this (using different image sizes and different variations of ResNets) can lead to.
3. In table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on test-standard split because one can overfit to test-dev split by uploading multiple entries.
4. Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could authors please throw some light on this? Why do they think this might be happening?
5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a).
6. The first two bullets about contributions (at the end of the intro) can be combined together.
7. Other errors/typos:
a. L14 and 15: repetition of the word "imagine"
b. L42: missing reference
c. L56: impact -> impacts
Post-rebuttal comments:
The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly developed VQA models as well (the results have not been reported on state-of-art VQA model). So, I would like to recommend acceptance of the paper.
However, I still have a few comments:
1. It seems that there is still some confusion about test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf).
2. The reproduced performance reported on MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version if they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with 62.5% of MCB because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (table 4 in MCB paper).
3. The citation for the MRN model (in the rebuttal) is incorrect. It should be -- @inproceedings{kim2016multimodal,
title={Multimodal residual learning for visual qa},
author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak},
booktitle={Advances in Neural Information Processing Systems}, pages={361--369}, year={2016} }
4. As AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well. | 6. The first two bullets about contributions (at the end of the intro) can be combined together. |
vg55TCMjbC | EMNLP_2023 | - Although the situations are checked by human annotators, the seed situations are generated by ChatGPT. The coverage of situation types might be limited.
- The types of situations/social norms (e.g., physical/psychological safety) are not clear in the main paper.
- It’s a bit hard to interpret precision on NormLens-MA, where the different labels could be considered as gold. | - The types of situations/social norms (e.g., physical/psychological safety) are not clear in the main paper. |
NIPS_2016_370 | NIPS_2016 | , and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. 2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts). 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: 1. The biggest weakness is some issues with the framework itself. In particular: 1a. It is not obvious that "k-bit representation" is the right notion for unsupervised learning. Presumably the idea is that if one can compress to a small number of bits, one will obtain good generalization performance from a small number of labeled samples. But in reality, this will also depend on the chosen model class used to fit this hypothetical supervised data: perhaps there is one representation which admits a linear model, while another requires a quadratic model or a kernel. It seems more desirable to have a linear model on 10,000 bits than a quadratic model on 1,000 bits. This is an issue that I felt was brushed under the rug in an otherwise clear paper. 1b. It also seems a bit clunky to work with bits (in fact, the paper basically immediately passes from bits to real numbers). 1c. Somewhat related to 1a, it wasn't obvious to me if the representations implicit in the main results would actually lead to good performance if the resulting features were then used in supervised learning. I generally felt that it would be better if the framework was (a) more tied to eventual supervised learning performance, and (b) a bit simpler to work with. 2. I thought that the introduction was a bit grandiose in comparing itself to PAC learning. 3. The main point (that improper unsupervised learning can overcome NP hardness barriers) didn't come through until I had read the paper in detail. When deciding what papers to accept into a conference, there are inevitably cases where one must decide between conservatively accepting only papers that are clearly solid, and taking risks to allow more original but higher-variance papers to reach a wide audience. I generally favor the latter approach, I think this paper is a case in point: it's hard for me to tell whether the ideas in this paper will ultimately lead to a fruitful line of work, or turn out to be flawed in the end. So the variance is high, but the expected value is high as well, and I generally get the sense from reading the paper that the authors know what they are doing. So I think it should be accepted. Some questions for the authors (please answer in rebuttal): -Do the representations implicit in Theorems 3.2 and Theorem 4.1 yield features that would be appropriate for subsequent supervised learning of a linear model (i.e., would linear combinations of the features yield a reasonable model family)? -How easy is it to handle e.g. manifolds defined by cubic constraints with the spectral decoding approach? | 3. 
The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: |
NIPS_2016_221 | NIPS_2016 | weakness: 1. To my understanding, two aspects which are the keys to the segmentation performance are: (1) The local DNN evaluation of shape descriptors in terms of energy, and (2) The back-end guidance of (super)voxel agglomeration. Although experiment showed gains of the proposed method over GALA, it is yet not clear enough which part is the major contributor of such gain. The original paper of GALA used methods different from this paper (3D-CNN) to generate edge probability maps. Is the edge map extraction framework in this paper + GALA a fair enough baseline? It would be great if such edge map can also be visualized. 2. The proposed method to some extent is not that novel. The idea of generating or evaluating segmentation masks with DNN has been well studied by the vision community in many general image segmentation tasks. In addition, the idea of greedily guiding the agglomeration of (super)voxels by evaluating energies pretty much resembles many bottom-up graph-theoretic merging in early segmentation methods. The authors, however, failed to mention and compare with many related works from the general vision community. Although the problem general image segmentation somewhat differs from the task of this paper, showing certain results on general image segmentation datasets (like the GALA paper) and comparing with state-of-the-art general segmentation methods may give a better view of the performance of the proposed method. 3. Certain parts did not provide enough explanation, or are flooded by too much details and fail to give the big picture clearly enough. For example in Appendix B Definition 2, when explaining the relationship between shape descriptor and connectivity region, some graphical illustrations would have helped the readers understanding the ideas in the paper much easier and better. ----------------------------------------------------------------------------- Additional Comments after Discussion Upon carefully reading the rebuttal and further reviewing some of the related literature, I decide to down-grade scores on novelty and impact. Here are some of the reasons: 1. I understand this paper targets a problem which somewhat differs from general segmentation problems. And I do very much appreciate its potential benefit to the neuroscience community. This is indeed a plus for the paper. However, an important question is how much this paper can really improve over the existing solutions. Therefore, to demonstrate that the algorithm is able to correctly find closed contours, and really show stronger robustness against weak boundaries (This is especially important for bottom up methods), the authors do need to refer to more recent trends in the vision community. 2. I noticed one reference "Maximin affinity learning of image segmentation, NIPS 2009" cited in this paper. The paper proposed a very elegant solution to affinity learning. The core ideas proposed in this paper, such as greedy merging, Rand Index like energy function show strong connections to the cited paper. The maximin affinity is basically the weakest edge along a minimum spanning tree (MST), and we know greedy region merging is also based on cutting weakest edges on a MST. The slight difference is the author proposed the energy term at a lot of positions and scales but the previous paper has a single energy term for the global image. In addition cited paper also addressed a similar problem in the experiment. 
However, the authors not only did not include the citation as an experimental baseline, but also failed to provide detailed discussions on the relation between the two works. 3. The authors argued for greedy strategy, claiming this is better than factorizable energies. "By sacrificing factorization, we are able to achieve the rich combinatorial modeling provided by the proposed 3d shape descriptors." I guess what the authors mean is they are putting more emphasis on local predictions. But this statement is not solidly justified by the paper. In addition, although to some extent I could understand this argument (local prediction indeed seems important because the size of cells vs volume are much smaller than segments vs whole image in general segmentation), there are significant advances on making pretty strong local predictions from the general vision community using deep learning, which the authors fail to mention and compare. I think overall the paper addressed an interesting problem and indeed showed solid research works. However it will better if the authors could better address contemporary segmentation literature and further improve the experiments. | 1. I understand this paper targets a problem which somewhat differs from general segmentation problems. And I do very much appreciate its potential benefit to the neuroscience community. This is indeed a plus for the paper. However, an important question is how much this paper can really improve over the existing solutions. Therefore, to demonstrate that the algorithm is able to correctly find closed contours, and really show stronger robustness against weak boundaries (This is especially important for bottom up methods), the authors do need to refer to more recent trends in the vision community. |
ICLR_2022_2754 | ICLR_2022 | I feel the motivation of the work is confusing. I can understand the authors want to improve CQL somehow further. But it is never made clear:
what the existing problems are and why they matter. Is it that the lower bound in the existing CQL is too loose? Why is improving the bound important?
what is the effect you want to achieve? Is it an offline algorithm that can learn a better policy from data generated by a poor behavior policy?
The contribution is incremental, and I doubt the significance. As the paper cited in section 2.2, Kumar et al. (2020) propose to penalize the actions not described by the dataset, which enables a general definition of \mu. Note that the additional weighting scheme can be essentially thought of as a new type of \mu. I don’t see a clear difference between (2) and (3). Both can be considered as the specially designed action sampling distribution.
Why is theorem 1 useful? If I understand correctly, the key thing you want to say is the additional weighting can provide a tighter bound for OOD state-action pairs or those not close to the dataset? But the first step should be figuring out the effects of having a tight/loose bound. Does it hurt optimality/convergence rate/generalization…? Even partially answering this question can better motivate the reweighting approach. I believe the proof of the theorem is a simple modification from the existing CQL work.
The choice of the weighting scheme lacks justification. An intuitive choice is the RBF function. Furthermore, according to theorem 1, the proposed weighting is useful only when the action is OOD, i.e., the weight is 1; when the action is ID, weight should be 0, but your weighting scheme does not give zero?
Section 4.1, the proposed method comes out suddenly. Is there any reason to choose normalizing flows? Of course, normalizing flows is a good method enabling both efficient sampling and density evaluation. In your algorithm, you only need to evaluate the density but not to sample. There should be plenty of other choices. When testing ideas, it is more natural to start with some simple methods.
What is the purpose of section 4.3?
I expect to see how various concrete choices of the weighting scheme can affect the distance (e.g., KL divergence) between the learned policy and the behavior policy.
The experiments. 1. more random seeds should be tested (figure 1) — it is hard to distinguish algorithms from the current learning curves. Readers cannot see a clear message from them. 2. I expect more baselines to be compared and more domains to be tested. As I mentioned, the choices of the weighting and the way of learning density functions are not strongly motivated. In this case, I have to ask for stronger empirical results: baselines with other design choices and more domains. 3. The experiments in Fig 2 are incomplete. Why are there no experiments for half cheetah and walker with expert data? 4. Please provide reproducing details.
The abstract says, “… with a strong theoretical guarantee.” I don’t think there is any strong theory in the paper.
Page 3, last paragraph. The criticism of using the empirical dataset distribution for $\hat{\pi}_\beta$ does not make sense to me. When the state/action space is continuous, the empirical estimate should be a kernel density estimate, which is a consistent estimator of the underlying distribution. The kernel can be chosen to be smooth, so the KDE should have support everywhere. Minor:
there is a nontrivial number of grammar issues/typos. Please double-check.
Many sentences are confusing or logically disconnected.
e.g., in the abstract, “A compromise between enhancing …. To alleviate this issue, … ” what issue?
“Improving the learning process.” In what sense? Higher sample efficiency?
“Indeed, the lack of information … ” what information? Why does it provoke overestimation?
I believe saying “based on the amount of information collected” is inaccurate because the paper does not really introduce any information measurement. | 2. I expect more baselines to be compared and more domains to be tested. As I mentioned, the choices of the weighting and the way of learning density functions are not strongly motivated. In this case, I have to ask for stronger empirical results: baselines with other design choices and more domains. |
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting.Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model which is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e. provide a better than state of the art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if indeed modeling accuracy is the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1. The authors mention the GLM model of Pillow et al. which is pretty much state of the art, but a central point in that paper was that coupling filters between neurons are very important for the accuracy of the model. These coupling filters are omitted here which makes the comparison slightly unfair. I would strongly suggest comparing to a GLM with coupling filters. Furthermore, I suggest presenting data (like correlation coefficients) from previous studies to make sure the comparison is fair and in line with previous literature. 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by pillow et al. did not crop the image but used L1 regularization for the filters and a low rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: 1. Please define the dashed lines in fig. 2A-B and 4B. 2. Why is the training correlation increasing with the amount of training data for the cutout LN model (fig. 4A)? 3. I think figure 6C is a bit awkward, it implies negative rates, which is not the case, I would suggest using a second y-axis or another visualization which is more physically accurate. 4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirnakis et al. Nature 1997. | 1. Please define the dashed lines in fig. 2A-B and 4B. |
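A side note on the first caveat in the review above: the coupled GLM being referred to is usually written, in generic notation (an editorial sketch of the standard formulation, not a quotation from either paper), as a conditional-intensity model
$$\lambda_i(t) \;=\; \exp\Big(\mathbf{k}_i \cdot \mathbf{x}(t) + \sum_{j} (h_{ij} * y_j)(t) + b_i\Big), \qquad y_i(t) \sim \mathrm{Poisson}\big(\lambda_i(t)\,\Delta t\big),$$
where $\mathbf{k}_i$ is the stimulus filter of neuron $i$, $h_{ij}$ are the coupling (and, for $j = i$, spike-history) filters convolved with the past spike trains $y_j$, and $b_i$ is a baseline. Dropping the $h_{ij}$ terms for $j \neq i$ recovers the uncoupled GLM that the review argues is the weaker baseline.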
ICLR_2022_2196 | ICLR_2022 | weakness] Modeling:
The rewards are designed based on a discriminator. As we know, generative adversarial networks are not easy to train, since the generator and the discriminator are trained alternately. In the proposed method, the policy network and the discriminator are likewise trained alternately. I doubt that the model is easy to train, and I would like to see the training curves of the reward values (a generic sketch of such an alternating loop is given after this review).
The detailed alignment function used in Eq. (1) and Eq. (3) needs to be provided.
Experiments: - The results are not satisfying. In the experiments, the generation quality of the proposed method is not as good as that of traditional generative networks in terms of FID. In the image parsing part, the results are far behind the compared methods. - Since the results are not comparable to the existing methods, the significance of the proposed method seems limited. | - Since the results are not comparable to the existing methods, the significance of the proposed method seems limited.
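On the stability concern about alternating discriminator/policy updates raised in the review above, a minimal, self-contained sketch of a generic adversarial-imitation-style loop on a 1-D toy problem is given below. The GAIL-flavored reward r(x) = -log(1 - D(x)), the REINFORCE-style policy update, and all constants are illustrative assumptions rather than the paper's objective; the point is only that logging the mean reward per iteration is exactly the kind of training curve being asked for.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Expert" data that the discriminator should label as real.
expert = rng.normal(loc=2.0, scale=0.5, size=512)

mu = -2.0            # policy: samples from N(mu, 1); mu is the only learnable parameter
w, b = 0.0, 0.0      # logistic discriminator D(x) = sigmoid(w * x + b)
lr_d, lr_pi = 0.1, 0.05

for it in range(200):
    # --- discriminator steps: push D(expert) -> 1 and D(policy samples) -> 0 ---
    for _ in range(3):
        xp = rng.normal(mu, 1.0, size=512)
        ge = sigmoid(w * expert + b) - 1.0          # grad of -log D on the expert batch
        gp = sigmoid(w * xp + b)                    # grad of -log(1 - D) on the policy batch
        w -= lr_d * (np.mean(ge * expert) + np.mean(gp * xp))
        b -= lr_d * (np.mean(ge) + np.mean(gp))

    # --- policy step: REINFORCE on the discriminator-derived reward ---
    xp = rng.normal(mu, 1.0, size=512)
    d = np.clip(sigmoid(w * xp + b), 1e-6, 1 - 1e-6)
    r = -np.log(1.0 - d)                            # GAIL-style reward (an assumption here)
    grad_mu = np.mean((r - r.mean()) * (xp - mu))   # score-function gradient for N(mu, 1)
    mu += lr_pi * grad_mu

    if it % 50 == 0:
        print(f"iter {it:3d}  mu={mu:+.2f}  mean reward={r.mean():.3f}")
```

In this toy setup the policy mean drifts toward the expert data as long as the two updates stay balanced; plotting the printed mean reward over iterations is the "training curve for the rewards" the review asks to see.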
53kW6e1uNN | ICLR_2024 | 1. Limited novelty. The paper seems like a straightforward application of existing literature, specifically DeCorr [1], which focuses on general deep graph neural networks, to a specific application domain. The contribution of this study is mainly the transposition of DeCorr's insights into graph collaborative filtering, with different datasets and backbones. Although modifications like different penalty coefficients for users and items are also proposed, the paper still lacks sufficient insight into the unique challenges of overcorrelation in recommender systems.
2. It would be better to include one additional figure showing how the Corr and SMV metrics evolve as additional network layers are applied, mirroring Figure 2 but explicitly showcasing the effects of the proposed method; this would let the authors convincingly validate the efficacy of their auxiliary loss function.
3. Presentation issues. The y-axis labels of Figure 2 lack standardization, e.g., 0.26 vs. 0.260 vs. 2600 vs. .2600.
[1] Jin et al. Feature overcorrelation in deep graph neural networks: A new perspective. KDD 2022. | 1. Limited novelty. The paper seems like a straightforward application of existing literature, specifically DeCorr [1], which focuses on general deep graph neural networks, to a specific application domain. The contribution of this study is mainly the transposition of DeCorr's insights into graph collaborative filtering, with different datasets and backbones. Although modifications like different penalty coefficients for users and items are also proposed, the paper still lacks sufficient insight into the unique challenges of overcorrelation in recommender systems.
NIPS_2017_575 | NIPS_2017 | - While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example, multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here or different sections of them? A formal definition of this would greatly help readers understand this (a standard formulation is sketched after this review).
- The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3.
- The complexity argument claims that self-attention models have a maximum path length of 1 which should help maintaining information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically.
Minor comments:
- Are you using dropout on the source/target embeddings?
- Line 146: There seems to be dangling "2" | - The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3. |
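For reference on the first point of the review above, the formal definition being asked for is, in the now-standard scaled dot-product notation (stated here as background, not quoted from the reviewed manuscript):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V,$$

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,W^O, \qquad \mathrm{head}_i = \mathrm{Attention}\big(QW_i^Q,\; KW_i^K,\; VW_i^V\big),$$

with learned projections $W_i^Q, W_i^K \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{\mathrm{model}} \times d_v}$, and $W^O \in \mathbb{R}^{h d_v \times d_{\mathrm{model}}}$. In self-attention the queries, keys, and values are three separate learned projections of the same input sequence (presumably what the split arrow in the figure depicts), rather than different sections of one vector.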
ICLR_2021_1465 | ICLR_2021 | 1. The complexity analysis is insufficient. In the draft, the authors only provide a rough overall complexity. A better way would be to show a comparison between the proposed method and some other methods, including the number of model parameters and the network forward time.
2. In the conversion of the point cloud to a concentric spherical signal, a Gaussian radial basis function is adopted to summarize the contribution of the points. Is there any other function that can accomplish this job? The reviewer would like to see a discussion of this (a generic sketch of such an RBF accumulation is given after this review).
3. Figure 2 is a little ambiguous: some symbols are not explained clearly. The reviewer is also curious whether there is information redundancy and interference in the multi-sphere icosahedral discretization process.
4. There are some typos in the draft. The first is the wrong use of "intra-sphere" and "inter-sphere". The second is the use of two consecutive "stacking" in the Spherical Discretization subsection. Please check the full text carefully.
5. The choice of the center of the concentric spheres should be discussed both theoretically and experimentally. In my opinion, the center of the spheres plays an important role in how well the sphere-convolution representation captures 3D point clouds. | 3. Figure 2 is a little ambiguous: some symbols are not explained clearly. The reviewer is also curious whether there is information redundancy and interference in the multi-sphere icosahedral discretization process.
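On point 2 of the review above: a minimal numpy sketch of what a Gaussian-RBF accumulation of point contributions onto fixed sample locations might look like. The grid, the bandwidth sigma, and the plain-sum aggregation are illustrative assumptions rather than the paper's exact construction; any alternative weighting function (e.g., an inverse-distance or compactly supported kernel) could be swapped in at the marked line.

```python
import numpy as np

def rbf_signal(points, grid, sigma=0.2):
    """Accumulate point contributions onto grid locations with a Gaussian RBF.

    points: (N, 3) array of 3D point coordinates.
    grid:   (M, 3) array of sample locations (e.g., on concentric spheres).
    Returns a length-M signal: sum_p exp(-||p - g||^2 / (2 * sigma^2)) per location g.
    """
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (M, N) squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # <- swap in any other weighting function here
    return w.sum(axis=1)                   # aggregate the contributions per grid point

# toy usage: 100 random points accumulated onto 64 locations on the unit sphere
pts = np.random.randn(100, 3)
g = np.random.randn(64, 3)
g /= np.linalg.norm(g, axis=1, keepdims=True)
print(rbf_signal(pts, g).shape)   # (64,)
```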
NIPS_2017_585 | NIPS_2017 | weakness of the paper is in the experiments: there should be more complete comparisons in computation time, and comparisons with QMC-based methods of Yang et al (ICML2014). Without this the advantage of the proposed method remains unclear.
- The limitation of the obtained results:
The authors assume that the spectrum of a kernel is sub-Gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels, such as Matérn kernels, is not included, since their spectra only decay polynomially. In this sense, the results of the paper could be restrictive.
- Eq. (3):
What is $e_l$?
Corollaries 1, 2 and 3 and Theorem 4:
All of these results have exponential dependence on the diameter $M$ of the domain of data: a required feature size increases exponentially as $M$ grows. While this factor does not increase as a required amount of error $\varepsilon$ decreases, the dependence on $M$ affects the constant factor of the required feature size. In fact, Figure 1 shows that the performance is more quickly getting worse than standard random features. This may exhibit the weakness of the proposed approaches (or at least of the theoretical results).
- The equation in Line 170:
What is $e_i$?
- Subsampled dense grid:
This approach is what the authors used in Section 5 on experiments. However, it looks that there is no theoretical guarantee for this method. Those having theoretical guarantees seem not to be practically useful.
- Reweighted grid quadrature:
(i) It appears that there is no theoretical guarantee for this method.
(ii) The approach reminds me of Bayesian quadrature, which essentially obtains the weights by minimizing the worst case error in the unit ball of an RKHS. I would like to look at comparison with this approach.
(iii) Would it be possible to derive a time complexity?
(iv) How do you choose the regularization parameter $\lambda$ in the case of the $\ell_1$ approach?
- Experiments in Section 5:
(i) The authors report the computation times very briefly (320 seconds vs. 384 seconds for 28800 features on MNIST, and "The quadrature-based features ... are about twice as fast to generate, compared to random Fourier features ..." on TIMIT). I do not think this is enough: the authors should report the results in the form of tables, for example, varying the number of features (a minimal sketch of the feature-generation step in question is given after this review).
(ii) There should be comparison with the QMC-based methods of Yang et al. (ICML2014, JMLR2016). It is not clear what is the advantage of the proposed method over the QMC-based methods.
(iii) There should be an explanation of the settings of the MNIST and TIMIT classification tasks: what classifiers did you use, and how did you determine the hyper-parameters of these methods? At least such an explanation should be included in the appendix. | - The limitation of the obtained results: The authors assume that the spectrum of a kernel is sub-Gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels, such as Matérn kernels, is not included, since their spectra only decay polynomially. In this sense, the results of the paper could be restrictive.
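For context on point (i) above: the random-Fourier-feature generation whose timing is being compared is, in its standard textbook form, dominated by a single n x d by d x D matrix product, which is why a table that varies the number of features D would be informative. A minimal sketch follows (a generic construction, not the authors' code; the Gaussian-kernel parameterization with bandwidth sigma is an assumption).

```python
import numpy as np

def gaussian_rff(X, D, sigma=1.0, seed=0):
    """Random Fourier features for k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).

    X: (n, d) data matrix. Returns an (n, D) feature matrix Z with
    Z @ Z.T approximating the kernel matrix; the generation cost is
    dominated by the X @ W product, i.e., it grows linearly in D.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # samples from the kernel's spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# sanity check on a tiny sample: feature inner products vs. the exact kernel
X = np.random.randn(5, 3)
Z = gaussian_rff(X, D=20000)
exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.abs(Z @ Z.T - exact).max())   # small approximation error
```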
ICLR_2023_4654 | ICLR_2023 | Weakness:
1) The proposed approach is straightforward (not a demerit) and is a natural extension of DETR to the few-shot setting, although there are some specific mechanism designs in this paper to facilitate such an extension. However, similar ideas can also be found in existing papers such as [1], which appeared on arXiv in 2021 and was published in 2022. Since [1] is already published, it should be included as a fair baseline to compare with and discuss, and [1] should be the closest related research in few-shot object detection. However, the performance reported in this manuscript seems to be lower than that in [1] by a significant margin.
[1] Meta-DETR: Image-Level Few-Shot Object Detection with Inter-Class Correlation Exploitation, T-PAMI, 2022.
2) The data in Table 4 indicate that the unsupervised pretraining is a key factor in the performance gain. However, there is no detailed discussion of the unsupervised pretraining in the main paper, which might be a problem. In fact, compared with the ablation study of Table 5, the unsupervised pretraining is much more important than the other modules presented in this paper. Therefore, I suggest focusing more on the pretraining method in the main paper.
3) I also cannot fully agree with the three "desiderata" claimed by the authors (although this is not a serious issue). In standard few-shot object detection, fine-tuning or re-training is not an evil. Moreover, "without re-training" is a merit of all attention-based few-shot approaches, not a unique merit of this approach. The second point, "an arbitrary number of novel objects," is actually not even an issue for those "re-training" few-shot methods. And the "re-training"-based methods also have the merit that they do not require the "queries" for detection on both base and novel classes, while the selection of "queries" may also affect the detection performance.
4) In fact, much few-shot research also focuses on the performance on both base and novel classes. Another important desideratum is eliminating, as much as possible, the performance drop on base classes when adapting to novel classes. However, in this paper (as for most "retraining-free" few-shot methods), the base-class performance is not addressed at all and no experimental statistics are provided, which makes comparison with most few-shot methods problematic. Since the attention-based approach relies on the "queries", might its base-class performance be worse than that of the "retraining" methods? | 2) The data in Table 4 indicate that the unsupervised pretraining is a key factor in the performance gain. However, there is no detailed discussion of the unsupervised pretraining in the main paper, which might be a problem. In fact, compared with the ablation study of Table 5, the unsupervised pretraining is much more important than the other modules presented in this paper. Therefore, I suggest focusing more on the pretraining method in the main paper.
Ie040B4nFm | EMNLP_2023 | - The proposed system seems to deter the model in terms of their BLEU scores (system degrades in 2 out of the 3 settings). This leads me to think that while the model seems to do well on speaker specific terms/inflections, the overall translations degrade.
- How would we choose which ELM to pick (male/female)? Does this require us to know the speaker’s gender beforehand, i.e., at inference time? This seems like a drawback as the accuracy should be calculated after using a gender detection model in the pipeline (at least in the cases where vocal traits match speaker identity).
- What happens when a single audio file has two speakers (male and female) conversing with each other? Which ELM to pick in that case? | - How would we choose which ELM to pick (male/female)? Does this require us to know the speaker’s gender beforehand, i.e., at inference time? This seems like a drawback as the accuracy should be calculated after using a gender detection model in the pipeline (at least in the cases where vocal traits match speaker identity). |
ARR_2022_187_review | ARR_2022 | 1. Not clear if the contribution of the paper are sufficient for a long *ACL paper. By tightening the writing and removing unnecessary details, I suspect the paper will make a nice short paper, but in its current form, the paper lacks sufficient novelty. 2. The writing is difficult to follow in many places and can be simplified.
1. Line 360-367 are occupying too much space than needed. 2. It was not clear to me that Vikidia is the new dataset that was introduced by the paper until I read the last section :) 3. Too many metrics used for evaluation. While I commend the paper’s thoroughness by using different metrics for evaluation, I believe in this case the multiple metrics create more confusion than clarity in understanding the results. I recommend using the strictest metric (such as RA) because it will clearly highlight the differences in performance. Also consider marking the best results in each column/row using boldface text. 4. I suspect that other evaluation metrics NDCG, SRRR, KTCC are unable to resolve the differences between NPRM and the baselines in some cases. For e.g., Based on the extremely large values (>0.99) for all approaches in Table 4, I doubt the difference between NPRM’s 0.995 and Glove+SVMRank 0.992 for Avg. SRR on NewsEla-EN is statistically significant. 5. I did not understand the utility of presenting results in Table 2 and Table 3. Why not simplify the presentation by selecting the best regression based and classification based approaches for each evaluation dataset and compare them against NPRM in Table 4 itself? 6. From my understanding, RA is the strictest evaluation metric, and NPRM performs worse on RA when compared to the baselines (Table 4) where simpler approaches fare better. 7. I appreciate the paper foreseeing the limitations of the proposed NPRM approach. However, I find the discussion of the first limitation somewhat incomplete and ending abruptly. The last sentence has the tone of “despite the weaknesses, NPRM is useful'' but it does not flesh out why it’s useful. 8. I found ln616-632 excessively detailed for a conclusion paragraph. Maybe simply state that better metrics are needed for ARA evaluation? Such detailed discussion is better suited for Sec 4.4 9. Why was a classification based model not used for the zero shot experiments in Table 5 and Table 6? These results in my opinion are the strongest aspect of the paper, and should be as thorough as the rest of the results. 10. Line 559: “lower performance on Vikidia-Fr compared to Newsela-Es …” – Why? These are different languages after all, so isn’t the performance difference in-comparable? | 2. The writing is difficult to follow in many places and can be simplified. |
NIPS_2017_217 | NIPS_2017 | - The paper is incremental and does not have much technical substance. It just adds a new loss to [31].
- "Embedding" is an overloaded word for a scalar value that represents object ID.
- The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end without any post-processing.
- Keypoint detection results should be included in the experiments section.
- Sometimes the predicted tag value might fall within the range of tag values for two or more nearby people; how is it determined to which person the keypoint belongs? (A generic sketch of the usual grouping rule is given after this review.)
- Line 168: It is mentioned that the anchor point changes if the neck is occluded. This makes training noisy since the distances for most examples are computed with respect to the neck.
Overall assessment: I am on the fence for this paper. The paper achieves state-of-the-art performance, but it is incremental and does not have much technical substance. Furthermore, the main improvement comes from running [31] in a post-processing stage. | - The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. |
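On the tag-ambiguity question in the review above: a minimal sketch of the greedy grouping rule typically used with associative-embedding-style tags, where each keypoint is assigned to the person whose mean tag is closest, or starts a new person if no mean is within a threshold. The threshold value and the use of a running mean are assumptions about the general technique, not a claim about this paper's exact implementation.

```python
import numpy as np

def group_by_tag(keypoint_tags, person_tags, max_gap=1.0):
    """Assign each keypoint tag to the person with the nearest mean tag.

    keypoint_tags: 1D array of predicted tag values for one joint type.
    person_tags:   list of 1D arrays of tags already collected per person.
    Returns a list of person indices (-1 means "start a new person").
    """
    assignments = []
    means = [np.mean(t) for t in person_tags]
    for tag in keypoint_tags:
        if not means:
            assignments.append(-1)
            continue
        dists = np.abs(np.asarray(means) - tag)
        j = int(np.argmin(dists))
        # if even the closest mean tag is far away, treat the keypoint as a new person
        assignments.append(j if dists[j] < max_gap else -1)
    return assignments

# toy usage: two people with mean tags near 0.1 and 2.0, plus one ambiguous keypoint at 1.0
people = [np.array([0.10, 0.12]), np.array([1.95, 2.05])]
print(group_by_tag(np.array([0.15, 1.0, 2.1]), people))
```

The ambiguous middle keypoint simply goes to whichever mean is (marginally) closer, which is the failure mode the review is pointing at.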
ikX6D1oM1c | ICLR_2024 | - I found Sec 5.1 and 5.2 difficult to read and I think clarity can be improved. What confused me initially was that you suggest fixing $P^*(U|x, a)$ but then the $\sup$ in Eq. 5 is also over the distributions $p(u|x, A)$. Reading it further, the sup is only for $A \neq a$ but I think clarifying that you only fix for the treatment $a$ that enters into $Q$ would be useful. Maybe this is obvious, but it will still make it easier to understand what is being optimized over in the $\sup$.
- It would also be nice to have some intuition for the proof of Theorem 1. Also, the invertible function $f^*$ would depend on the fixed $P^*$. Do certain distributions $P^*$ make it easier to determine $f^*$? In practice, how should you determine which $P^*$ to fix? | - It would also be nice to have some intuition for the proof of Theorem 1. Also, the invertible function $f^*$ would depend on the fixed $P^*$. Do certain distributions $P^*$ make it easier to determine $f^*$? In practice, how should you determine which $P^*$ to fix?
NIPS_2018_15 | NIPS_2018 | - The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: Overall, I think this is a really good paper. While some parts could be done a bit more principled and perhaps simpler, I think the paper makes a good contribution as it stands and may inspire a lot of interesting future work. My main concern is the comparison with state-of-the-art deep architectures, where I would like the authors to perform a better control (see below), the results of which may undermine their main claim to some extent. Details: - The comparison with state-of-the-art deep architectures seems a bit unfair. These architectures are designed for dealing with natural images and therefore have an order of magnitude more feature maps per layer, which are probably not necessary for the simple image statistics in the Pathfinder challenge. However, this difference alone increases the number of parameters by two orders of magnitude compared with hGRU or smaller CNNs. I suspect that using the same architectures with smaller number of feature maps per layer would bring the number of parameters much closer to the hGRU model without sacrificing performance on the Pathfinder task. In the author response, I would like to see the numbers for this control at least on the ResNet-152 or one of the image-to-image models. The hGRU architecture seems very ad-hoc. - It is not quite clear to me what is the feature that makes the difference between GRU and hGRU. Is it the two steps, the sharing of the weights W, the additional constants that are introduced everywhere and in each iteration (eta_t). I would have hoped for a more systematic exploration of these features. - Why are the gain and mix where they are? E.g. why is there no gain going from H^(1) to \tilde H^(2)? - I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that? - Why are both H^(1) and C^(2) multiplied by kappa in Eq. (10)? - Are alpha, mu, beta, kappa, omega constrained to be positive? Otherwise the minus and plus signs in Eqs. (7) and (10) are arbitrary, since some of these parameters could be negative and invert the sign. - The interpretation of excitatory and inhibitory horizontal connections is a bit odd. The same kernel (W) is applied twice (but on different hidden states). Once the result is subtracted and once it's added (but see the question above whether this interpretation even makes sense). Can the authors explain the logic behind this approach? Wouldn't it be much cleaner and make more sense to learn both an excitatory and an inhibitory kernel and enforce positive and negative weights, respectively? - The claim that the non-linear horizontal interactions are necessary does not appear to be supported by the experimental results: the nonlinear lesion performs only marginally worse than the full model. - I do not understand what insights the eigenconnectivity analysis provides. It shows a different model (trained on BSDS500 rather than Pathfinder) for which we have no clue how it performs on the task and the authors do not comment on what's the interpretation of the model trained on Pathfinder not showing these same patterns. 
Also, it's not clear to me where the authors see the "association field, with collinear excitation and orthogonal suppression." For that, we would have to know the preferred orientation of a feature and then look at its incoming horizontal weights. If that is what Fig. 4a shows, it needs to be explained better. | - I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that? |
NIPS_2021_2024 | NIPS_2021 | below). Using the related literature on active interventions would require full identification of the underlying DAG. It is emphasized that matching only the means can be done with significantly smaller number of interventions, and this is the difference from previous works. - Identifiability in terms of Markov equivalence classes (MEC) is well discussed. Graphical characterization of the proposed shift-interventional (shift-I) MEC, and its refinement over the general interventional MEC is given clearly. Assumptions are reasonable within the given setting. - Extending the decomposition of intervention essential graphs to shift interventional essential graphs is sound. Both of the proposed approaches for solving the problem, clique tree and supermodular strategies are reasonable. Use of a lower bound surrogate function to enable supermodularity is clever. - The paper is organized clearly, and the theoretical claims are well supported.
Weaknesses: I have several concerns on the importance of the proposed settings and usefulness of the results. - Although the causal matching problem seems interesting and new, it is not well motivated. To the reviewer's knowledge, interventions on a causal model are tied to inferring the underlying structure (it does not need to be the whole structure of the model). In this regard, it is not clear how exactly matching the means of a causal system is preferable to performing more relaxed cases of soft interventions. The authors are encouraged to further explain how this setting can be beneficial. - Deterministic shift interventions are useful to test the applicability of the proposed ideas. However, restricting the problem setting to only shift interventions is quite limited and leads to some rather trivial results. For instance, existence and uniqueness results of matching shift-intervention in Lemma 1, and the properties of source nodes in Lemma 2 are immediate observations in a DAG. - Clique tree approximation is just a minor modification of the cited central node algorithm (Greenewald et al., 2019). - Complexity of the submodularity approach subroutine uses SATURATE algorithm (Krause et al., 2008), and is said to scale with $N^5$
in appendix D.4. It is worth commenting on the feasibility of this approach. For instance, what are the runtimes of the simulations for large models in Section 6? - It is a nice result that the number of proposed interventions is only a logarithmic factor of the lower bound. However, the baselines in the simulations are not very strong to demonstrate the usefulness. Though coloring approach of Shanmugam et al., 2015 is a related active intervention design, the goal of it is broader than finding a matching intervention. For instance, a simple random upstream search, the other baseline, performs much better than coloring due to the simpler objective. That being said, the reviewer understands that the proposed task is new and fair comparisons may not be easy.
Although this paper has several nice properties, the overall contribution, constraints on the problem, and the importance of the results are not adequate for publication at NeurIPS.
Main limitations of the work, which are also stated in the above review, and potential impact of the work, which is not very imminent, are adequately addressed in the discussion section. | - Although the causal matching problem seems interesting and new, it is not well motivated. To the reviewer’s knowledge, interventions on a causal model are tied to inferring the underlying structure (it does not need to be the whole structure of the model). In this regard, it is not clear how exactly matching the means of a causal system is preferable to performing more relaxed cases of soft interventions. The authors are encouraged to further explain how this setting can be beneficial. |
NIPS_2016_279 | NIPS_2016 | Weakness: 1. The main concern with the paper is the applicability of the model to real-world diffusion processes. Though the authors define an interesting problem with elegant solutions, it would be great if they could provide empirical evidence that the proposed model captures diffusion phenomena in the real world. 2. Though the IIM problem is defined on the Ising network model, all the analysis is based on the mean-field approximation. Therefore, it would be great if the authors could carry out experiments to show how similar the mean-field approximation is to the true distribution, via methods such as Gibbs sampling. Detailed Comments: 1. Section 3, Paragraph 1, Line 2, if there there exists -> if there exists. | 1. The main concern with the paper is the applicability of the model to real-world diffusion processes. Though the authors define an interesting problem with elegant solutions, it would be great if they could provide empirical evidence that the proposed model captures diffusion phenomena in the real world.
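On point 2 of the review above: the suggested mean-field vs. Gibbs comparison can be prototyped very cheaply. A minimal sketch on a small toy Ising model follows; the couplings, fields, and sweep counts are arbitrary assumptions, and the comparison is between empirical Gibbs magnetizations and the naive mean-field fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)              # symmetric couplings, no self-interaction
h = rng.normal(scale=0.2, size=n)     # external fields

def gibbs_magnetization(J, h, sweeps=6000, burn_in=1000):
    """Estimate E[s_i] for p(s) proportional to exp(0.5 s^T J s + h^T s), s_i in {-1, +1}."""
    s = rng.choice([-1.0, 1.0], size=len(h))
    acc = np.zeros(len(h))
    for t in range(sweeps):
        for i in range(len(h)):
            a = J[i] @ s + h[i]                       # local field (J[i, i] = 0)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * a))   # P(s_i = +1 | rest)
            s[i] = 1.0 if rng.random() < p_plus else -1.0
        if t >= burn_in:
            acc += s
    return acc / (sweeps - burn_in)

def mean_field_magnetization(J, h, iters=500):
    """Naive mean-field fixed point m_i = tanh(sum_j J_ij m_j + h_i)."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(J @ m + h)
    return m

print("Gibbs:     ", np.round(gibbs_magnetization(J, h), 2))
print("Mean-field:", np.round(mean_field_magnetization(J, h), 2))
```

Comparing the two printed magnetization vectors (and repeating over coupling strengths) is the kind of evidence the review asks for regarding the quality of the mean-field approximation.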
V8PhVhb4pp | ICLR_2024 | The main weaknesses of this paper are the lack of enough qualitative results and the ambiguity of explanation.
1. In the ablation study of 4.3, only one particular qualitative example is shown to demonstrate the effectiveness of different components. This is far from being convincing. The authors should have included more than 10 results of different prompts in the appendix for that.
2. In the "bidirectional guidance" part of section 4.3 Ablation Studies, the results shown at the top row of figure 6 seem to be totally different shapes. I understand this can happen for the 2D diffusion model. However the text also says "... and the 3D diffusion model manifests anomalies in both texture and geometric constructs.". But where are the 3D diffusion results? From my understanding the results from the 3D diffusion model should always look like the same shape and yield consistent multi-view renderings. I did not find these results in figure 6.
3. Figure 4 shows the main qualitative results of the proposed feed-forward method. However there is no comparison to previous methods. I think at least the comparison to Shap-E should be included.
4. The results of Zero-1-to-3 shown in Figure 5 are odd. Why are all the other methods shown as final 3D results with mesh visualization, while Zero-1-to-3 only has multi-view generation results? My understanding is that to generate the 3D results in the lower-left corner of Figure 5, we still need to use the SDS loss. If this is true, then a direct competitor should be Zero-1-to-3 with the SDS loss.
5. More details about the decoupled geometry and texture control on page 8 are needed. What does it mean to fix the 3D prior? Do you mean fixing the initial noise of the 3D diffusion? When fixing the textual prompt of the 2D diffusion, do you also fix the initial noise? | 2. In the "bidirectional guidance" part of section 4.3 Ablation Studies, the results shown at the top row of figure 6 seem to be totally different shapes. I understand this can happen for the 2D diffusion model. However the text also says "... and the 3D diffusion model manifests anomalies in both texture and geometric constructs.". But where are the 3D diffusion results? From my understanding the results from the 3D diffusion model should always look like the same shape and yield consistent multi-view renderings. I did not find these results in figure 6.
ICLR_2023_1553 | ICLR_2023 | of the papers in my opinion are as follows: 1) The method is only tested on two datasets. Have the authors tried more datasets to get a better idea of the performance? 2) The code for the paper is not released. | 1) The method is only tested on two datasets. Have the authors tried more datasets to get a better idea of the performance?
NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B], which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading up to the related work section). Bilinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) In Sec. 4.2 it is not clear how the question is being used to learn an attention on the image feature, since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill conditioned and numerically unstable.
6. (*) Is the object-detection-based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren't we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. "Neural Module Networks." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. "Simple Baseline for Visual Question Answering." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. | 5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature, since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also, it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable.
bIlnpVM4bc | ICLR_2025 | - The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist.
- A comprehensive benchmarking against existing alternatives is lacking. Comparisons are only made to their proposed variants and Sliding Window Attention in fair setups. A thorough comparison with other models listed in Appendix A (such as MEGA, adapted to Mamba) would strengthen the findings. Additionally, selected architectures are evaluated on a very small scale and only measured by perplexity. While some models achieve lower perplexity, this alone may not suffice to establish superiority (e.g., in H3 by Dao et al., 2022b, lower perplexity is reported against transformer baselines).
- Results on common benchmarks are somewhat misleading. The paper aims to showcase the architecture’s strengths, yet comparisons are often made against models trained on different data distributions, which weakens the robustness of the conclusions.
- Conclusions on long-context handling remain vague, although this should be a key advantage over transformers. It would be helpful to include dataset statistics (average, median, min, max lengths) to clarify context length relevance.
- The only substantial long-context experiment, the summarization task, should be included in the main paper, with clearer discussion and analysis.
- Section 4, “Analysis,” could benefit from clearer motivation. Some explored dimensions may appear intuitive (e.g., l. 444, where SWA is shown to outperform full attention on larger sequence lengths than those used in training), which might limit the novelty of the findings. Other questions seem a bit unrelated to the paper topics (see Questions).
- Length extrapolation, a key aspect of the paper, is barely motivated or discussed in relation to prior work.
- The paper overall feels somewhat unstructured and difficult to follow. Tables present different baselines inconsistently, and messages regarding architectural advantages are interleaved with comments on training data quality (l. 229). The evaluation setup lacks consistency (performance is sometimes assessed on real benchmarks, other times by perplexity), and the rationale behind baseline choices or research questions is insufficiently explained. | - The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist. |
pUOesbrlw4 | ICLR_2024 | 1. The paper is lacking a clear and precise definition of unlearning. Its is important to show the definition of unlearning that you want to achieve through your algorithm.
2. The proposed algorithm is an empirical algorithm without any theoretical guarantees. It is important for unlearning papers to provide unlearning guarantees against an adversary.
3. The approach is very similar to this method (http://proceedings.mlr.press/v130/izzo21a/izzo21a.pdf) applied on each layer, which is not cited.
4. A simple baseline is just applying all the unlearning algorithms mentioned in the paper to the last layer vs. the entire model. This comparison is missing.
5. All the unlearning verifications are only shown with respect to the accuracy of the model or the confusion matrix; however, the information is usually contained in the weights of the model, hence other metrics like membership attacks or re-train time after forgetting should be considered.
6. The authors should also consider applying this method to a linear perturbation of the network, as in those settings you will be able to get theoretical guarantees with regard to the proposed method, and also get better results.
7. Since the method is applied on each layer, the authors should provide a plot of how different weights of the model move, for instance, plot the relative weight change after unlearning to see which layers are affected the most after unlearning. | 7. Since the method is applied on each layer, the authors should provide a plot of how different weights of the model move, for instance, plot the relative weight change after unlearning to see which layers are affected the most after unlearning.
NIPS_2017_415 | NIPS_2017 | Weakness:
1. From the methodology aspect, the novelty of the paper appears to be rather limited. The ENCODE part is already proposed in [10] and the incremental contribution lies in the decomposition part, which just factorizes the M_v into factor D and slices Phi_v.
2. For the experiment, I'd like to see the effect of the optimized connectome in comparison with that of the LiFE model, so we can see the performance differences and the effectiveness of the tensor-based LiFE_sd model. This part of the experiment is missing. | 1. From the methodology aspect, the novelty of the paper appears to be rather limited. The ENCODE part is already proposed in [10] and the incremental contribution lies in the decomposition part, which just factorizes the M_v into factor D and slices Phi_v.
ICLR_2022_1653 ICLR_2022 Weakness: (1) There is a large gap in the proof of Theorem 1. (2) Missing discussion of the line of research using random matrix theory to understand the input-output Jacobian [1], which also considers the operator norm of the input-output Jacobian and draws a very similar conclusion, e.g., the squared operator norm must grow linearly with the number of layers; see eq (17) and the follow-up discussion in [1].
In what follows, I elaborate on (1) and (2) since they are related.
The biggest issue I see in the proof is the equation above (A.1) on page 11. The authors mixed the calculation for finite-width networks (on the left of the equation) and the infinite-width network calculation (on the right) together. More precisely, the authors exchange the order of the two limits $\lim_{\mathrm{width} \to \infty}$ and $\limsup_{x^{\alpha} \to x^{\beta}}$. The exchangeability of the two limits is questionable to me. In the order $\lim_{\mathrm{width} \to \infty} \limsup_{x^{\alpha} \to x^{\beta}}$, we need to handle a product of random matrices (if we compute the Jacobian). This is indeed a core contribution of [1], who uses free probability theory to compute the whole spectrum of the singular values of the Jacobian (assuming certain free independence of the matrices). If we swap the limits (we shouldn't do this without justification) to $\limsup_{x^{\alpha} \to x^{\beta}} \lim_{\mathrm{width} \to \infty}$, the problem itself is reduced to computing the derivative of the composed correlation map, which is much simpler. I think these two limits are not exchangeable in general. E.g., using the order $\limsup_{x^{\alpha} \to x^{\beta}} \lim_{\mathrm{width} \to \infty}$, both critical Gaussian and orthogonal initialization give the same answer. But using the order $\lim_{\mathrm{width} \to \infty} \limsup_{x^{\alpha} \to x^{\beta}}$, Gaussian and orthogonal initialization can give different answers; see eq (17) vs. (22) in [1].
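To make the contrast concrete, the two orderings can be written schematically as follows; the network map $f$ and the difference-quotient form are my own illustrative notation, not necessarily the exact quantity considered in the paper:

$$\lim_{\mathrm{width} \to \infty} \; \limsup_{x^{\alpha} \to x^{\beta}} \; \frac{\lVert f(x^{\alpha}) - f(x^{\beta}) \rVert}{\lVert x^{\alpha} - x^{\beta} \rVert}
\qquad \text{versus} \qquad
\limsup_{x^{\alpha} \to x^{\beta}} \; \lim_{\mathrm{width} \to \infty} \; \frac{\lVert f(x^{\alpha}) - f(x^{\beta}) \rVert}{\lVert x^{\alpha} - x^{\beta} \rVert}.$$

In the first ordering, the width limit acts on a finite-width difference quotient (hence on products of random weight matrices); in the second, one first passes to the infinite-width correlation map and only then takes the input limit.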
Several Qs: Q1:
How Theorem 1 leads to the four possible cases after it needs more discussion. In addition, what are the new insights beyond the existing ones from the order-chaos analysis? It seems the first case corresponds to the chaotic phase and the second case corresponds to the ordered phase. The third/fourth cases seem to be a finer analysis of the critical regime.
Q2: Remark 1 on the critical initialization. Several works have already identified the issue of the polynomial-rate convergence of the correlation to 1 for ReLU and smooth functions; see Proposition 1 in [2] and Sec. B.3 in [3].
Q3: I can't find where the legends "upper bound" and "largest found" are explained.
Q4: How does Thm 1 imply eq (4.1)? Do you assume the operator norm is bounded by O(1)?
[1] Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice, https://arxiv.org/pdf/1711.04735.pdf [2] On the Impact of the Activation Function on Deep Neural Networks Training, https://arxiv.org/abs/1902.06853 [3] Disentangling Trainability and Generalization in Deep Neural Networks, https://arxiv.org/abs/1912.13053
Minor comments: 1.) What is the domain of the inputs? It seems they are lying on the same sphere, which is not mentioned in the paper. | 1.) What is the domain of the inputs? It seems they are lying on the same sphere, which is not mentioned in the paper.
jfTrsqRrpb | ICLR_2024 | 1. This paper generate candidate object regions through unsupervised segmentation methods. However, it cannot be guaranteed that these unsupervised methods can generate object regions that cover all regions. Especially when the number of categories increases, I question the performance of the unsupervised segmentation methods. The author should provide :1) the specific performance of the unsupervised segmentation methods, 2) experimental comparison with existing methods when categories are more, like COCO to LVIS.
2. The author should provide more result metrics for comparison with previous methods. For example, LDET also provides AP and AR10. The author should include these comparisons to provide more comprehensive results.
3. [A] also proposes a CLN (region proposal generation algorithm). What about a performance comparison with this work?
4. What about the details of the Refinement module? I feel that this is all from previous methods, whether the objectness ranking or the inference.
[A] Detecting everything in the open world: Towards universal object detection. CVPR 2023 | 3. [A] also proposes a CLN (region proposal generation algorithm). What about a performance comparison with this work?
NIPS_2016_321 NIPS_2016 #ERROR! | - The presentation is at times too equation-driven and the notation, especially in chapter 3, is quite convoluted and hard to follow. An illustrative figure of the key concepts in section 3 would have been helpful.
NIPS_2016_499 NIPS_2016 - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with the scoring of causal predictions and the interventional data. If not, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to [10], but this is achieved by reducing the search space to the ancestral graphs. This means that the output of ACI has less information compared to the output of [10], which has a richer search space, i.e., DAGs. This is the price that has been paid to gain better performance. How much information of a DAG is encoded in its corresponding ancestral graph? - The second rule in Lemma 2, i.e., Eq (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z' in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise. | - The second rule in Lemma 2, i.e., Eq (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z' in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise.
7EK2hqWmvz | ICLR_2025 | 1. The paper does not clearly position itself with respect to existing retrieval-augmented methods that used to accelerate the model’s inference. A more thorough literature review is needed to highlight how RAEE differs from and improves upon prior work.
2. While the data presented in Figure 3 is comprehensive, I noticed that the visual presentation, specifically the subscripts, could be enhanced for better readability and aesthetic appeal. | 2. While the data presented in Figure 3 is comprehensive, I noticed that the visual presentation, specifically the subscripts, could be enhanced for better readability and aesthetic appeal.
NIPS_2019_757 NIPS_2019 Weakness 1. Online Normalization introduces two additional hyper-parameters: forward and backward decay factors. The authors use a logarithmic grid sweep to search for the best factors. This operation largely increases the training cost of Online Normalization. Question: 1. The paper mentions that Batch Normalization has the problem of gradient bias because it uses mini-batches to estimate the real gradient distribution. In contrast, Online Normalization can be implemented locally within individual neurons without the dependency on batch size. It sounds like Online Normalization and Batch Normalization are two different ways to estimate the real gradient distribution. I am confused why Online Normalization is unbiased and Batch Normalization is biased. ** I have read other reviews and the author response. I will stay with my original score. ** | 1. The paper mentions that Batch Normalization has the problem of gradient bias because it uses mini-batches to estimate the real gradient distribution. In contrast, Online Normalization can be implemented locally within individual neurons without the dependency on batch size. It sounds like Online Normalization and Batch Normalization are two different ways to estimate the real gradient distribution. I am confused why Online Normalization is unbiased and Batch Normalization is biased. ** I have read other reviews and the author response. I will stay with my original score. **
ICLR_2023_2322 | ICLR_2023 | ---
W1. The authors have clearly reduced whitespace throughout the paper; equations are crammed together, captions are too close to the figures. This by itself is grounds for rejection since it effectively violates the 9-page paper limit.
W2. An important weakness that is not mentioned anywhere is that the factors $A^{(k)}$ in Eq (8) must have dimensions that factorize the dimensions of $W$. For example, they must satisfy $\prod_{k=1}^{S} a_j^{(k)} = w_j$. So what is hailed as greater flexibility of the proposed model in the caption of Fig 1 is in fact a limitation. For example, if the dimensions of $W$ are prime numbers, then for each mode of $W$, only a single tensor $A^{(k)}$ can have a non-singleton dimension in that same mode. This may be fixable with appropriate zero padding, but this has to at least be discussed and highlighted in the paper.
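As a quick, hedged illustration of the dimension constraint described above (the function and the example sizes below are mine, not taken from the paper), one can enumerate how a single mode size $w_j$ can be split across $S$ Kronecker factors:

```python
from itertools import product
from math import prod

def ordered_factorizations(n, s):
    """All ordered s-tuples of positive integers whose product is n."""
    return [t for t in product(range(1, n + 1), repeat=s) if prod(t) == n]

# A composite mode size admits many non-trivial splits across S = 3 factors.
print(ordered_factorizations(12, 3))  # includes (2, 2, 3), (2, 6, 1), ...
# A prime mode size forces all but one factor to be a singleton in that mode.
print(ordered_factorizations(7, 3))   # only (7, 1, 1), (1, 7, 1), (1, 1, 7)
```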
W3. The 2nd point in the list of contributions in Sec 1 claims that the paper provides a means of finding the best approximation in the proposed format. In fact, it is easy to see that this claim is likely to be false: The decomposition corresponds to a difficult non-convex optimization problem, and it is therefore unlikely that a simple algorithm with a finite number of steps could solve it optimally.
W4. SeKron is claimed to generalize various other decompositions. But it is not clear that the proposed algorithm could ever reproduce those decompositions. For example, since there is no SVD-based algorithm for CP decomposition, I strongly suspect that the proposed algorithm (which is SVD-based) cannot recreate the decomposition that, say, an alternating least squares based approach for CP decomposition would achieve.
W5. The paper is unclear and poor notation is used in multiple places. For example:
Subscripts are sometimes used to denote indices (e.g., Eq (5)), sometimes to denote sequences of tensors (e.g., Eqs (7), (8)), and sometimes used to denote both at the same time (e.g., Thm 3, Eq (35))! This is very confusing.
It is unclear how Eq (7) follows from Eq (5). The confusing indices exacerbate this.
In Thm 1, $A^{(k)}$ are tensors, so it's unclear what you mean by "$R_i$ are ranks of intermediate matrices".
In Alg 1, you apply SVD to a 3-way tensor. This operation is not defined. If you mean batched SVD, you need to specify that. The $W^{(k)}_{r_1 \cdots r_{k-1}}$ tensors in Eq (10) haven't been defined.
The definition of Unfold below Eq (13) is ambiguous. Similarly, you say that Mat reformulates a tensor to a matrix, but list the output space as $\mathbb{R}^{d_1 \cdots d_N}$, i.e., indicating that the output is a vector.
Below Eq (15) you discuss "projection". This is not an appropriate term to use, since these aren't projections; projection is a term with a specific meaning in linear algebra.
In Eq (16), the $r_k$ indices appear on the right-hand side but not on the left-hand side. | --- W1. The authors have clearly reduced whitespace throughout the paper; equations are crammed together, captions are too close to the figures. This by itself is grounds for rejection since it effectively violates the 9-page paper limit.
NIPS_2019_95 NIPS_2019 of the submission. * originality: I enjoyed reading this paper. It introduces a new and interesting twist on the secretary problem, thereby providing a stylized theoretical version capturing the main essence of the task of ranking in many online settings. Some parts of the analysis also provide novel techniques that may be independently useful for other purposes (e.g. the new anti-concentration inequality). The proposed randomized algorithm is natural and somewhat unsurprising, but its analysis building upon connections to linear probing is interesting. * quality: By and large the paper seems to be technically sound. I have gone through most proofs in detail, and although some would benefit from added clarity (see some examples below), I haven't found any main flaws. * clarity: The paper is in general well written, although there is room for some improvement. I list some examples below. * significance: In terms of significance, I believe that this work would be of interest to a small fraction of researchers within NIPS. In fact, in terms of fit, this looks more like a submission to be found within SODA. * other details/comments: - p.2, line 42: the optimal => an optimal - p.4, lines 155 to 159: this is a bit of a repeat, and somewhat ill-placed. Should appear before line 148 - p.4, measure of sortedness: perhaps indicate whether there are other interesting measures one could consider (besides Kendall's tau and Spearman's footrule) - p.5, line 192,1: $R$ should be defined (set of available ranks); $r_1$ should be defined as 0 - p.5, line 192,5: how are ties within the argmin dealt with? - p.5, line 203: this equation is not numbered, but seems to be referred to as Eq. (3.2) later on; so it should be numbered. Same elsewhere where you have an equation which you want to refer to later on ... - p.5, proof of Proposition 3.3: in the introduction of the event ${\cal O}_\sigma$ 5 lines above, it was for a permutation $\sigma$ on $t$ elements. Now in the proof, $\sigma$ is a permutation over $t-1$ elements. Which conditional event do we have in the conditional probability? By the way, that formula indicates that the relative rank $r_t$ is {\em conditionally} uniformly distributed ... - p.5, line 227: $t$-the should be $t^{th}$. Also, what bound do you refer to at the end of the sentence? - p.7, line 247: shouldn't we say: is popular at time $t$ w.p. at most $e^{-\Omega(\alpha)}$, as that would help in the proof of Lemma 3.5; otherwise I don't see how the last equality in 263 can be true, as the integral could be infinite? - p.8, line 277: should $O(n\sqrt{n})$ be $\Omega(n\sqrt{n})$? | * significance: In terms of significance, I believe that this work would be of interest to a small fraction of researchers within NIPS. In fact, in terms of fit, this looks more like a submission to be found within SODA.
ICLR_2022_2110 ICLR_2022 Weakness: 1) Although each part of the proposed method is effective, the overall algorithm is still cumbersome. It has multiple stages. In contrast, many existing pruning methods do not need fine-tuning. 2) Technical details and formulations are limited. It seems that the main novelty is reflected in the scheme or procedure. 3) The experimental results are not convincing. The compared methods are few. Although few authors have attempted to prune EfficientNet, other networks such as ResNet could also be compressed in the experiments. In addition, the performance gains compared with SOTAs are marginal, within 1%. 4) The paper is poorly written. There are many typos, and some are listed as follows: --In the caption of Figure 2, "An subset of a network" should be "A subset of a network". --In Line157 of Page4, "The output output vector" should be "The output vector". --In Line283 of Page7, "B0V2 as," should be "B0V2 as teacher,". --In Line301 of Page7, "due the inconsistencies" should be "due to the inconsistencies". | 2) Technical details and formulations are limited. It seems that the main novelty is reflected in the scheme or procedure.
NIPS_2019_1089 NIPS_2019 - The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion, including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to, such as "Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019". - Since the model has connections to convolutional arithmetic units, ConvACs can also be a baseline for comparison. Given that you mention "resulting in a correspondence of our HPFN to an even deeper ConAC", it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn "flexible and higher-order local and global intercorrelations"? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well, but there seem to be many more parameters in the model, especially as the model consists of more layers. Could you comment on these tradeoffs, including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods are due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful.
- More experimental details, such as the neural networks and hyperparameters used, should be included in the appendix. - Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compact" -> "Despite being compact" 2. line 56: "We refer multiway arrays" -> "We refer to multiway arrays" 3. line 158: "HPFN to a even deeper ConAC" -> "HPFN to an even deeper ConAC" 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parentheses. 6. and so on... ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization" which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models, with experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would make this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines, and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones. | - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality?
NIPS_2020_1228 NIPS_2020 - The method section does not look self-contained and lacks descriptions of some key components. In particular: * What is Eq.(9) for? Why "the SL is the negative logarithm of a polynomial in \theta" -- where is the "negative logarithm" in Eq.(9)? (The standard form from the literature is recalled after this review.) * Eq.(9) is not practically tractable. It looks like its practical implementation is discussed in the "Evaluating the Semantic Loss" part (L.140), which involves the Weighted Model Count (WMC) and knowledge compilation (KC). However, no details about KC are presented. Considering the importance of the component in the whole proposed approach, I feel it's very necessary to clearly present the details and make the approach self-contained. - The proposed approach essentially treats the structured constraints (a logical rule) as part of the discriminator that supervises the training of the generator. This idea does not look new -- one can simply treat the constraints as an energy function and plug it into energy-based GANs (https://arxiv.org/abs/1609.03126). Modeling structured constraints as a GAN discriminator to train the generative model has also been studied in [15] (which also discussed the relation between the structured approach and energy-based GANs). Though the authors derive the formula from a perspective of semantic loss, it's unclear what the exact difference from the previous work is. - The paper claims better results in the molecule generation experiment (Table 3). However, it looks like adding the proposed constrained method actually yields lower validity and diversity. | - The paper claims better results in the molecule generation experiment (Table 3). However, it looks like adding the proposed constrained method actually yields lower validity and diversity.
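For context on the first bullet above: in the semantic-loss literature, the loss is (up to a proportionality constant) the negative logarithm of the weighted model count of the constraint. For a constraint $\alpha$ over Boolean variables with predicted probabilities $p$, the textbook form is the following; it is recalled here for reference and is not necessarily identical to the paper's Eq. (9):

$$\mathrm{SL}(\alpha, p) \;\propto\; -\log \sum_{\mathbf{x} \models \alpha} \; \prod_{i :\, \mathbf{x}_i = 1} p_i \prod_{i :\, \mathbf{x}_i = 0} (1 - p_i).$$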
NIPS_2019_651 NIPS_2019 (large relative error compared to AA on the full dataset) are reported. - Clarity: The submission is well written and easy to follow; the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with the camera-ready version), a re-implementation of the method appears feasible. - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: 1. Algorithm 2 provides the coreset C, and the query Q consists of the archetypes z_1, ..., z_k, which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that? 2. The presented theorems provide guarantees for the objective functions phi on data X and coreset C for a query Q. Table 1, reporting the relative errors, suggests that there might be a substantial deviation between coreset and full-dataset archetypes. However, the interpretation of archetypes in a particular application is when AA proves particularly useful (as for example in [1] or [2]). Is the archetypal interpretation of identifying (more or less) stable prototypes whose convex combinations describe the data still applicable? 3. Practically, the number of archetypes k is of interest. In the presented framework, is there a way to perform model selection in order to identify an appropriate k? 4. The work in [3] might be worth mentioning as a related approach. There, the edacious nature of AA is approached by learning a latent representation of the dataset as a convex combination of (learnt) archetypes, and it can be viewed as a non-linear AA approach. [1] Shoval et al., Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space, Science 2012. [2] Hart et al., Inferring biological tasks using Pareto analysis of high-dimensional data, Nature Methods 2015. [3] Keller et al., Deep Archetypal Analysis, arXiv preprint 2019. ---------------------------------------------------------------------------------------------------------------------- I appreciate the authors' response and the additional experimental results. I consider the plot of the coreset archetypes on a toy experiment insightful, and it might be a relevant addition to the appendix. In my opinion, the submission constitutes a relevant contribution to archetypal analysis which makes it more feasible in real-world applications and provides some theoretical guarantees. Therefore, I raise my assessment to accept. | 1. Algorithm 2 provides the coreset C, and the query Q consists of the archetypes z_1, ..., z_k, which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that?
ICLR_2022_2123 | ICLR_2022 | of this submission and make suggestions for improvement:
Strengths - The authors provide a useful extension to existing work on VAEs, which appears to be well-suited for the target application they have in mind. - The authors include both synthetic and empirical data as test cases for their method and compare it to a range of related approaches. - I especially appreciated that the authors validated their method on the empirical data and also provide an assessment of face validity using established psychological questionnaires (BDI and AQ). - I also appreciated the ethics statement pointing out that the method requires additional validation before it may enter the clinic. - The paper is to a great extent clearly written.
Weaknesses - In Figure 2 it seems that Manner-1 use of diagnostic information is more important than Manner-2 use of this information, which calls into question your choice to set lambda = 0.5 in equation 3. Are you able to learn this parameter from the data? - Also in Figure 2, when applying your full model to the synthetic data, it appears to me that inverting your model seems to underestimate the within-cluster variance (compared to the ground truth). Could it be that your Manner-1 use of information introduces constraints that are too strong, as they do not allow for this variance? - It would strengthen your claims of "superiority" of your approach over others if you could provide a statistical test that shows that your approach is indeed better at recovering the true relationship compared to others. Please provide such tests. - There is important information about the empirical study missing that should be mentioned in the supplement, such as the recording parameters for the MRI, the preprocessing steps, and whether the resting state was recorded under an eyes-open or eyes-closed condition. A brief explanation of the harmonization technique would also be appreciated. It would also be helpful to mention the number of regions in the parcellation in the main text. - The validation scheme using the second study is not clear to me. Were the models trained on dataset A and then directly applied to dataset B, or did you simply repeat the training on dataset B? If the latter is the case, I would refer to this as a replication dataset and not a validation dataset (which would require applying the same model on a new dataset, without retraining). - Have you applied multiple-testing correction for the FID comparisons across diagnoses? If so, which? If not, you should apply it and please state that clearly in the main manuscript. - It is somewhat surprising that the distance between SCZ and MDD is shorter than between SCZ and ASD, as often the latter two are viewed as closely related. It might be helpful to discuss why that may be the case in more detail. - The third ethics statement is not clear to me. Could you clarify? - The font size in the figures is too small. Please increase it to improve readability. | - There is important information about the empirical study missing that should be mentioned in the supplement, such as the recording parameters for the MRI, the preprocessing steps, and whether the resting state was recorded under an eyes-open or eyes-closed condition. A brief explanation of the harmonization technique would also be appreciated. It would also be helpful to mention the number of regions in the parcellation in the main text.
NIPS_2019_82 NIPS_2019 1. One major risk of methods that exploit relationships between action units is that the relationships can be very different across datasets (e.g. AU6 can occur both in an expression of pain and in happiness, and this co-occurrence will be very different in a positive-salience dataset such as SEMAINE compared to something like the UNBC pain dataset). This difference in correlation can already be seen in Figure 1, with quite different co-occurrences of AU1 and AU12. A good way to test the generalization of such work is by performing cross-dataset experiments, which this paper is lacking. 2. The language in the paper is sometimes conversational and not scientific (use of terms like "massive"), and there are several opinions and claims that are not substantiated (e.g. "... facial landmarks, which are helpful for the recognition of AUs defined in small regions"); the paper could benefit from copy-editing. 3. Why are two instances of the same network (ResNet) used as different views? Would using a different architecture instead be considered a more differing view? It would be great to see a justification for using two ResNet networks. 4. Why is the approach limited to two views? It feels like the system should be able to generalize to more views without too much difficulty. Minor comments: - What is a PCA-style guarantee? - What is v in equation 2? - Why are different numbers of unlabeled images used in training the BP4D and EmotioNet models? Trivia: "massive face images" -> "large datasets"; "donates" -> "denotes" (x2); "adjacent" -> "adjacency" | 1. One major risk of methods that exploit relationships between action units is that the relationships can be very different across datasets (e.g. AU6 can occur both in an expression of pain and in happiness, and this co-occurrence will be very different in a positive-salience dataset such as SEMAINE compared to something like the UNBC pain dataset). This difference in correlation can already be seen in Figure 1, with quite different co-occurrences of AU1 and AU12. A good way to test the generalization of such work is by performing cross-dataset experiments, which this paper is lacking.
NIPS_2020_867 | NIPS_2020 | - As someone without a linguistics background, it was at times difficult for me to follow some parts of the paper. For example, it’s not clear to me why we care about the speaker payoff and listener payoff (separate from listener accuracy), rather than just a means to obtain higher accuracy --- is it important that the behavior of the speaker at test time stay close to its behavior during training? - I think more emphasis could be placed on the fact that the proposed methods require the speaker to have a model (in fact, in most of the experiments it’s an exact model) of the listener’s conditional probability p(t|m), and vice-versa. - I would have liked more description of the Starcraft environment (potentially in an Appendix?) | - I would have liked more description of the Starcraft environment (potentially in an Appendix?) |
NIPS_2019_499 NIPS_2019 of the method. Are there any caveats for practitioners due to some violation of the assumptions given in Appendix B, or for any other reasons? Clarity: the writing is highly technical and rather dense, which I understand is necessary for some parts. However, I believe the manuscript would be readable to a broader audience if Sections 2 and 3 were augmented with more intuitive explanations of the motivations and their proposed methods. Many details of the derivations could be moved to the appendix, and the resultant space could be used to highlight the key machinery which enabled efficient inference and to develop intuitions. Many terms and notations are not defined in the text (as raised in "other comments" below). Significance: the empirical results support the practical utility of the method. I am not sure, however, if the experiments on synthetic datasets support the theoretical insights presented in the paper. I believe that the method is quite complex and recommend that the authors release the code to maximize the impact. Other comments: - line 47 - 48 "over-parametrization invariably overfits the data and results in worse performance": over-parameterization seems to be very helpful for supervised learning of deep neural networks in practice ... Also, I have seen a number of theoretical works showing the benefits of over-parametrisation, e.g. [1]. - line 71: $\beta$ is never defined. It denotes the set of model parameters, right? - line 149-150 "the convergence to the asymptotically correct distribution allows ... obtain better point estimates in non-convex optimization.": this is only true if the assumptions in Appendix B are satisfied, isn't it? How realistic are these assumptions in practice? - line 1: MCMC is never defined: Markov Chain Monte Carlo - line 77: typo "gxc lobal" => "global" - eq.4: $\mathcal{N}$ and $\mathcal{L}$ are not defined. Normal and Laplace, I suppose. You need to define them, please. - Table 2: using the letter `a` to denote the difference in used models is confusing. - too many acronyms are used. References: [1] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. "A convergence theory for deep learning via over-parameterization." arXiv preprint arXiv:1811.03962 (2018). ---------------------------------------------------------------------- I am grateful that the authors have addressed most of the concerns about the paper, and have updated my score accordingly. I would like to recommend acceptance provided that the authors reflect the given clarifications in the paper. | - line 47 - 48 "over-parametrization invariably overfits the data and results in worse performance": over-parameterization seems to be very helpful for supervised learning of deep neural networks in practice ... Also, I have seen a number of theoretical works showing the benefits of over-parametrisation, e.g. [1].
NIPS_2016_313 NIPS_2016 Weakness: 1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows a detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim. 2. The paper lacks detail about the techniques, which makes it hard to reproduce the results. For example, the sparsification process is unclear, even though it is important for extracting the landmark features for the following steps. How are the landmarks generated on the edges? How is the number of landmarks used decided? What kind of image features are used? What is the fixed radius with different scales? How is shape invariance achieved, etc.? 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning-based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods. 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images? | 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work.
uSiyu6CLPh | ICLR_2025 | * I suggest that the authors show a more intuitive figure to visualize the framework that includes the images and labels in the original dataset and also the corrected images. This will help the readers to gain more intuition for your method.
* The authors combine two existing techniques to get the framework without innovation. The adversarial attack or correction method and the domain adaptation method used by the authors were proposed by prior work. Moreover, the adopted domain adaptation method here is a very old and simple method which was proposed eight years ago. Considering that so many effective domain adaptation methods have been proposed in recent years, why don't you use other domain adaptation methods to further improve the performance?
* In Section 3.3, the authors align the features of the weak classifier on the original dataset and the synthetic dataset. Considering that the difference between the original and the synthetic datasets is the corrected part, can we omit the correctly classified samples and only minimize the covariance difference for the adversarially corrected sample and the misclassified sample?
* How do you choose the hyper-parameters such as $\lambda,\epsilon$? Does your method work robustly for other choices of hyper-parameters? If not, how do you choose them? | * The authors combine two existing techniques to get the framework without innovation. The adversarial attack or correction method and the domain adaptation method used by the authors were proposed by prior work. Moreover, the adopted domain adaptation method here is a very old and simple method which was proposed eight years ago. Considering that so many effective domain adaptation methods have been proposed in recent years, why don't you use other domain adaptation methods to further improve the performance?
qb2QRoE4W3 | ICLR_2025 | Despite the idea being interesting, I have found some technical issues that weakened the overall soundness. I enumerate them as follows:
1. The assumption that generated URLs are always meaningfully related to the core content of the document from which the premises are to be fetched is, by and large, not true. It works for Wikipedia because the URLs are well-structured semantically.
2. LLMs generating URLs on Wikidata have a significantly higher probability of being linked with a valid URL because extensive entity linking has already been done. This, however, is not the case for many other web sources.
3. There are several URLs that are not named according to the premise entities. In that case, those sources will never be fetched.
4. How to resolve contradictory entailment from premises belonging to different sources?
5. There can be many sources that are themselves false (particularly for the open Internet and also in cases of unverified Wiki pages). So assuming the premises to be true may lead to incorrect RTE.
6. It is unclear how the prompt templates are designed, i.e., the rationale and methodology that would drive the demonstration example patterns in the few-shot cases.
7. The creation of the prompt dataset (for the few-shot case), together with its source, should be discussed.
8. The assumption that RTE (i.e. NLI) being true would imply that the hypothesis (fact/claim) is verified is a bit tricky and may not always be true. A false statement can entail a hypothesis as well as its true version. E.g.:
$\textit{Apples come in many colors}$ $\implies$ $\textit{Apples can be blue (claim)}$.
$\textit{John was a follower of Jesus}$ $\implies$ $\textit{John was a Christian (claim)}$.
9. Line 253: What is the citation threshold? I could not find the definition.
10. In the comparisons with the baselines and variants of LLM-Cite, what was the justification behind not keeping the model set fixed for all the experiments? I think this should be clarified.
11. In sections 4.1 and 4.2, an analysis of why the verification models perform better on model-generated claims as compared to human-generated claims is very important to me. I could not find any adequate analysis for that.
12. The key success of LLM-Cite depends on the NLI model (given that at least one valid URL is generated that points to a valid premise). Hence, a discussion on the accuracy of SOTA NLI models (with citation) and the rationale behind choosing the Check Grounding NLI API and Gemini-1.5-Flash should be included. | 7. The creation of the prompt dataset (for the few-shot case), together with its source, should be discussed.
CoEuk8SNI1 | EMNLP_2023 | - Very difficult to follow the motivation of this paper. And it looks like an incremental engineering paper.
- The abstract looks a little vague. For example, “However, it is difficult to fully model interaction between utterances …” What is 'interaction between utterances' and why is it difficult to model? This information is not evident from the previous context. Additionally, the misalignment between the two views might seem obvious since most ERC models aggregate information using methods like residual connections. So, why the need for alignment? Isn't the goal to leverage the advantages of both features? Or does alignment help in achieving a balance between both features?
- The author used various techniques to enhance performance, including contrastive learning, external knowledge, and graph networks. However, these methods seem at odds with the limited experiments conducted. For example, the author proposes a new semi-parametric inferencing paradigm involving memorization to address the recognition problem of tail-class samples. However, the term "semi-parametric" is not clearly defined, and there is a lack of experimental evidence to support the effectiveness of the proposed method in tackling the tail-class samples problem.
- The Related Work section lacks a review of self-supervised contrastive learning in ERC.
- The most recent comparative method is still the preprint version available on ArXiv, which lacks convincing evidence.
- Table 3 needs some significance tests to further verify the assumptions put forward in the paper.
- In Chapter 5.3, the significant impact of subtle hyperparameter fluctuations on performance raises concerns about the method's robustness. The authors could consider designing an automated hyperparameter search mechanism or decoupling dependencies on these hyperparameters to address this.
- There is a lack of error analysis.
- The formatting of the reference list is disorganized and needs to be adjusted.
- Writing errors are common throughout the paper. | - Very difficult to follow the motivation of this paper. And it looks like an incremental engineering paper.
NIPS_2020_1253 NIPS_2020 1. Perhaps the most important limitation I can see is the artificial environments used. In games, and especially those old Atari ones, audio events can be repeated exactly the same, and it's quite easy for the network to learn to distinguish new sounds, whereas this might not be the case in more realistic environments, where there's more variance and noise in the audio. 2. L.225: "Visiting states with already learned audio-visual pairs is necessary for achieving a high score, even though they may not be crucial for exploration" So that seems like an important limitation; the agent won't work well in this sort of environment, which can easily happen in realistic scenarios. 3. L.227: "The game has repetitive background sounds. Games like SpaceInvaders and BeamRider have background sounds at a fixed time interval, but it is hard to visually associate these sounds" Same here, repetitive background sounds might often be the case in real applications. 4. L190: It's a bit strange how the authors use a vanilla FFT instead of the more common STFT (overlapping segments and a Hann windowing function). Probably a good idea to try this for consistency with the literature (see the brief sketch below). Insufficient ablations: 6. An ablation on the weighting method of the cross-entropy loss would be nice to see. The authors note, for example, that in Atlantis their method underperforms because "the game has repetitive background sounds". This is a scenario I'd expect the weighting might have helped remedy. 7. An ablation with adding noise to the audio channel would be interesting. 8. An ablation sampling the negatives from unrelated trajectories would also be interesting. 9. Some architectural details are missing and some are unclear. For example, why is the 2D convnet shown in Fig. 2 fixed to random initialization? | 6. An ablation on the weighting method of the cross-entropy loss would be nice to see. The authors note, for example, that in Atlantis their method underperforms because "the game has repetitive background sounds". This is a scenario I'd expect the weighting might have helped remedy.
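Regarding point 4 above, here is a minimal sketch of the STFT front-end being suggested, assuming SciPy is available; the sample rate, frame sizes, and the placeholder signal are illustrative only, not taken from the paper:

```python
import numpy as np
from scipy.signal import stft

fs = 16_000                     # assumed sample rate of the game audio
x = np.random.randn(fs)         # placeholder 1-second mono signal

# Overlapping, Hann-windowed segments instead of a single vanilla FFT.
freqs, times, Z = stft(x, fs=fs, window="hann", nperseg=512, noverlap=384)
log_mag = np.log1p(np.abs(Z))   # log-magnitude spectrogram
print(log_mag.shape)            # (frequency bins, time frames)
```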
ARR_2022_52_review ARR_2022 1. A critical weakness of the paper is the lack of novelty and the incremental nature of the work. The paper addresses a particular problem of column operations in designing semantic parsers for Text-to-SQL. They design a new dataset which is a different train/test split of an existing dataset, SQUALL. The other synthetic benchmark the paper proposes is based on a single question template, "What was <column> in <year>?".
2. The paper assumes strong domain knowledge about the column types and assumes a domain developer first creates a set of templates based on column types. With the help of these column templates, I think many approaches (parsers) can easily solve the problem. For example, parsers utilizing the SQL grammar to generate the output SQL can use these templates to add new rules that can be used while generating the output. A few such works are: 1. A Globally Normalized Neural Model for Semantic Parsing, ACL 2021; 2. TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation, EMNLP 2018; 3. GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing, ICLR 2021.
1. It would be good if the authors could learn the templates for schema expansion from the source-domain data.
2. Compare the proposed approach with methods which use domain knowledge in the form of a grammar. Comparing with the methods below will show the generality of the ideas proposed in the paper in a much better way.
1. A Globally Normalized Neural Model for Semantic Parsing, ACL 2021 2. TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation, EMNLP 2018 3. GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing, ICLR 2021. | 1. A critical weakness of the paper is the lack of novelty and the incremental nature of the work. The paper addresses a particular problem of column operations in designing semantic parsers for Text-to-SQL. They design a new dataset which is a different train/test split of an existing dataset, SQUALL. The other synthetic benchmark the paper proposes is based on a single question template, "What was <column> in <year>?".
NIPS_2018_901 NIPS_2018 Weakness: - The experiments are only done on one game environment. More experiments are necessary. - This method does not seem generalizable to other games, e.g. FPS games. People can hardly do this on realistic scenes such as driving. The static assumption is too strong. | - The experiments are only done on one game environment. More experiments are necessary.
NIPS_2020_1451 NIPS_2020 1. Unlike the works of HaoChen and Sra and of Nagaraj et al., this work uses the fact that all component functions f_i are mu-strongly convex (the standard definition is recalled below). 2. The authors need to explain why removing some of the assumptions, like bounded variance and bounded gradients, is an important contribution via solid examples. 3. The quantity sigma^{*} being finite also implies that all the gradients are finite via the smoothness property of the functions f_i, and gives a natural upper bound. | 2. The authors need to explain why removing some of the assumptions, like bounded variance and bounded gradients, is an important contribution via solid examples.
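For reference, here is the standard (textbook) definition of mu-strong convexity assumed for each component $f_i$; it is recalled for the reader and is not quoted from the paper under review:

$$f_i(y) \;\ge\; f_i(x) \;+\; \langle \nabla f_i(x),\, y - x \rangle \;+\; \frac{\mu}{2}\, \lVert y - x \rVert^2 \qquad \text{for all } x,\, y.$$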
NIPS_2019_900 NIPS_2019 - No consideration for approximate number schemes in related work. - No support for floating-point numbers. - At many points in the paper, it is not clear if the unencrypted model is a model with PAA or a model with ReLU activation. - What is TCN? The abbreviation is explained way too late in the paper. - Tables in chapter 5 are overloaded, and the abbreviations used are not explained properly. - Figure 3a does not highlight that the shift operation is cheap. - Although the authors claim they implement ImageNet for the first time, it is very slow and accuracy is very low; "SHE needs 1 day and 2.5 days to test an ImageNet picture by AlexNet and ResNet-18, respectively" and accuracy is around 70% | - Although the authors claim they implement ImageNet for the first time, it is very slow and accuracy is very low; "SHE needs 1 day and 2.5 days to test an ImageNet picture by AlexNet and ResNet-18, respectively" and accuracy is around 70%
NIPS_2019_165 NIPS_2019 of the approach and experiments or list future directions for readers. The write-up is exceptionally clear and well organized -- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning. 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want me to look at the learning curves? Does a worse-performing model always result in structural collapse? What is the accuracy number? For the last task, or the average? 3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page. 4. Explain the scramble network better... 5. Fig 1: Are these the same plots, just colored differently? It would be nice to keep all three on the same scale (the left one seems condensed). M-PHATE results in significantly more interpretable visualization of evolution than previous work. It also preserves neighbors better (Question: why do you think t-SNE works better in two conditions? The difference is very small, though). On continual learning tasks, M-PHATE clearly distinguishes poorly performing learning algorithms via a collapse. (See the question about this in 5. Improvement). The generalization vignette shows that the heterogeneity in M-PHATE output correlates with performance. I would really like to recommend a strong accept for this paper, but my major concern is that the vignettes focus on one dataset (MNIST) and one NN architecture (MLP), which makes the experiments feel incomplete. The results and observations made by the authors would be much more convincing if they could repeat these experiments for more datasets and NN architectures. | 1. Add a few more sentences explaining the experimental setting for continual learning. 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want me to look at the learning curves? Does a worse-performing model always result in structural collapse? What is the accuracy number? For the last task, or the average?
elMKXvhhQ9 | ICLR_2024 | 1. The paper should acknowledge related works that are pertinent to the proposed learnable data augmentation, such as [a] and [b]. It is crucial to cite and discuss the distinctions between these works and the proposed approach, providing readers with a clear understanding of the novel contributions made by this study.
2. The paper predominantly explores the concept of applying learnable data augmentation for graph anomaly detection. While this is valuable, investigating its applicability in broader graph learning tasks, such as node classification with contrastive learning, could significantly expand its scope. For example, how about its benefits to generic graph contrastive learning tasks, compared to existing contrastive techniques?
3. While consistency training might usually be deployed on unlabeled data, I wonder if it would be beneficial to utilize labeled data for consistency training as well. Specifically, labeled data has exact labels, which might provide effective information for consistency training the model in dealing with the task of graph anomaly detection.
[a] Graph Contrastive Learning Automated. Yuning You, Tianlong Chen, Yang Shen, Zhangyang Wang. ICML 2021
[b] Graph Contrastive Learning with Adaptive Augmentation. Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang. WWW 2021. | 3. While consistency training might usually be deployed on unlabeled data, I wonder if it would be beneficial to utilize labeled data for consistency training as well. Specifically, labeled data has exact labels, which might provide effective information for consistency training the model in dealing with the task of graph anomaly detection. [a] Graph Contrastive Learning Automated. Yuning You, Tianlong Chen, Yang Shen, Zhangyang Wang. ICML 2021 [b] Graph Contrastive Learning with Adaptive Augmentation. Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang. WWW 2021.
zkzf0VkiNv | ICLR_2024 | 1. Figure 2 shows that, without employing data augmentation and similarity-based regularization, the performance of CR-OSRS is comparable to RS-GM.
2. Could acceleration be achieved by incorporating entropy regularization into the optimization process?
3. It would be beneficial if the authors could provide an analysis of the computational complexity of this method.
4. The author wants to express too much content in the article, resulting in insufficient details and incomplete content in the main text.
5. The experimental part needs to be reorganized and further improved.
Detailed comments
1) It is recommended to swap the positions of Sections 4.3 and 4.4. According to the diagram, 4.3 is the training section, and 4.4 aims to measure certified space. Both 4.1 and 4.2 belong to the robustness and testing sections. Therefore, putting these parts together feels more reasonable.
2) The author should emphasize "The article is a general and robust method that can be applied to various GM methods, and we only use NGMv2 as an example." at the beginning of the article, rather than just showing in the title of Method Figure 1. This can better highlight the characteristics and contribution of the method.
3) The experimental part needs to be reorganized and further improved. The experimental section has a lot of content, but the experimental content listed in the main text does not highlight the superiority of the method well, so it needs to be reorganized. Based on the characteristics of the article, the experimental suggestions in the main text should include the following: 1. Robustness comparison and accuracy analysis with other empirical robustness algorithms for the same type of perturbations, rather than just focusing on the RS method, to clarify the superiority of the method. (You should supplement this part.) 2. Suggest using ablation experiments as the second part to demonstrate the effectiveness of the method. 3. Parameter analysis, elucidating the method's dependence on parameters. 4. Consider its applications on six basic algorithms as an extension part. Afterwards, based on the importance, select the important ones to place in the main text, and show the rest in the appendix.
4) In P16, the proof of claim 2, it should be P(I \in B) not P(I \in A).
5) In Table 2 of appendix, the Summary of main existing literature in learning GM can list the related types of perturbations.
6) In Formula 8, please clarify the meaning of lower p (lower bound of unilateral confidence), and the reason and meaning of setting as 1/2. | 3) The experimental part needs to be reorganized and further improved. The experimental section has a lot of content, but the experimental content listed in the main text does not highlight the superiority of the method well, so it needs to be reorganized. Based on the characteristics of the article, the experimental suggestions in the main text should include the following: |
NIPS_2018_681 | NIPS_2018 | Weakness: However, I'm not very convinced by the experimental results and I somewhat doubt that this method would work in general and is useful in any sense. 1. The authors propose a new classification network, but I somewhat doubt that its classification error is universally as good as the standard softmax network. It is a bit dangerous to build a new model for better detecting out-of-distribution samples, while losing its classification accuracy. Could the authors report the classification accuracy of the proposed classifier on ImageNet data? Some theoretical justifications, if possible, would be great for the issue. 2. The detection procedure (i.e., measuring the norm of the predicted embedding) is not intuitive and I am not convinced why it is expected to work. Could the authors provide more detailed explanations about it? 3. The baselines to compare are not enough, e.g., compare the proposed method with LID [1], which is one of the state-of-the-art detection methods for detecting adversarial samples. [1] Ma, X., Li, B., Wang, Y., Erfani, S.M., Wijewickrema, S., Houle, M.E., Schoenebeck, G., Song, D. and Bailey, J., Characterizing adversarial subspaces using local intrinsic dimensionality. In ICLR, 2018 4. Similar to Section 4.3, it is better to report AUROC and detection error when the authors evaluate their methods for detecting adversarial samples. | 1. The authors propose a new classification network, but I somewhat doubt that its classification error is universally as good as the standard softmax network. It is a bit dangerous to build a new model for better detecting out-of-distribution samples, while losing its classification accuracy. Could the authors report the classification accuracy of the proposed classifier on ImageNet data? Some theoretical justifications, if possible, would be great for the issue.
NIPS_2017_28 | NIPS_2017 | - Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g. what is "sample count" in fig. 2? It would really help adding more details to the paper and/or supplementary information in order to appreciate what exactly was done in each simulation. Whenever statistical inferences are made, there should be error bars and/or p-values.
- Although in principle the argument that in case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effectively implemented and concrete predictions tested with simulations.
- Model implementation should be better justified: for example, the stopping rule with n consecutive identical samples seems a bit arbitrary (at least it's hard to imagine neural/behavioral parallels for that) and sensitivity with regard to n is not discussed.
- Finally it's unclear how perceptual modifications apply for the case of recall: in my understanding the items are freely recalled from memory and hence can't be perceptually modified. Also what are speeded/unspeeded conditions? | - Although in principle the argument that in case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effectively implemented and concrete predictions tested with simulations. |
ICLR_2023_2630 | ICLR_2023 | - The technical novelty and contributions are a bit limited. The overall idea of using a transformer to process time series data is not new, as also acknowledged by the authors. The masked prediction was also used in prior works e.g. MAE (He et al., 2022). The main contribution, in this case, is the data pre-processing approach that was based on the bins. The continuous value embedding (CVE) was also from a prior work (Tipirneni & Reddy 2022), and also the early fusion instead of late fusion (Tipirneni & Reddy, 2022; Zhang et al., 2022). It would be better to clearly clarify the key novelty compared to previous works, especially the contribution (or performance gain) from the data pre-processing scheme.
- It is unclear if there are masks applied to all the bins, or only to one bin as shown in Fig. 1.
- It is unclear how the static data (age, gender, etc.) were encoded as input to the MLP. The time-series data was also not clearly presented.
- It is unclear what the "learned [MASK] embedding" means in the SSL pre-training stage of the proposed method.
- The proposed "masked event dropout scheme" was not clearly presented. Was this dropout applied to the ground truth or the prediction? If it was applied to the prediction or the training input data, will this be considered for the loss function?
- The proposed method was only evaluated on EHR data but is claimed to be a method designed for "time series data" in both the title and throughout the paper. Suggest either toning down the claim or providing justification on other time series data.
- The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL performs worse than most of the compared methods.
- Missing reference to the two used EHR datasets at the beginning of Sec. 4. | - The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL performs worse than most of the compared methods.
NIPS_2017_390 | NIPS_2017 | + Intuitive and appealingly elegant method that is simple and fast.
+ Authors provide several interpretations which draw connections to other methods and help the reader understand well.
+ Some design choices are well explained, e.g. Euclidean distance outperforms cosine for good reason.
+ Good results
- Some other design decisions (normalisation; number of training classes per episode, etc.) are less well explained. How much of the good results comes from the proposed method per se, and how much from tuning this stuff?
- Why the zero-shot part specifically works so well should be better explained. Details:
- Recent work also has 54-56% on CUB. (Chanpinyo et al, CVPR'16, Zhang & Salgrama ECCV'16)
- This may not necessarily reduce the novelty of this work, but the use of mean of feature vectors from the same class has been proposed for the enhancement over training general (not few-shot specifically) classification model. [A Discriminative Feature Learning Approach for Deep Face Recognition, Wen et al, ECCV 2016]. The "center" in the above paper matches "prototype". Probably this connection should be cited.
- For the results of zero-shot learning on CUB dataset, i.e., Table 3 page 7, the meta-data used here are "attribute". This is good for fair comparison. However, from the perspective of getting better performance, better meta-data embeddings options are available. Refer to table 1 in "Learning Deep Representations of Fine-Grained Visual Descriptions, Reed et al, CVPR 2016". It would be interesting to know the performance of the proposed method when it is equipped with better meta-data embeddings.
Update: Thanks to the authors for their response to the reviews. I think this paper is acceptable for NIPS. | - For the results of zero-shot learning on CUB dataset, i.e., Table 3 page 7, the meta-data used here are "attribute". This is good for fair comparison. However, from the perspective of getting better performance, better meta-data embeddings options are available. Refer to table 1 in "Learning Deep Representations of Fine-Grained Visual Descriptions, Reed et al, CVPR 2016". It would be interesting to know the performance of the proposed method when it is equipped with better meta-data embeddings. Update: Thanks to the authors for their response to the reviews. I think this paper is acceptable for NIPS.
xrtM8r0zdU | ICLR_2025 | 1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis and performance on the y-axis to directly compare the flexibility of SGC with LoRA. This visualization could more intuitively demonstrate whether SGC’s fine-grained control offers practical performance benefits at different sparsity levels.
2. **Questionable Memory Advantage**: The memory usage for first-order optimization methods largely comes from the model parameters, gradients, activations, and optimizer states. Even with Adam’s two states, optimizer memory costs are typically less than half. SGC, based on Adam, can’t reduce memory below that of simple SGD without momentum, and since it still calculates full gradients, its GPU memory consumption may surpass LoRA, which doesn’t require full gradient computations.
3. **Subpar Performance**: As seen in Table 2, SGC shows no clear performance advantage over methods like LoRA and GaLore, raising questions about its efficacy as a fine-tuning method.
4. **Lack of Related Work Comparison**: The paper omits discussion and comparison with relevant optimizers like Adafactor[1] and CAME[2], which also focus on compressing optimizer states. These omissions reduce the context for understanding SGC’s place among similar methods. Including a comparison on task performance, memory efficiency and convergence speed would better contextualize SGC's advantages and place among similar methods. **References**:
[1] Shazeer, N., & Stern, M. (2018, July). Adafactor: Adaptive learning rates with sublinear memory cost. In *International Conference on Machine Learning* (pp. 4596-4604). PMLR.
[2] Luo, Y., Ren, X., Zheng, Z., Jiang, Z., Jiang, X., & You, Y. (2023). CAME: Confidence-guided Adaptive Memory Efficient Optimization. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 4442–4453, Toronto, Canada. Association for Computational Linguistics. | 1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis and performance on the y-axis to directly compare the flexibility of SGC with LoRA. This visualization could more intuitively demonstrate whether SGC’s fine-grained control offers practical performance benefits at different sparsity levels. |
NIPS_2020_341 | NIPS_2020 | - For theorem 5.1 and 5.2, is there a way to decouple the statement, i.e., separating out the optimization part and the generalization part? It would be clearer if one could give a uniform convergence guarantee first followed by how the optimization output can instantiate such uniform convergence. - In the experiments, is it reasonable for the German and Law school dataset to have shorter training time in Gerrymandering than Independent? Since in Experiment 2, ERM and plug-in have similar performance to Kearns et al. and the main advantage is its computation time, it would be good to have the code published. | - In the experiments, is it reasonable for the German and Law school dataset to have shorter training time in Gerrymandering than Independent? Since in Experiment 2, ERM and plug-in have similar performance to Kearns et al. and the main advantage is its computation time, it would be good to have the code published. |