paper_id | venue | focused_review | point
---|---|---|---|
7GxY4WVBzc | EMNLP_2023 | * The contribution of the vector database to improving QA performance is unclear. More analysis and ablation studies are needed to determine its impact and value for the climate change QA task.
* Details around the filtering process used to create the Arabic climate change QA dataset are lacking. More information on the translation and filtering methodology is needed to assess the dataset quality.
* The work is focused on a narrow task (climate change QA) in a specific language (Arabic), so its broader impact may be limited.
* The limitations section lacks specific references to errors and issues found through error analysis of the current model. Performing an analysis of the model's errors and limitations would make this section more insightful. | * The work is focused on a narrow task (climate change QA) in a specific language (Arabic), so its broader impact may be limited. |
sJslLVsYNo | ICLR_2024 | - [Originality] The winner-take-all property has been widely used in previous works such as NN-based clustering algorithms [1] and it’s unclear how this paper contributes novelly to the understanding of this behavior with its extremely simplified settings, especially since most of the findings have been reported in previous works (Sec 5).
- [Quality] The quality of the paper is unacceptable due to the following issues:
1) The experimental setup is highly insufficient for a top-tier conference like ICLR, with overly simplified network, datasets and analyses (only scalar plots from single neurons instead of e.g. cluster analysis and/or visualization), leaving the results highly inconclusive.
2) The observed winner-take-most (divide-and-conquer) behavior could be simply due to insufficient training as it contradicts the neural collapse theory [2] which predicts the opposite, the collapse of all intraclass clusters, leaving the main results highly questionable.
3) Claiming that training noises are required for generalization on the synthetic dataset (Sec 3.1) is quite problematic, since sufficient training should generally lead to the max-margin solution (which generalizes) even without training noises [3]. The extremely large learning rate (1.0) could be causing the problem.
- [Significance] Given the critical issues in the paper’s originality and quality as stated above, it’s unfortunately hard to conclude that this work is significant or sufficiently promising.
[1] Clustering: A neural network approach, Neural networks, 2010.\
[2] Prevalence of neural collapse during the terminal phase of deep learning training, PNAS, 2020.\
[3] The Implicit Bias of Gradient Descent on Separable Data, JMLR, 2018. | - [Originality] The winner-take-all property has been widely used in previous works such as NN-based clustering algorithms [1] and it’s unclear how this paper contributes novelly to the understanding of this behavior with its extremely simplified settings, especially since most of the findings have been reported in previous works (Sec 5). |
NIPS_2021_304 | NIPS_2021 | /Questions:
I only have a few minor points:
1.) For equation (7), does treating $|u - l|$ as the length require the bins to be equally spaced? I don't think this is stated.
2.) It may be good to briefly mention the negligible computational cost of CHR (which is in the appendix) in the main paper to help motivate the method. A rough example of some run-times in the experiments may also be useful for readers looking to apply the method.
3.) Just a few typographical/communication points:
I found Section 2.2 slightly difficult to read, as the notation gets a little heavy. This may not be necessary, but the authors could consider presenting the nested intervals without randomization (e.g. after Line 119), with the randomization in the Appendix, as it is not needed in Theorem 2. This would give more room for intuitive discussions, related to my next point.
It may be helpful to introduce some intuition on the conformity score in equation (12) and why we need the sets to be nested for readers unfamiliar with previous work, perhaps at the start of Section 2.3.
Line 113: $\epsilon$ is mentioned here before it is defined
Line 188: 'increased' instead of 'increase' ##################################################################### Overall:
This paper is an interesting extension of previous work, and the provided asymptotic justifications of attaining oracle width and conditional coverage is useful. The method is also general and can empirically provide better average widths and conditional coverage than other methods, particularly under skewed data, making it useful in practice. ##################################################################### References:
Romano, Y., Sesia, M., & Candes, E. (2020). Classification with Valid and Adaptive Coverage. Advances in Neural Information Processing Systems, 33, 3581-3591.
Gupta, C., Kuchibhotla, A. K., & Ramdas, A. K. (2019). Nested conformal prediction and quantile out-of-bag ensemble methods. arXiv preprint arXiv:1910.10562.
The authors have described the limitations of their method - in particular their method does not control for upper and lower miscoverage, and they provide alternative recommendations. | 2.) It may be good to briefly mention the negligible computational cost of CHR (which is in the appendix) in the main paper to help motivate the method. A rough example of some run-times in the experiments may also be useful for readers looking to apply the method. |
gp5dPMBzMH | ICLR_2024 | 1. Figure 3 presents EEG topography plots for both the input and output during the EEG token quantization process, leading to some ambiguity in interpretation. I would recommend the authors to elucidate this procedure in greater detail. Specifically, it would be insightful to understand whether the spatial arrangement of the EEG sensors played any role in this process.
2. The manuscript introduces BELT-2 as a progression from the prior BELT-1 model. However, the discussion and distinction between the two models are somewhat scanty, especially given their apparent similarities. It would be of immense value if the authors could elaborate on the design improvements made in BELT-2 over BELT-1. A focused discussion highlighting the specific enhancements and their contribution to the performance improvements, as showcased in Table 1 and Table 4, would add depth to the paper.
3. A few inconsistencies are observed in the formatting of the tables, which might be distracting for readers. I'd kindly suggest revisiting and refining the table presentation to ensure a consistent and polished format.
4. In Figure 4 and Section 2.4, there is a mention of utilizing the mediate layer coding as 'EEG prompts'. The concept, as presented, leaves some gaps in understanding, primarily because its introduction and visualization seem absent or not explicitly labeled in the preceding figures and method sections. It would enhance coherence and clarity if the authors could revisit Figures 2 and/or 3 and annotate the specific parts illustrating this mediate layer coding. | 1. Figure 3 presents EEG topography plots for both the input and output during the EEG token quantization process, leading to some ambiguity in interpretation. I would recommend the authors to elucidate this procedure in greater detail. Specifically, it would be insightful to understand whether the spatial arrangement of the EEG sensors played any role in this process. |
GHaoCSlhcK | ICLR_2025 | 1. **Limited discussion of related works** on heterogeneous architectures and PCA-based methods. Below are some relevant examples. The authors are encouraged to conduct a more thorough literature review. Without a clear differentiation from existing methods, there is a concern about the novelty of the paper.
- Liu, Yufan, et al. "Cross-architecture knowledge distillation." *ACCV 2022*.
- Hofstätter, Sebastian, et al. "Improving efficient neural ranking models with cross-architecture knowledge distillation." *arXiv:2010.02666* (2020).
- Ni, Jianyuan, et al. "Adaptive Cross-Architecture Mutual Knowledge Distillation." *FG 2024*.
- Chiu, Tai-Yin, and Danna Gurari. "PCA-based knowledge distillation towards lightweight and content-style balanced photorealistic style transfer models." *CVPR 2022*.
- Guo, Yi, et al. "RdimKD: Generic Distillation Paradigm by Dimensionality Reduction." *arXiv:2312.08700* (2023).
2. **Unclear necessity and effectiveness of using CKA** for modularization. An ablation study comparing CKA-based modularization with simpler approaches, such as dividing networks based on feature map resolution, should be included.
3. **Writing quality could be improved** to enhance rigor and clarity. For example:
- In lines 56-57, the phrase “it begins by training … representative features are transferred first” is unclear. It is not intuitive why training the shallowest module would ensure the most representative features.
- There may be no need to distinguish between the two types of representation distances $d_{SM}$ and $d_{DM}$, as they are calculated in the same way. | - There may be no need to distinguish between the two types of representation distances $d_{SM}$ and $d_{DM}$, as they are calculated in the same way. |
NIPS_2018_914 | NIPS_2018 | of the paper are (i) the presentation of the proposed methodology to overcome that effect and (ii) the limitations of the proposed methods for large-scale problems, which is precisely when function approximation is required the most. While the intuition behind the two proposed algorithms is clear (to keep track of partitions of the parameter space that are consistent in successive applications of the Bellman operator), I think the authors could have formulated their idea in a more clear way, for example, using tools from Constraint Satisfaction Problems (CSPs) literature. I have the following concerns regarding both algorithms: - the authors leverage the complexity of checking on the Witness oracle, which is "polynomial time" in the tabular case. This feels like not addressing the problem in a direct way. - the required implicit call to the Witness oracle is confusing. - what happens if the policy class is not realizable? I guess the algorithm converges to an \empty partition, but that is not the optimal policy. minor: line 100 : "a2 always moves from s1 to s4 deterministically" is not true line 333 : "A number of important direction" -> "A number of important directions" line 215 : "implict" -> "implicit" - It is hard to understand the figure where all methods are compared. I suggest to move the figure to the appendix and keep a figure with less curves. - I suggest to change the name of partition function to partition value. [I am satisfied with the rebuttal and I have increased my score after the discussion] | - the authors leverage the complexity of checking on the Witness oracle, which is "polynomial time" in the tabular case. This feels like not addressing the problem in a direct way. |
NIPS_2021_1788 | NIPS_2021 | - The approach proposed is quite simple and straightforward without much technical innovation. For example, CODAC is a direct combination of CQL and QR-DQN to learn conservative quantiles of the return distribution. - Some parts of the paper need clearer writing (more below)
Comments and questions: - I think in a paragraph from lines 22-30 when discussing distributional RL, the paper lacks relevant literature on using moment matching (instead of quantile regression as most DRL methods) for DRL (Nguyen-Tang et al AAAI’21, “Distributional Reinforcement Learning via Moment Matching”). I think this should be properly discussed when talking about various approaches to DRL that have been developed so far, even though the present paper still uses quantile regression instead of moment matching. - More explanation is needed for Eq (5). For example, what is the meaning of the cost $c_0(s,a)$? (e.g., to quantify out-of-distribution actions) - The use of $s'$ and $a'$ when defining $\hat{\pi}_{\beta}$ at line 107 might cause confusion, as $\mathcal{D}$ contains $(s,a,r,s')$. - This paper is about deriving a conservative estimate of the quantiles of the return from offline data, where the conservativeness is for penalizing out-of-distribution actions. In the paper, they define OOD actions as those not drawn from $\hat{\pi}_{\beta}(\cdot|s)$ (line 109), but in Assumption 3.1 they assume that $\hat{\pi}_{\beta}(a|s) > 0$, i.e., there are no OOD actions. Thus, what is the merit of the theoretical result presented in the paper?
The authors have adequately addressed the limitations and social impact of their work. | - I think in a paragraph from lines 22-30 when discussing distributional RL, the paper lacks relevant literature on using moment matching (instead of quantile regression as most DRL methods) for DRL (Nguyen-Tang et al AAAI’21, “Distributional Reinforcement Learning via Moment Matching”). I think this should be properly discussed when talking about various approaches to DRL that have been developed so far, even though the present paper still uses quantile regression instead of moment matching. |
q09vTY1Cqh | EMNLP_2023 | 1. Missing some baselines: For code completion tasks, I suggest that the authors compare with existing code completion commercial applications, such as Copilot. It can be tested on a smaller subset of RepoEval, and it is essential to compare with these state-of-the-art code completion systems.
2. Time efficiency: For code completion tasks, it is also important to focus on time efficiency. I recommend that the authors add corresponding experiments to make this clear.
3. Missing some related work: There are some recent papers on similar topics, such as a repo-level benchmark [1] and API invocation [2]. The authors could consider discussing them in related work to better tease out research directions.
[1] Liu, Tianyang et al. “RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems.” (2023)
[2] Zhang, Kechi et al. “ToolCoder: Teach Code Generation Models to use API search tools.”(2023) | 1. Missing some baselines: For code completion tasks, I suggest that the authors compare with existing code completion commercial applications, such as Copilot. It can be tested on a smaller subset of RepoEval, and it is essential to compare with these state-of-the-art code completion systems. |
eFGQ97z5Cd | ICLR_2025 | - While MoEE introduces two methods for combining HS and RW embeddings (concatenation and weighted sum), the concatenation variant appears simplistic and less effective than the weighted sum in terms of similarity calculation. Future work could explore more sophisticated aggregation methods to fully leverage the complementary strengths of HS and RW.
- The claim that RW embeddings are more robust than HS, based solely on prompt variation tests, lacks comprehensive support. Other factors, such as model size (or maybe architectural variations), should be examined to substantiate this claim.
- The choice to evaluate on only a subset of the Massive Text Embedding Benchmark (MTEB) raises questions about generalizability; it would be helpful to understand the criteria behind this selection and whether other tasks or datasets might yield different insights. | - The choice to evaluate on only a subset of the Massive Text Embedding Benchmark (MTEB) raises questions about generalizability; it would be helpful to understand the criteria behind this selection and whether other tasks or datasets might yield different insights. |
NIPS_2017_110 | NIPS_2017 | weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either), is that it makes no attempt to answer this question, either theoretically, or by comparing the model with a classical longitudinal approach.
If we take the advantage of the manifold approach on faith, then this paper certainly presents a highly useful extension to the method presented in Schiratti et al. (2015). The added flexibility is very welcome, and allows for modelling a wider variety of trajectories. It does seem that only a single breakpoint was tried in the application to renal cancer data; this seems appropriate given this dataset, but it would have been nice to have an application to a case where more than one breakpoint is advantageous (even if it is in the simulated data). Similarly, the authors point out that the model is general and can deal with trajectories in more than one dimensions, but do not demonstrate this on an applied example.
(As a side note, it would be interesting to see this approach applied to drug response data, such as the Sanger Genomics of Drug Sensitivity in Cancer project).
Overall, the paper is well-written, although some parts clearly require a background in working on manifolds. The work presented extends Schiratti et al. (2015) in a useful way, making it applicable to a wider variety of datasets.
Minor comments:
- In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth).
- The paper has a number of typos, here are some that caught my eyes: p.1 l.36 "our model amounts to estimate an average trajectory", p.4 l.142 "asymptotic constrains", p.7 l. 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
- Section 2.2., it is stated that n=2, but n is the number of patients; I believe the authors meant m=2.
- p.4, l.154 describes a particular choice of shift and scaling, and the authors state that "this [choice] is the more appropriate.", but neglect to explain why.
- p.5, l.164, "must be null" - should this be "must be zero"?
- On parameter estimation, the authors are no doubt aware that in classical mixed models, a popular estimation technique is maximum likelihood via REML. While my intuition is that either the existence of breakpoints or the restriction to a manifold makes REML impossible, I was wondering if the authors could comment on this.
- In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise.
- For Figure 2, I think the x axis needs to show the scale of the trajectories, as well as a label for the unit.
- For Figure 3, labels for the y axes are missing.
- It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. | - In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth). |
ICLR_2021_1740 | ICLR_2021 | are in its clarity and the experimental part.
Strong points Novelty: The paper provides a novel approach for estimating the likelihood of p(class image), by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn’t verify the details of the proofs, the approach seems technically correct. Note that I was not convinced that s->y (see weakness)
Weak points Experiments and Reproducibility: The experiments show some signal, but are not thorough enough: • shifted-MNIST: it is not clear why shift=0 is much better than shift $\sim \mathcal{N}(0, \sigma^2)$, since both cases incorporate a domain shift • It would be useful to show the performance of the model and baselines on test samples from the observational (in) distribution. • Missing details about evaluation split for shifted-MNIST: Did the experiments use a validation set for hyper-param search with shifted-MNIST and ImageCLEF? Was it based on in-distribution data or OOD data? • It would be useful to provide an ablation study, since the approach has a lot of "moving parts". • It would be useful to have an experiment on an additional dataset, maybe more controlled than ImageCLEF, but less artificial than shifted-MNIST. • What were the ranges used for hyper-param search? What was the search protocol?
Clarity: • The parts describing the method are hard to follow; it will be useful to improve their clarity. • It will be beneficial to explicitly state which are the learned parametrized distributions, and how inference is applied with them. • What makes the VAE inference mappings (x->s,v) stable to domain shift? E.g. [1] showed that correlated latent properties in VAEs are not robust to such domain shifts. • What makes v distinctive of s? Is it because y only depends on s? • Does the approach use any information on the labels of the domain?
Correctness: I was not convinced about the causal relation s->y, i.e. that the semantic concept causes the label, independently of the image. I do agree that there is a semantic concept (e.g. s) that causes the image. But then, as explained by [Arjovsky 2019], the labelling process is caused by the image, i.e. s->image->y, and not as argued by the paper. The way I see it, it is like a communication channel: y_tx -> s -> image -> y_rx. Could the authors elaborate on how the model would change if replacing s->y by y_tx->s?
Other comments: • I suggest discussing [2,3,4], which learned similar stable mechanisms in images. • I am not sure about the statement that this work is the "first to identify the semantic factor and leverage causal invariance for OOD prediction" e.g. see [3,4] • The title may be confusing. OOD usually refers to anomaly-detection, while this paper relates to domain-generalization and domain-adaptation. • It will be useful to clarify that the approach doesn't use any external-semantic-knowledge. • Section 3.2 - I suggest to add a first sentence to introduce what this section is about. • About remark in page 6: (1) what is a deterministic s-v relation? (2) chairs can also appear in a workspace, and it may help to disentangle the desks from workspaces.
[1] Suter et al. 2018, Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness [2] Besserve et al. 2020, Counterfactuals uncover the modular structure of deep generative models [3] Heinze-Deml et al. 2017, Conditional Variance Penalties and Domain Shift Robustness [4] Atzmon et al. 2020, A causal view of compositional zero-shot recognition
EDIT: Post rebuttal
I thank the authors for their reply. Although the authors answered most of my questions, I decided to keep the score as is, because I share similar concerns with R2 about the presentation, and because experiments are still lacking.
Additionally, I am concerned with one of the authors' replies saying All methods achieve accuracy 1 ... on the training distribution, because usually there is a trade-off between accuracy on the observational distribution versus the shifted distribution (discussed by Rothenhäusler, 2018 [Anchor regression]): Achieving perfect accuracy on the observational distribution usually means relying on the spurious correlations. And under domain-shift scenarios, this would hinder the performance on the shifted distribution. | • shifted-MNIST: it is not clear why shift=0 is much better than shift $\sim \mathcal{N}(0, \sigma^2)$, since both cases incorporate a domain shift • It would be useful to show the performance of the model and baselines on test samples from the observational (in) distribution. |
K8Mbkn9c4Q | ICLR_2024 | While I do like the general underlying idea, there are several severe weaknesses present in this work – leading me to lean towards rejection of the manuscript in its current form. The two main areas of concern are briefly listed here, with details explained in the ‘Questions’ part:
### 1) Lacking quality of the “Domain Transformation” part
This is arguably the KEY part of the paper, and needs significant improvement on two points: underlying intuition/motivation/justification, as well as technical correctness and clarity. There are several fundamental points that are unclear to me and require significant improvement and clarification; this applies both to clarity in terms of writing and, more importantly, to the quality of the approach and justifications/underlying motivations.
Please see the “Questions” part for details.
### 2) Lacking detail in experiment description:
Description of experimental details would significantly benefit from increased clarity to allow the user to better judge the results, which is very difficult in the manuscript’s current state; See "Questions" for further details. | 2) Lacking detail in experiment description: Description of experimental details would significantly benefit from increased clarity to allow the user to better judge the results, which is very difficult in the manuscript’s current state; See "Questions" for further details. |
ICLR_2022_3204 | ICLR_2022 | - It is unclear whether CBR works as expected (i.e., aligning the distribution of intra-camera and inter-camera distance). Intuitively, there is more than one possible changing direction of the two items in Equ 3. For example, 1) the second term gets larger, the first term gets smaller (as shown in Fig.2), 2) both of them get smaller, 3) the second term stays the same, and the first term gets smaller. However, according to Fig.4 (b) and Fig.5 (b), we can observe that the changes of the distance distribution caused by CBR should be in line with 2) and 3) mentioned above rather than 1) “expected”. Therefore, this paper should provide more explanation to make it clear.
One of the main contributions of this paper is the CBR, so different optimization strategies and the corresponding results should be discussed. For example, what will happen when minimizing both of the inter and intra terms in Eq 3 or only minimizing the first term? | 1) “expected”. Therefore, this paper should provide more explanation to make it clear. One of the main contributions of this paper is the CBR, so different optimization strategies and the corresponding results should be discussed. For example, what will happen when minimizing both of the inter and intra terms in Eq 3 or only minimizing the first term? |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. In my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing “non-edge contractions” (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, “Graph Homomorphism Convolution”, ICML’20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well.
Additional comments:
I think that something is missing from Proposition 3. In particular, if I understood correctly the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be equally strong to 2-WL (which by the way is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that graphs are of unbounded size?
Moreover, there is a detail in the proof of Proposition 3 that I am not sure is that obvious. I understand why the subgraph counts of $C_{m+1}$ are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts.
Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?)
In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs. Could the authors comment on this choice?
In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines.
The method of Bouritsas et al., arxiv’20 is called “Graph Substructure Networks” (instead of “Structure”). I encourage the authors to correct this.
After rebuttal
The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements on the presentation, I hope will increase the impact of the paper.
Limitations: The limitations are clearly stated in section 1, by mainly referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting.
Negative societal impact: A satisfactory discussion is included in the end of the experimental section. | - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. |
NIPS_2018_38 | NIPS_2018 | Weakness: 1. As I just mentioned, the paper only analyzed under which cases Algorithm 1 converges to permutations as local minima. However, it would be better if the quality of these local minima could be analyzed (e.g. the approximation ratio of these local minima, under certain assumptions). 2. This paper is not very easy to follow. First, many definitions are used before they are defined. For example, on line 59 in Theorem 1, the authors used the definition of "conditionally positive definite function of order 1", which is defined on line 126 in Definition 2. Also, the author used the definition "\epsilon-negative definite" on line 162, which is defined on line 177 in Definition 3. It would be better if the authors could define those important concepts before using them. Second, the introduction is a little bit too long (more than 2.5 pages) and many parts of it are repeated in Sections 2 and 3. It might be better to restructure the first 3 sections a little bit. Third, it would be good if more captions could be added to the figures in the experiment section so that the readers could understand the results more easily. 3. For the description of Theorem 3, from the proof it seems that we need to use Equation 12 as a condition. It would be better if this information were included in the theorem description to avoid confusion (though I know this is mentioned right before the theorem, it is still better to have it in the theorem description). Also, for the constant c_1, it is better to give it an explicit formula in the theorem. Reference: [1] Burkard, R. E., Cela, E., Pardalos, P. M., and Pitsoulis, L. S. (1998). The quadratic assignment problem. In Handbook of combinatorial optimization, pages 1713–1809. Springer. | 1. As I just mentioned, the paper only analyzed under which cases Algorithm 1 converges to permutations as local minima. However, it would be better if the quality of these local minima could be analyzed (e.g. the approximation ratio of these local minima, under certain assumptions). |
NIPS_2017_349 | NIPS_2017 | - The paper is not self contained
Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility.
I also hereby request the authors to release the source code of their experiments to allow reproduction of their results.
- Use of deep-reinforcement learning is not well motivated
The problem domain seems simple enough that a linear approximation would have likely sufficed? The network is fairly small and isn't "deep" either.
- > We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores
This is probably the most unclear part. It's not clear to me why the paper considers one to be more realistic than the other, rather than just modeling different incentives. There is probably not enough space in the paper, but an actual comparison of learning dynamics when the opportunity costs are modeled as penalties instead would be useful. As economists say: incentives matter. However, if the intention was to explicitly avoid such explicit incentives, as they _would_ affect the model-free reinforcement learning algorithm, then those reasons should be clearly stated.
- Unclear whether bringing connections to human cognition makes sense
As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated".
* Minor comments
** Line 16:
> [18] found them...
Consider using \citeauthor{} ?
** Line 167:
> be the N-th agent’s
should be i-th agent?
** Figure 3:
Clarify what the `fillcolor` implies and how many runs were the results averaged over?
** Figure 4:
Is not self contained and refers to Fig. 6 which is in the supplementary. The figure is understandably large and hard to fit in the main paper, but at least consider clarifying that it's in the supplementary (as you have clarified for other figures from the supplementary mentioned in the main paper).
** Figure 5:
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
- Increase space between the main caption and sub-caption.
** Line 299:
From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. | - The paper is not self contained Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility. I also hereby request the authors to release the source code of their experiments to allow reproduction of their results. |
NIPS_2022_1664 | NIPS_2022 | • The paper is missing an integration of the main algorithmic steps (Fill, Propagate, Decode) with the overarching flow diagram in Fig 1 which creates a gap in the presentation.
• The abstract and main text make inconsistent claims about the transmission capacity: o Abstract: “.. covertly transmit over 10000 real-world data samples within a carrier model which has 220× less parameters than the total size of the stolen data,” o Introduction: “… covertly transmit over 10000 real-world data samples within a carrier model which has 100× less parameters than the total size of the stolen data (§4.1),”
• Definitions of metrics and illustrations of qualitative results are insufficiently described and included. o For example, the equation for the learning objective in section 3.3 should be clearly described.
o Page 7: define performance difference and hiding capacity in equations. o Fig 3 is too small for the information to be conveyed (At the 200% digital magnification of Fig 3, I can see some differences in image qualities).
• The choices and constructions of a secret key and noisy vectors are insufficiently described i.e., Are the secret keys similar to the public-private keys used in the current cryptography applications? What are the requirements on creating the noisy vectors?
• How is the information redundancy built into the Fill, Propagate, Decode algorithms? o In reference to the sentence “ Finally, by comparing the performance of the secret model with or without fusion, we conclude that the robustness of Cans largely comes from the information redundancy implemented in our design of the weight pool” | • How is the information redundancy built into the Fill, Propagate, Decode algorithms? o In reference to the sentence “ Finally, by comparing the performance of the secret model with or without fusion, we conclude that the robustness of Cans largely comes from the information redundancy implemented in our design of the weight pool” |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | 1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not ablated. How important is the added complexity? Will one IN do? |
NIPS_2017_143 | NIPS_2017 | For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant:
- In which real scenarios is the objective given by the adverserial prediction accuracy they propose, in contrast to classical prediction accuracy?
- In l32-45 they pretend to give a real example but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on majority) kind of makes sense. But I imagine that such losses already have been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.?
- In l50 they claim that "pershaps even in most [...] practical scenarios" predicting accurate on the majority is most important. I contradict: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car to significantly overestimate the distance to the next car in 1% of the situations).
Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87:
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
- In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE).
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
- I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet. REMARKS:
What's "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff?
Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar to having your discrepancy p alongside x,y. | - In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE). |
NIPS_2021_998 | NIPS_2021 | • Unprofessional writing. - Most starkly, “policies” is misspelled in the title. • At times, information is not given in an easy-to-understand way. - E.G. lines 147 - 152, 284 - 289. • Captions of figures do not help elucidate what is going on in the figure. This problem is mitigated by the quality of the figures, but it still makes it much harder to understand the pipeline of LCP and its components. More emphasis on that pipeline would help with the understanding. • 100 nodes seem like a small maximum test size for TSP problems (though this is an educated guess). Many real-world problems have thousands or tens of thousands of nodes. • Increase in optimality is either not very significant, or not presented to highlight its significance. It would be better to put the improvement into perspective. • Blank spaces in table 1 are unclear. Opportunities:
• It would be good to describe why certain choices were made. For example, why is the REINFORCE algorithm used for training versus something like PPO? I presume it has to do with the attention model paper this one iterates on, but clarification would be good.
• More real-world uses of the algorithm could be included to better understand the societal impact, including details on how LCP could be integrated well.
The paper lacks a high degree of polish and professionalism, but its formatting (e.g. bolded inline subsubsections) and figures are its saving grace. The tables are also well structured, if a bit cluttered --- values are small and bolding is indistinct. This paper does a good job of giving this information and promises open-source code upon publication.
Overall, the paper and its presentation have several problems, but the idea seems elegant and useful.
Yes. The authors have adequately addressed the limitations and potential negative societal impact of their work | • It would be good to describe why certain choices were made. For example, why is the REINFORCE algorithm used for training versus something like PPO? I presume it has to do with the attention model paper this one iterates on, but clarification would be good. |
NIPS_2019_819 | NIPS_2019 | Weakness: Due to the intractability of the MMD DRO problem, the submission did not find an exact reformulation as much other literature in DRO did for other probability metrics. Instead, the author provides several layers of approximation. The reason why I emphasize the importance of a tight bound, if not an exact reformulation, is that one of the major criticisms about (distributionally) robust optimization is that it is sometimes too conservative, and thus a loose upper bound might not be sufficient to mitigate the over-conservativeness and demonstrate the power of distributionally robust optimization. When a new distance is introduced into the DRO framework, a natural question is why it should be used compared with other existing approaches. I hope there will be a fairer comparison in the camera-ready version. =============== 1. The study of MMD DRO is mostly motivated by the poor out-of-sample performance of existing phi-divergence and Wasserstein uncertainty sets. However, I don't believe this is indeed the case. For example, Namkoong and Duchi (2016), and Blanchet, Kang, and Murthy (2016) show the dimension-independent bound 1/\sqrt{n} for a broad class of objective functions in the case of phi-divergence and Wasserstein metric respectively. They didn't require the population distribution to be within the uncertainty set, and in fact, such a requirement is way too conservative and it is exactly what they wanted to avoid. 2. Unlike phi-divergence or Wasserstein uncertainty sets, MMD DRO does not seem to enjoy a tractable exact equivalent reformulation, which seems to be a severe drawback to me. The upper bound provided in Theorem 3.1 is crude especially because it drops the nonnegative constraint on the distribution, and further approximation is still needed even when applied to a simple kernel ridge regression problem. Moreover, it seems restrictive to assume the loss \ell_f belongs to the RKHS as already pointed out by the authors. 3. I am confused about the statement in Theorem 5.1, as it might indicate some disadvantage of MMD DRO, as it provides a more conservative upper bound than the variance regularized problem. 4. Given the intractability of the MMD DRO and several layers of approximation, the numerical experiment in Section 6 is insufficient to demonstrate the usefulness of the new framework. References: Namkoong, H. and Duchi, J.C., 2017. Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems (pp. 2971-2980). Blanchet, J., Kang, Y. and Murthy, K., 2016. Robust Wasserstein profile inference and applications to machine learning. arXiv preprint arXiv:1610.05627. | 3. I am confused about the statement in Theorem 5.1, as it might indicate some disadvantage of MMD DRO, as it provides a more conservative upper bound than the variance regularized problem. |
ICLR_2021_2929 | ICLR_2021 | Weakness: The major concern is the limited contribution of this work. 1. Using image-to-image translation to unify the representations across-domain is an existing technique in domain adaptation, especially in segmentation tasks [1,2]. 2. The use of morphologic information in this paper is simply the combination of edge detection and segmentation, which are both employed as tools from existing benchmarks (in this paper the author used DeeplabV3, DexiNed-f, employed as off-the-shelf tools for image pre-processing purpose as mentioned in section 4). 3.There should be more on how to use the morphologic segmentation across-domain, and how morphologic segmentation should be conducted differently for different domains. Or is it exactly the same given any arbitrary domain? These questions are important given the task domain adaptation. This paper didn’t provide insight into this but assumed morphologic segmentation will be invariant. 4. Results compared to other domain adaptation methods (especially generative methods) are missing. There is an obvious lack of evidence that the proposed method is superior.
In brief, the contribution of this paper is limited, the results provided are not sufficient to support the method being effective. A reject.
[1] Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation [2] Image to Image Translation for Domain Adaptation | 3.There should be more on how to use the morphologic segmentation across-domain, and how morphologic segmentation should be conducted differently for different domains. Or is it exactly the same given any arbitrary domain? These questions are important given the task domain adaptation. This paper didn’t provide insight into this but assumed morphologic segmentation will be invariant. |
ICLR_2023_4834 | ICLR_2023 | There are some concerns regarding the method description and designs: 1) As described in the ASR update strategy, the rehearsal samples for previous tasks are based on the samples with high and low AS scores (the samples with middle AS scores are discarded) while for the current task the rehearsal samples are uniformly sampled from the corresponding data stream sorted by AS scores. Such a difference between the previous tasks and the current task should be experimentally verified (for instance, why can't the current task follow the same principle as the previous tasks to select the rehearsal samples); 2) In line 5 of Algo.1, should it be noted that B^i_{t-1} is already sorted according to the AS?
There are other concerns regarding the experimental settings and results: 1) As mentioned in Sec. 4.2, the mixup technique in LUMP is also adopted for the proposed method in the experiments on SplitCIFAR-100 and SplitTiny-ImageNet, there should be experimental results of excluding such mixup technique from the proposed method in order to demonstrate its pure contribution; 2) In order to better demonstrate the contribution of using ASR to update the replay buffer and its generalizability, there should be experiments of replacing the rehearsal buffer update strategy in the related works (for both supervised continual learning and continual self-supervised learning baselines that also adopt rehearsal buffer) by the proposed ASR update strategy.
The idea behind "augmentation stability of each sample is positively correlated with its relative position in corresponding category distribution" is not well proven or verified. Though Fig.1 tries to serve this purpose and illustrate the idea, it is not enough; can the authors provide more discussion or even a theoretical proof for this idea if possible? Moreover, there is currently a hidden assumption that the distribution of each category is single-mode, but what if the category distribution is multi-modal (which is very likely to happen in more complicated datasets)? Will AS still be effective as a proxy for the relative position in a category distribution?
From my own research experience, for supervised continual learning, different strategies of rehearsal example selection (e.g. random or uniform) do not make a significant difference to the final performance; can the authors provide more discussion on the impact of particularly having both representative and discriminative rehearsal samples on the overall performance? | 1) As mentioned in Sec. 4.2, the mixup technique in LUMP is also adopted for the proposed method in the experiments on SplitCIFAR-100 and SplitTiny-ImageNet, there should be experimental results of excluding such mixup technique from the proposed method in order to demonstrate its pure contribution; |
NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading up to the related work section). Bilinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) In Sec. 4.2 it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable.
6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren’t we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. “Neural Module Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. “Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. “Simple Baseline for Visual Question Answering.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. | 6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map? |
NIPS_2017_382 | NIPS_2017 | weakness that there is much tuning and other specifics of the implementation that need to be determined on a case by case basis. It could be improved by giving some discussion of guidelines, principles, or references to other work explaining how tuning can be done, and some acknowledgement that the meaning of fairness may change dramatically depending on that tuning.
* Clarity
The paper is well organized and explained. It could be improved by some acknowledgement that there are a number of other (competing, often contradictory) definitions of fairness, and that the two appearing as constraints in the present work can in fact be contradictory in such a way that the optimization problem may be infeasible for some values of the tuning parameters.
* Originality
The most closely related work of Zemel et al. (2013) is referenced; the present paper explains how it is different and gives comparisons in simulations. It could be improved by making these comparisons more systematic with respect to the tuning of each method, i.e., comparing the best performance of each.
* Significance
The broad problem addressed here is of the utmost importance. I believe the popularity of (IF) and modularity of using preprocessing to address fairness means the present paper is likely to be used or built upon. | * Significance The broad problem addressed here is of the utmost importance. I believe the popularity of (IF) and modularity of using preprocessing to address fairness means the present paper is likely to be used or built upon. |
qYwdyvvvqQ | ICLR_2024 | 1. Figure 1 does not convey the main idea clearly and should be significantly improved.
2. The presentation of the proposed method in 3.2 is confusing and should be significantly improved. For example, it would be better to have a small roadmap at the beginning of 3.2 so that readers know what each step is doing. Also, it would be better to break up the page-long paragraph into smaller paragraphs and use each paragraph to explain a small part of the computation. Also, explain the intention of each equation and the reasons for each design choice.
3. The related work only discusses sparse efficient attentions and Reformer and SMYRF, which relate to the proposed idea. There are a lot more efficient attentions that have the property of "information flow throughout the entire input sequence" (which was one of the motivations for the proposed idea), such as low-rank-based attentions (Linformer https://arxiv.org/abs/2006.04768, Nystromformer https://arxiv.org/abs/2102.03902, Performer https://arxiv.org/abs/2009.14794, RFA https://arxiv.org/abs/2103.02143) or multi-resolution-based attentions (H-Transformer https://arxiv.org/abs/2107.11906, MRA Attention https://arxiv.org/abs/2207.10284).
4. Missing discussion of Set Transformer (https://arxiv.org/abs/1810.00825) and other related works that also use summary tokens.
5. In the 4th paragraph of the related work, the authors claim some baselines are unstable and the proposed method is stable, but the claim is not supported by any experiments.
6. Experiments are only performed on the LRA benchmark, which consists of a set of small datasets. The results might be difficult to generalize to larger-scale experiments. It would be better to evaluate the methods on larger-scale datasets, such as language modeling or at least ImageNet.
7. Given that the LRA benchmark is a small-scale experiment, it would be better to run the experiment multiple times and calculate error bars, since the results could be very noisy. | 4. Missing discussion of Set Transformer (https://arxiv.org/abs/1810.00825) and other related works that also use summary tokens.
NIPS_2020_631 | NIPS_2020 | 1. How Fourier features accelerate NTK convergence in the high-frequency range? Did I overlook something or it's not analyzed? This is an essential theoretical support to the merits of Fourier features. 2. The theory part is limited to the behavior on NTK. I understand analyzing Fourier features on MLPs is highly difficult, but I'm a bit worried there would be a significant gap between NTK and the actual behavior of MLPs (although they are asymptotically equivalent). 3. Examples in Section 5 are limited to 1D functions, which are a bit toyish. | 1. How Fourier features accelerate NTK convergence in the high-frequency range? Did I overlook something or it's not analyzed? This is an essential theoretical support to the merits of Fourier features. |
NIPS_2016_314 | NIPS_2016 | Issues I found in the paper include: 1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for this? 2. I can't find details on how they make the network fit the residual instead of directly learning the input-output mapping. Is it through the use of skip connections? If so, this argument would make more sense if the skip connections existed after every layer (not every 2 layers). 3. It would have been nice if there was an ablation study on which factor plays the most important role in the improvement in performance: whether it is the number of layers or the skip connections, and how the performance varies when the skip connections are used for every layer. 4. The paper says that almost all existing methods estimate the corruption level at first. There is a high possibility that the same is happening in the initial layers of their residual net. If so, the only advantage is that theirs is end-to-end. 5. The authors mention in the Related Works section that the use of regularization helps the problem of image restoration, but they don't use any type of regularization in their proposed model. It would be great if the authors could address these points (mainly 1, 2 and 3) in the rebuttal. | 2. I can't find details on how they make the network fit the residual instead of directly learning the input-output mapping.
ICLR_2023_3449 | ICLR_2023 | 1. The spurious features in Sections 3.1 and 3.2 are very similar to backdoor triggers. They are both artificial patterns that only appear a few times in the training set. For example, Chen et al. (2017) use random noise patterns. Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well known that a few training examples with such triggers (rare spurious examples in this paper) would have a large impact on the trained model.
2. How neural nets learn natural rare spurious correlations is unknown to the community (to the best of my knowledge). However, most of the analysis and ablation studies use artificial patterns instead of natural spurious correlations. Duplicating the same artificial pattern multiple times is different from natural spurious features, which are complex and different in every example.
3. What's the experiment setup in Section 3.3 (data augmentation methods, learning rate, etc.)?
[1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf | 3. What's the experiment setup in Section 3.3 (data augmentation methods, learning rate, etc.)? [1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf
p4S5Z6Sah4 | ICLR_2024 | Note, the below concerns have resulted in a lower score, which I would be happy to increase pending the authors’ responses.
**A. Wave fields**
The wave-field comparisons, claims, and references seem a bit strained and unnecessary. Presumably, by “wave-field,” the authors simply mean a vector field that supports wave solutions. In any case, since this term is not oft-used in neuroscience or ML that I am aware of, a brief definition should be provided if the term is kept. However, I am unsure that it is necessary or helpful. That the brain supports wavelike activity is well-established, and some evidence for this is appropriately outlined by the authors. Many computational neuroscience models support waves in a way that has been mathematically analyzed (e.g., Wilson-Cowan neural fields equations). The authors’ discretization methodology suggests a similar connection to such analyses. However, appealing to “physical wave fields” to relate waves and memory seems to be overly speculative and unnecessary for the simple system under study in this manuscript. The brain is a dissipative rather than a conservative system, so that many aspects of physical wave fields may well not apply. Moreover, the single reference the authors do make to the concept does not apply either to the brain or to their wave-RNN. Instead, Perrard et al. 2016 describe a specific study that demonstrates that a particle-pilot wave system can still maintain memory in a very specific way that does not at all clearly apply to brains or the authors’ RNN, despite that study studying a dissipative (and chaotic) system. Instead, the readers would benefit much more from gaining an intuition as to why such wavelike activity might benefit learning and recalling sequential inputs. Unfortunately, Fig. 1 does little to help in this vein.
However, the concept certainly is simple enough, and the authors provide a few intuitions in the manuscript that help. I believe the manuscript would improve by removing the discussion of wave fields and instead providing / moving the intuitive explanations (e.g., the “register or ‘tape’” description on p. 20) as to how waves may help with sequential tasks to the same portion of the Introduction.
**B. Fourier analysis**
Overall, I found the wave and Fourier analysis a bit inconsistent and potentially problematic. While I agree that the wRNNs clearly display waves when plotted directly, the mapping and analysis within the spatiotemporal Fourier domain (FD below) does not always match patterns in the regular spatiotemporal plots (RSP below). Moreover, it’s unclear how much substance they add to the analysis results. In more detail:
1. Constant-velocity, 1-D waves don’t need to be transformed to the FD to infer their speeds. The slopes in the RSP correspond to their speeds. For example, in Fig. 2 (top left), there is a wave that begins at unit 250, step ~400, that continues through to unit 0, step ~650, corresponding to a wave speed of ~1.7 units/step, far larger than the diagonal peak shown in the FD below it that would correspond to a speed of ~0.3 units/step, as indicated by the authors.
2. Similar, seemingly speed mismatches can be observed in the Appendix. E.g., in Fig. 9 (2nd column, top), the slopes of the waves are around 0.35-0.42 units/step (close enough to likely be considered the same speed, especially as they converge in time to form a more clustered wave pulse) from what I can tell, whereas the slopes in the FD below it are ~0.3 for the diagonal (perhaps this is close enough to my rough estimate) and ~0.9, well above any observable wave speed. Perhaps there is a much faster wave that is unobservable in the RSP due to the min/max values set for the image intensity in the plot, but in that case the authors should demonstrate this. Given (a) the potential mismatch in the speeds for the waves that can be observed, (b) the mismatch in the speeds discussed above in Fig. 2, and (c) the fact that some waves may be missed in FD (see below), I would worry about assuming this without checking.
3. As alluded to in the point above, iRNN in Fig. 2 appears to have some fast pulse bursts easily observed in the RSP that don’t show in the FD. For example, there is a very fast wave observable in the RSP in units ~175-180, time steps 0-350. Note, the resolution is poor, but zooming in and scrolling to where the wave begins around unit 175, step 0 makes it clear. If one scrolls vertically such that the bottom of the wave at step 0 is just barely unobservable, then one can see the wave rapidly come into view and continue downwards. Similarly some short-lasting, slower pulses in units near 190, steps 0-350 are observable in the RSP. None of these appear in the FD. Note, this would not take away from the claim that wRNNs facilitate wave activity much more than iRNNs do, but rather that some small amounts—likely insufficient amounts for facilitating sequence learning—of wave activity might still arise in iRNNs. If the authors believe these wavelike activities are aberrant, it would be helpful for them to explain why so.
4. I looked over the original algorithm the authors used (in Section III of “Recognition and Velocity Computation of Large Moving Objects in Images”—RVC paper below—which I would recommend for the authors to cite), and I wonder if an error in the initial calibration steps (steps 1 & 2) occurred that might explain the speed disparities observed between the RSPs and FDs.
5. There do seem to be some different wave speeds—e.g., in Fig. 9, there appear to be fast and narrow excitatory waves overlapping with slow and broad inhibitory waves. But given that each channel has its own wave speed parameter $\nu$, it isn’t clear why a single channel would support multiple wave speeds. This should be explored in greater depth, and if obvious examples of sufficiently different speeds of excitatory waves are known (putatively Fig. 9, 2nd column), these should be clearly shown and carefully described and analyzed.
6. Is there cross-talk across the channels? If so, have the authors examined the images of the hidden units (with dimensions __hidden units__ x __channels__) for evidence of cross-channel waves? If so, perhaps this is one reason for multiple wave speeds to exist per channel?
7. Overall, it is unclear what the FT adds to the detection of 1-D waves. If there are such waves, we should be able to observe them directly in the RSPs. In skimming over the RVC paper, it seems like it would be most useful in determining velocities of 2-D objects and perhaps wave pulses. That suggests that one place the FD analysis might be useful is if there are cross-channel waves, as I mention above. If so, the waves should still be observable in the images (and I would encourage such images be shown), but might be more easily characterized following the marginalization decomposition procedure described in the original algorithm in Section III of the RVC paper. Note, the FDs might also facilitate the detection of multiple wave speeds in the network, as potentially shown in Fig. 9. However, in that case it would seem they should only appear in Fig. 9, and if the speeds are otherwise verified.
8. The authors mention they re-sorted the iRNN units to look for otherwise hidden waves. This seems highly problematic. If there are waves, then re-sorting can destroy them, and if there is only random activity then re-sorting can cause them to look like waves.
**C. Mechanisms**
Finally, while the results are overall impressive, and hypotheses are made regarding the underlying mechanisms for the performance levels of the network, there is too little analysis of these mechanisms. While the ablation study is important and helpful, much more could be done to characterize the relationship between wavelike activity and network performance.
**D. Minor**
1. Fig. 2: Both plots on the right have the leftmost y-axis digits obscured
2. Fig. 9, top: the plots appear to have their x- and y-labels transposed (or else the lower FD plots and those in Fig. 2 have theirs transposed).
3. Fig. 15 needs axis labels | 4. I looked over the original algorithm the authors used (in Section III of “Recognition and Velocity Computation of Large Moving Objects in Images”—RVC paper below—which I would recommend for the authors to cite), and I wonder if an error in the initial calibration steps (steps 1 & 2) occurred that might explain the speed disparities observed between the RSPs and FDs. |
NIPS_2021_942 | NIPS_2021 | The biggest real-world limitation is that the method does not perform as well as backprop. This is unfortunate, but also understandable. The authors do mention that ASAP has a lower memory footprint, but there are also other methods that can reduce memory footprints of neural networks trained using backprop. Given that this method is worse than backprop, and it is also not easy to implement, I cannot see any practical use for it.
On the theoretical side, although the ideas here are interesting, I take issue with the term "biologically plausible" and the appeal to biological networks. Given that cognitive neuroscience has not yet proceeded to a point where we understand how patterns and learning occur in brains, it is extremely premature to try and train networks that match biological neurons on the surface, and claim that we can expect better performance because they are biology inspired. To say that these networks behave more similarly to biological neurons is true only on the surface, and the claim that these networks should therefore be better or superior (in any metric, not just predictive performance) is completely unfounded (and in fact, we can see that the more "inspiration" we draw from biological neurons, the worse our artificial networks tend to be). In this particular case, humans can perform image classification nearly perfectly, and better than the best CNNs trained with backprop. And these CNNs trained with backprop do better than any networks trained using other methods (including ASAP). To clarify, I do not blame (or penalize) the authors for appealing to biological networks, since I think this is a bigger issue in the theoretical ML community as a whole, but I do implore them to soften the language and recognize the severe limitations that prevent us from claiming homology between artificial neural networks and biological neural networks (at least in this decade). I encourage the authors to explicitly clarify that: 1) biological neurons are not yet understood, so drawing inspiration from the little we know does not improve our chances at building better artificial networks; 2) the artificial networks trained using ASAP (and similar methods) do not improve our understanding of biological neurons at all; and 3) the artificial networks trained using ASAP (and similar methods) do not necessarily resemble biological networks (other than the weight transport problem, which is of arguable importance) more than other techniques like backprop. Again, I do not hold the authors accountable for this, and this does not affect the review I gave. | 3) the artificial networks trained using ASAP (and similar methods) do not necessarily resemble biological networks (other than the weight transport problem, which is of arguable importance) more than other techniques like backprop. Again, I do not hold the authors accountable for this, and this does not affect the review I gave. |
wcgfB88Slx | EMNLP_2023 | The following are the questions I have and these are not necessarily 'reasons to reject'.
- I was looking for a comparison with the zero-shot chain-of-thought baseline, which the authors refer to as ZOT (Kojima et al., 2022). The example selection method has a cost. Also, few-shot experiments involve a higher token usage cost than zero-shot.
- Some of the numbers when comparing the proposed method against the baselines seem to be pretty close. I wonder if the authors did any statistical significance test?
- A parallel field to explanation selection is prompt/instruction engineering, where we often change the zero-shot instruction. Another alternative is prompt-tuning via gradient descent. I wonder if the authors have any thoughts regarding the tradeoff.
- Few-shot examples have various types of example biases, such as majority bias, recency bias, etc. (http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf, https://aclanthology.org/2023.eacl-main.130/, https://aclanthology.org/2022.acl-long.556.pdf). I wonder if the authors have any thoughts on what the robustness looks like when their method is applied.
I am looking forward to hearing answers to these questions from the authors. | - Some of the numbers when comparing the proposed method against the baselines seem to be pretty close. I wonder if the authors did any statistical significance test?
NIPS_2016_321 | NIPS_2016 | #ERROR! | * The paper focuses on learning HMMs with non-parametric emission distributions, but it does not become clear how those emission distributions affect inference. Which of the common inference tasks in a discrete HMM (filtering, smoothing, marginal observation likelihood) can be computed exactly/approximately with an NP-SPEC-HMM? |
vexCLJO7vo | EMNLP_2023 | 1. This paper aims to evaluate the performance of current LLMs on different temporal factors and selects three types of factors: scope, order, and counterfactual. What is the rationale behind selecting these three types of factors, and how do they relate to each other?
2. More emphasis should be placed on prompt design. This paper introduces several prompting methods to address issues in MenatQA. Since different prompts may result in varying performance outcomes, it is essential to discuss how to design prompts effectively.
3. The analysis of experimental results is insufficient. For instance, the authors only mention that the scope prompting method shows poor performance on GPT-3.5-turbo, but they do not provide any analysis of the underlying reasons behind this outcome. | 3. The analysis of experimental results is insufficient. For instance, the authors only mention that the scope prompting method shows poor performance on GPT-3.5-turbo, but they do not provide any analysis of the underlying reasons behind this outcome. |
NIPS_2018_101 | NIPS_2018 | Weakness: The ideas of extension seem to be intuitive and not very novel (the authors seem to honestly admit this in the related work section when comparing this work with [3,8,9]). This seems to make the work a little bit incremental. In the experiments, Monte Carlo (batch-ENS) works pretty well consistently, but the authors do not provide intuitions or theoretical guarantees to explain the reasons. Questions: 1. In [12], they also show the results of GpiDAPH3 fingerprint. Why not also run the experiment here? 2. You said only 10 out of 120 datasets are considered as in [7,12]. Why not compare batch and greedy in other 110 datasets? 3. If you change the budget (T) in the drug dataset, does the performance decay curve still fits the conclusion of Theorem 1 well (like Figure 1(a))? 4. In the material science dataset, the pessimistic oracle seems not to work well. Why do your explanations in Section 5.2 not hold in the dataset? Suggestions: Instead of just saying that the drug dataset fits Theorem 1 well, it will be better to characterize the properties of datasets to which you can apply Theorem 1 and your analysis shows that this drug dataset satisfies the properties, which naturally implies Theorem 1 hold and demonstrate the practical value of Theorem 1. Minor suggestions: 1. Equation (1): Using X to denote the candidate of the next batch is confusing because it is usually used to represent the set of all training examples 2. In the drug dataset experiment, I cannot find how large the budget T is set 3. In section 5.2, the comparison of myopic vs. nonmyopic is not necessary. The comparison in drug dataset has been done at [12]. In supplmentary material 4. Table 1 and 2: why not also show results when batch size is 1 as you did in the drug dataset? 5. In the material science dataset experiment, I cannot find how large the budget T is set After rebuttal: Thanks for the explanation. It is nice to see the theorem roughly holds for the batch size part when different budgets are used. However, based on this new figure, the performance does not improve with the rate 1/log(T) as T increases. I suggest authors to replace Figure 1a with the figure in the rebuttal and address the possible reasons (or leave it as future work) of why the rate 1/log(T) is not applied here. There are no major issues found by other reviewers, so I changed my rate from tending to accept to accepting. | 2. You said only 10 out of 120 datasets are considered as in [7,12]. Why not compare batch and greedy in other 110 datasets? |
NIPS_2020_1584 | NIPS_2020 | These are not weaknesses but rather questions. 1) Is there a general relation between the strict complementarity, F*, and the pyramidal width? I understand it in the case of the simplex, I wonder if something can be said in general. 2) It would be useful to discuss some practical applications (for example in sparse recovery) and the implication of the analysis to those. In general, I found the paper would be stronger if better positioned wrt particular practical applications. 3) I found the motivation in the introduction with the low-rank factorization unnecessary given that the main result is about polytopes. If the result has implications for low-rank matrix factorization I would like to see them explicitly discussed. | 3) I found the motivation in the introduction with the low-rank factorization unnecessary given that the main result is about polytopes. If the result has implications for low-rank matrix factorization I would like to see them explicitly discussed. |
NIPS_2018_544 | NIPS_2018 | - the presented results do not give me the confidence to say that this approach is better than any of the other due to a lot of ad-hoc decisions in the paper (e.g. digital identity part of the code vs full code evaluation, the evaluation itself, and the choice of the knn classifier) - the results in table 1 are quite unusual - there is a big gap between the standard autoencoders and the variational methods which makes me ask whether there's something particular about the classifier used (knn) that is a better fit for autoencoders, the particularities of the loss or the distribution used when training. why was k-nn used? a logical choice would be a more powerful method like svm or a multilayer perceptron. there is no explanation for this big gap - there is no visual comparison of what some of the baseline methods produce as disentangled representations so it's impossible to compare the quality of (dis)entanglement and the semantics behind each factor of variation - the degrees of freedom among features of the code seem binary in this case, therefore it is important which version of vae and beta-vae, as well as infogan are used, but the paper does not provide those details - the method presented can easily be applied on unlabeled data only, and that should have been one point of comparison to the other methods dealing with unlabeled data only - showing whether it works on par with baselines when no labels are used, but that wasn't done. the only trace of that is in figure 3, but that compares the dual and primary accuracy curves (for supervision of 0.0), but does not compare it with other methods - though parts of the paper are easy to understand, in whole it is difficult to get the necessary details of the model and the training procedure (without looking into the appendix which i admittedly did not do, but then, i want to understand the paper fully (without particular details like hyperparameters) from the main body of the paper). i think the paper would benefit from another writing iteration Questions: - are the features of the code binary? because i didn't find it clear from the paper. if so, then the effects of varying the single code feature is essentially a binary choice, right? did the baseline (beta-)vae and infogan methods use the appropriate distribution in that case? - is there a concrete reason why you decided to apply the model only in a single pass for labelled data, because you could have applied the dual-swap on labelled data too - how is this method applied at test time - one needs to supply two inputs? which ones? does it work when one supplies the same input for both? - 57 - multi dimension attribute encoding, does this essentially mean that the projection code is a matrix, instead of a vector, and that's it? - 60-62 - if the dimensions are not independent, then the disentanglement is not perfect - meaning there might be correlations between specific parts of the representation. did you measure/check for that? - can you explicitly say what labels for each dataset in 4.1 are - where are they coming from? from the dataset itself? from what i understand, that's easy for the generated datasets (and generated parts of the dataset), but what about cas-peal-r1 and mugshot? - i find the explanation in 243-245 very unclear. could you please elaborate what this exactly means - why is it 5*3 (and not 5*2, e.g. in the case of beta-vae where there's a mean and stdev in the code) - 247 - why was k-nn used, and not some other more elaborate classifier? what is the k, what is the distance metric used? - algorithm 1, ascending the gradient estimate? what is the training algorithm used? isn't this employing a version of gradient descent (minimising loss)? - what is the effect of the balance parameter? it is a seemingly important parameter, but there are no results showing a sweep of that parameter, just a choice between 0.5 and 1 (and why does 0.5 work better)? - did you try beta-vae with significantly higher values of beta (a sweep between 10 and 150 would do)? Other: - the notation (dash, double dash, dot, double dot over inputs) is a bit unfortunate because it's difficult to follow - algorithm 1 is a bit too big to follow clearly, consider revising, and this is one of the points where it's difficult to follow the notation clearly - figure 1, primary-stage: I assume that f_\phi is as a matter of fact two f_\phis with shared parameters. please split it, otherwise the reader can think that f_\phi accepts 4 inputs (and in the dual-stage it accepts only 2) - figure 3 a and b are very messy / poorly designed - it is impossible to discern different lines in a because there are too many of them, plus it's difficult to compare values among the ones which are visible (both in a and b). log scale might be a better choice. as for the overfitting in a, from the figure, printed version, i just cannot see that overfitting - 66 - is shared - 82-83 Also, require limited weaker... sentence not clear/grammatically correct - 113 - one domain entity - 127 - is shared - 190 - conduct disentangled encodings? strange word choice - why do all citations in parentheses have a blank space between the opening parentheses and the name of the author? - 226 - despite the qualities of hybrid images are not exceptional - sentence not clear/correct - 268 - is that fig 2b or 2a? - please provide an informative caption in figure 2 (what each letter stands for) UPDATE: I've read the author rebuttal, as well as all the reviews again, and in light of a good reply, I'm increasing my score. In short, I think the results look promising on dSprites and that my questions were well answered. I still feel i) the lack of clarity is apparent in the variable length issue as all reviewers pointed to that, and that the experiments cannot give a fair comparison to other methods, given that the said vector is not disentangled itself and could account for a higher accuracy, and iii) that the paper doesn't compare all the algorithms in an unsupervised setting (SR=0.0) where I wouldn't necessarily expect better performance than the other models. | - can you explicitly say what labels for each dataset in 4.1 are - where are they coming from? from the dataset itself? from what i understand, that's easy for the generated datasets (and generated parts of the dataset), but what about cas-peal-r1 and mugshot?
NIPS_2017_217 | NIPS_2017 | Weakness:
- The paper is rather incremental with respect to [31]. The authors adapt the existing architecture for the multi-person case producing identity/tag heatmaps with the joint heatmaps.
- Some explanations are unclear and rather vague, especially the solution for the multi-scale case (end of Section 3) and the pose refinement used in Section 4.4 / Table 4. This is important, as most of the improvement with respect to state-of-the-art methods seems to come from these two elements of the pipeline, as indicated in Table 4. Comments:
The state-of-the-art performance in multi-person pose estimation is a strong point. However, I find that there is too little novelty in the paper with respect to the stacked hour glasses paper and that explanations are not always clear. What seems to be the key elements to outperform other competing methods, namely the scale-invariance aspect and the pose refinement stage, are not well explained. | - The paper is rather incremental with respect to [31]. The authors adapt the existing architecture for the multi-person case producing identity/tag heatmaps with the joint heatmaps. |
NIPS_2021_1078 | NIPS_2021 | Regarding the weaknesses of this paper, I am concerned with the following two points: 1) The assumption about termination states of instructions is quite strong. In the general case, it is very expensive to label a large amount of data manually. 2) It seems that performance and sample efficiency are sensitive to λ parameters.
(Page 9, lines 310-313) I don't understand how the process of calculating the λ is done. How is λ computed from step here?
(Page 8 lines 281-285) The authors explain why ELLA does not increase sample efficiency in a COMBO environment, but I don't quite understand what it means.
[1] Yuri Burda et al, Exploration by Random Network Distillation, ICLR 2019
[2] Deepak Pathak et al, Curiosity-driven Exploration by Self-supervised Prediction, ICML 2017
[3] Roberta Raileanu et al, RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments, ICLR 2020 | 2) It seems that performance and sample efficiency are sensitive to λ parameters. (Page 9, lines 310-313) I don't understand how the process of calculating the λ is done. How is λ computed from step here? (Page 8 lines 281-285) The authors explain why ELLA does not increase sample efficiency in a COMBO environment, but I don't quite understand what it means. [1] Yuri Burda et al, Exploration by Random Network Distillation, ICLR 2019 [2] Deepak Pathak et al, Curiosity-driven Exploration by Self-supervised Prediction, ICML 2017 [3] Roberta Raileanu et al, RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments, ICLR 2020 |
NIPS_2018_45 | NIPS_2018 | ---------- The main section of the paper (section 3) seems carelessly written. Some obvious weaknesses: - Algorithm 1 seems more confusing, than clarifying: a) Shouldn't the gradient step be taken in the direction of the gradient of the loss with respect to Theta? b) There is no description of the variables, most importantly X and f. It is better for the reader to define them in the algorithm than later in the text. Otherwise, the algorithm definition seems unnecessary. - Equation (1) is very unclear: a) Is the purpose to define a loss function or the optimization problem? It seems that it is mixing both. b) The optimization variable x is defined to be in R^n. Probably it is meant to be in R^k? c) The constraints notation (s.t. C(A, x)) is rather unusual. - It is briefly mentioned that an alternating direction method is used to solve the min-min problem. Which method? - The constraints in equation (2) are identical to the ones in equation (3). They can be mentioned as such to gain space. - In section 4.1, line 194, K = 10, presumably refers to the number of atoms in the dictionary, namely it should be a small k? The same holds for section 4.4, line 285. - In section 4.1, why is the regularizer coefficient gamma set to zero? Intuitively, structured sparcity should be particularly helpful in finding keypoint correspondences. What is the effect on the solution when gamma is larger than zero? The experimental section of the paper seems well written (with a few exceptions, see above). Nevertheless, the experiments in 4.2 and 4.3 each compare to only one existing work. In general, I can see the idea of the paper has merit, but the carelessness in the formulations in the main part and the lack of comparisons to other works make me hesitant to accept it as is at NIPS. | - It is briefly mentioned that an alternating direction method is used to solve the min-min problem. Which method? |
ICLR_2021_1916 | ICLR_2021 | Weakness: 1. Some experiments are hard to understand. Table 1 shows the TD-error and the absolute state-action value, which doesn't demonstrate that a small approximation error would cause significant estimation error, which in turn would cause the sub-optimal fixed points.
2. The effectiveness of lower-bound double Q-learning is doubtful. In MsPacman in Figure 2, the algorithm shows a slight performance decrease for Clipped DDQN; in some environments such as WizardOfWor, Zaxxon, RoadRunner, and BattleZone, these algorithms seem to converge to the same solutions. Besides, the algorithm could cause overestimation of the true maximum value. | 2. The effectiveness of lower-bound double Q-learning is doubtful. In MsPacman in Figure 2, the algorithm shows a slight performance decrease for Clipped DDQN; in some environments such as WizardOfWor, Zaxxon, RoadRunner, and BattleZone, these algorithms seem to converge to the same solutions. Besides, the algorithm could cause overestimation of the true maximum value.
ICLR_2022_3183 | ICLR_2022 | Weakness: 1. The novelty is limited. Interpreting the predictions of deep neural networks using a linear model is not a new approach for model interpretation. 2. The motivation of the paper is not clear. It seems to be an experiment report about ~193k models that obtains obvious results, such as that the middle layers represent the more generalizable features. But it is not about interpretation. 3. The writing is not clear; it's hard to read and follow the work. 4.
Concerns: 1. Why are 101 "source" tasks needed? What are these? 2. Even though 101 tasks can be done on the retinal image, it is not a unified domain in which to do research on model interpretation. 3. The motivation and conclusion should be clearly presented so that the reader can understand the contribution of this paper to model interpretation. | 1. The novelty is limited. Interpreting the predictions of deep neural networks using a linear model is not a new approach for model interpretation.
NIPS_2020_1602 | NIPS_2020 | There are some questions/concerns, however. 1. Haven't you tried to set hyperparameters for the baseline models via cross-validation (i.e. the same method you used for your own model)? Setting them to their default values (even taken from other papers) may carry a risk of unfair comparison against yours. I do not think this is the case, but I would recommend the authors carry out the corresponding experiments. 2. It is unclear to me why the performance of DNN+MMA becomes worse than vanilla DNN when lambda becomes small? See fig. 3-4. I would expect it to approach the vanilla method from above, not from below. | 2. It is unclear to me why the performance of DNN+MMA becomes worse than vanilla DNN when lambda becomes small? See fig. 3-4. I would expect it to approach the vanilla method from above, not from below.
ARR_2022_121_review | ARR_2022 | 1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader that this is where prior research has been done, as well as what differentiates the current work with earlier work. A clear separation between the "Introduction" and "Related Work" sections would certainly improve the readability of the paper.
2. The paper does not compare the results with some of the earlier research work from 2020. While the authors have explained their reasons for not doing so in the author response along the lines of "Those systems are not state-of-the-art", they have compared the results to a number of earlier systems with worse performances (Eg. Taghipour and Ng (2016)).
Comments: 1. Please keep a separate "Related Work" section. Currently "Introduction" section of the paper reads as 2-3 paragraphs of introduction, followed by 3 bullet points of related work and again a lot of introduction. I would suggest that you shift those 3 bullet points ("Traditional AES", "Deep Neural AES" and "Pre-training AES") to the Related work section.
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
3. While the out of domain experiment is pre-trained on other prompts, it is still fine-tuned during training on the target prompt essays. Typos: 1. In Table #2, Row 10, the reference for R2BERT is Yang et al. (2020), not Yang et al. (2019).
Missing References: 1. Panitan Muangkammuen and Fumiyo Fukumoto. "Multi-task Learning for Automated Essay Scoring with Sentiment Analysis". 2020. In Proceedings of the AACL-IJCNLP 2020 Student Research Workshop.
2. Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, Abhijit Mishra, Pushpak Bhattacharyya. 2020. Happy Are Those Who Grade without Seeing: A Multi-Task Learning Approach to Grade Essays Using Gaze Behaviour. In Proceedings of the 2020 AACL-IJCNLP Main Conference. | 2. The paper does not compare the results with some of the earlier research work from 2020. While the authors have explained their reasons for not doing so in the author response along the lines of "Those systems are not state-of-the-art", they have compared the results to a number of earlier systems with worse performances (Eg. Taghipour and Ng (2016)). Comments: |
wcqBfk4jv6 | EMNLP_2023 | 1. Although the choice of models seems fine at first, I am not sure how much of the citation information is actually being utilized. The maximum input size can only be 1024, and given the size of the articles and the abstracts, I am not sure how much information is being used in either. Can you comment on how much information is lost at the token level in what is being fed to the model?
2. I like the idea of aggregation using various cited articles. The only problem is that I am possibly confused as to how you are ensuring the quality of the chosen articles. It could be that the claims made in the article contradict the claims made in the cited articles, or are not at all related to the claims being discussed in the articles. Do you have any analysis of the correlation between the cited articles and the main articles, and whether it affects the quality of generation?
3. The authors report significance testing, but I think the choice of test might be incorrect. Since the comparison is to be done between two samples generated from the same input, why wasn't a paired test setting used, such as the Wilcoxon signed-rank test? | 3. The authors report significance testing, but I think the choice of test might be incorrect. Since the comparison is to be done between two samples generated from the same input, why wasn't a paired test setting used, such as the Wilcoxon signed-rank test?
NIPS_2022_1034 | NIPS_2022 | Regarding the background: the authors should consider adding a preliminary section to introduce the background knowledge on the nonparametric kernel regression, kernel density estimation, and the generalized Fourier Integral theorem, which could help the readers easily follow the derivation of Section 2 and understand the motivation to use the Fourier Integral theorem as a guide to developing a new self-attention mechanism.
Regarding the experimental evaluation: the issues are three-fold. 1) Since the authors provide an analysis of the approximation error between estimators and true functions (Theorems 1 and 2), it would be informative to provide an empirical evaluation of these quantities on real data as further verification. 2) The experiments should be more comprehensive and general. For both the language modeling task and the image classification task, the model size is limited and the baselines are restrictive. 3) Since the FourierFormer needs customized operators for implementation, the authors should also provide memory/time cost profiling compared to popular Transformer architectures. Based on these issues, the efficiency and effectiveness of the FourierFormer are doubtful.
-------After Rebuttal------- Thank authors for the detailed response. Most of my concerns have been addressed. I have updated my scores to 6. | 2) The experiments should be more comprehensive and general. For both the language modeling task and image classification task, the model size is limited and the baselines are restrictive. |
NIPS_2020_68 | NIPS_2020 | 1. It is unclear how guaranteeing stationary points that have small gradient norms translates to good generalization. The bounds just indicate that these algorithms reach one of the many stationary points for adaptive gradient methods and don't talk about how reaching one of the potentially many population stationary points especially in the non-convex regime can translate to good generalization. A remark on this would be helpful. 2. Line 124-125: For any w, the Hoeffding's bound holds true as long as the samples are drawn independently and so it is always possible to show inequality (2). Stochastic algorithms moreover impose conditioning on the previous iterate further guaranteeing that Hoeffding inequality holds. It will be great if the authors can elaborate on this. 3. The bounds in Theorem 1 have a dependence on d, which the authors have discussed. However, if \mu is small, the bounds are moot. If \mu is large, then the concentration guarantees are not very useful. Based on values in Theorem 2, latter seems to be the case. 4. It seems weird that the bounds in Theorems 2 and 4 do not depend on the initialization w_0 but on w_1. 5. For experiments on Penn-Tree bank, it seems that the algorithms are not stable with respect to train perplexity. | 2. Line 124-125: For any w, the Hoeffding's bound holds true as long as the samples are drawn independently and so it is always possible to show inequality (2). Stochastic algorithms moreover impose conditioning on the previous iterate further guaranteeing that Hoeffding inequality holds. It will be great if the authors can elaborate on this. |
ICLR_2023_3923 | ICLR_2023 | Weakness:
1: The paper focuses on the metric-learning approach to meta-learning; what about optimization-based models, e.g. MAML [a], Implicit-MAML [b], etc.? How will the model behave with these approaches? Would the same analysis hold for other meta-learning models?
2: The base model is implemented based on ProtoNet (Deleu et al., 2019) with default parameters given by (Medina et al., 2020). How can we ensure these hyperparameters are optimal?
3: For the domain-invariant auxiliary learning, where we "treat each image as its own class to form the support", how is this a valid labelling? It may be a wrong class assignment, in which case the result might be expected to be worse, yet the model works. I cannot understand the proper intuition; could you please provide a detailed intuition?
4: Is it possible to add some optimization-based meta-learning approaches in Table 1, like MAML/Implicit-MAML?
5: What is the size of auxiliary data?
[a] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
[b] Meta-Learning with Implicit Gradients | 4: Is it possible to add some optimization-based meta-learning approaches in Table 1, like MAML/Implicit-MAML?
FXObwPWgUc | EMNLP_2023 | * The paper could have provided a clearer use case for why the NMT model is still necessary. When powerful and large general language models are used for post-editing, why should one use a specialized neural machine translation model to get the translations in the first place? Including GPT-4 translations plus GPT-4 post-editing scores would have been insightful.
* The paper lacks an ablation study explaining why they chose the prompt in this specific way, e.g., few-shot examples for CoT might improve performance.
* The reliance on an external model via API, where it's unclear how the underlying model changes, makes it hard to reproduce the results. There is also a risk of data pollution since translations might already be in the training data of GPT-4. The authors only state that the WMT-22 test data is after the cutoff date of the GPT-4 training data, but they do not say anything about the WMT-20 and WMT-21 datasets that they also use.
* The nature of post-edited translation experiments is only partially done: En-Zh and Zh-En for GPT-4 but not for En-De and De-En for GPT-3.5.
* The title is misleading since the authors also evaluate GPT-3.5. | * The paper lacks an ablation study explaining why they chose the prompt in this specific way, e.g., few-shot examples for CoT might improve performance. |
ICLR_2022_1267 | ICLR_2022 | Weakness
1. The proposed model's parameterization depends on the number of events and predicates, making it difficult to generalize to unseen events or requiring retraining.
2. The writing needs to be improved to clearly discuss the proposed approach.
3. The experimental baselines are of the authors' own design; the paper lacks a comparison to literature baselines using the same dataset. If there is no such baseline, please discuss the criteria used in choosing these baselines. Details:
1. Page 1, "causal mechanisms", causality is different from temporal relationship. Please use the terms carefully.
2. Page 3, it seems to me that M_T is defined over the probabilities of atomic events. The notation, as it is used, makes it difficult to make sense of this concept. Please consider providing examples to explain M_T.
3. Page 4, equation (2), it is not usual to feed probabilities to convolution.
a. Please discuss in Section 3 how your framework can handle raw inputs, such as video or audio. Do you need an atomic event predictor or human labels to use your proposed system? If so, is it possible to extend your framework to directly take video as input instead of event probability distributions? Can you do end-to-end training from raw inputs, such as video or audio? (Although you mentioned Faster R-CNN in the experiment section, it is better to discuss the whole pipeline in the methodology.)
b. Have you tried discrete event embeddings to represent the atomic and composite events, so that the framework can learn distributional embedding representations of events and thereby learn the temporal rules?
4. Page 4, please explain what you want to achieve with M_A = M_C \otimes M_D. It is unusual to multiply a length by a conv1D output. Also, please define \otimes here. I am guessing it is elementwise multiplication from the context.
5. Page 4, "M_{D:,:,l}=l": this can be thought of as a positional encoding, but it is not clear to me why this can be taken as a positional encoding.
6. Page 6, please detail how you sample the top c predicates. Please define what s is in a = softmax(s). It seems to me that the dimension of s, with \sum_i \binom{c}{i}, can be quite large, making softmax(s) very costly. | 1. Page 1, "causal mechanisms", causality is different from temporal relationship. Please use the terms carefully.
CblASBV3d4 | EMNLP_2023 | - While studying instability of LIME, the work likely confuses that instability with various
other sources of instability involved in the methods:
- Instability in the model being explained.
- Instability of ranking metrics used.
- Instability of the LIME implementation used. This one drops entire words instead of perturbing
embeddings which would be expected to be more stable. Dropping words is a discrete and
impactful process compared to embedding perturbation.
Suggestion: control for other sources of instability. That is, measure and compare model
instability vs. resulting LIME instability; measure and compare metric instability vs. resulting
LIME instability. Lastly consider evaluating the more continuous version of input perturbation
for LIME based on embeddings. While the official implementation does not use embeddings, it
shouldn't be too hard to adjust it given token embedding inputs.
- Sample complexity of the learning LIME uses to produce explanations is not discussed. LIME
attempts to fit a linear model onto a set of inputs of a model which is likely not linear. Even
if the target model was linear, LIME would need as many samples as there are input features to be
able to reconstruct the linear model == explanation. Add to this the likelihood of the target not
being linear, the number of samples needed to estimate some stable approximation increases
greatly. None of the sampling rates discussed in the paper is suggestive of even getting close to
the number of samples needed for NLP models.
Suggestion: Investigate and discuss sample complexity for the type of linear models LIME uses as
there may even be tight bounds on how many samples are needed to achieve a close-to-optimal/stable solution.
Suggestion: The limitations section discusses that the computational effort is a bottleneck in using larger
sample sizes. I thus suggest investigating smaller models. It is not clear that using
"state-of-the-art" models is necessary to make the points the paper is attempting to make.
- Discussions around focus on changing or not changing the top feature are inconsistent throughout
the work and the ultimate reason behind them is hard to discern. Requiring that the top feature
does not change seems like a strange requirement. Users might not even look at the features below
the top one so attacking them might be irrelevant in terms of confusing user understanding.
"Moreover, its experiment settings are not ideal as it allows perturbations of top-ranked
predictive features, which naturally change the resulting explanations"
Isn't changing the explanations the whole point in testing explanation robustness? You also cite
this point later in the paper:
"Moreover, this requirement also accounts the fact that end-users often consider only the
top k most important and not all of the features"
Use of the ABS metric which focuses on the top-k only also is suggestive of the importance of top
features. If top features are important, isn't the very top the most important of all? Later:
"... changing the most important features will likely result in a violation to constraint in
Eq. (2)"
Possibly but that is what makes the perturbation/attack problem challenging. The text that
follows does not make sense to me:
"Moreover, this will not provide any meaningful insights to analysis on stability
in that we want to measure how many changes in the perturbed explanation that
correspond to small (and not large) alterations to the document.
I do not follow. The top feature might change without big changes to the document.
Suggestion: Coalesce the discussion regarding the top features into one place and present a
self-consistent argument of where and why they are allowed to be changed or not.
Smaller things:
- The requirement of black-box access should not dismiss comparisons with white-box attacks as baselines.
- You write:
"As a sanity check, we also constrain the final perturbed document to result in at least one
of the top k features decreasing in rank."
This does not sound like a sanity check but rather a requirement of your method. If it were
a sanity check, you'd measure whether at least one of the top k features decreased without imposing
it as a requirement.
- The example of Table 5 seems to actually change the meaning significantly. Why was such a change
allowed given "think" (verb) and "thinking" (most likely adjective) changed part of speech?
- You write:
"Evidently, replacing any of the procedure steps of XAIFOOLER with a random mechanism dropped
its performance"
I'm unsure that "better than random" is a strong demonstration of capability. | - You write: "Evidently, replacing any of the procedure steps of XAIFOOLER with a random mechanism dropped its performance" I'm unsure that "better than random" is a strong demonstration of capability. |
NIPS_2018_936 | NIPS_2018 | Weakness: - I would like to have seen a discussion of how these results relate to the lower bounds on kernel learning using low-rank approximation given in "On the Complexity of Learning with Kernels". - In Assumption 5, the operator L is undefined. Should that be C? | - I would like to have seen a discussion of how these results relate to the lower bounds on kernel learning using low-rank approximation given in "On the Complexity of Learning with Kernels".
34QscjTwOc | ICLR_2024 | - As far as I see, there is no mention of the limitations of this work, let alone a Limitations section. No work is perfect, and every work should include a Limitations section so that (to give only two reasons here for concision) (1) readers are quickly aware of the cases in which this work applies and in which it doesn't, and (2) readers have confidence that the paper is at least somewhat cognizant of (1). I'm unsure whether this is in the Appendix or Supplementary Material.
- Very limited Related Works section. A large section of related works that is relevant is "sparsity in neural networks," and this could be broken down into multiple relevant subsections, such as "sparsity over training progress", "sparsity with respect to {eigenvalues, spectral norms, Hessian properties [1], etc.}"
- Limited rigor in original concepts (at least original as far as I know), such as the categorization of salient features.
- What quantitative rigor justifies the categorization of a feature into one of the 5 mentioned categories?
- Is there some sort of goodness of fit test or statistical hypothesis test or principled approach for assigning a feature to a category?
- What if the training epochs were extended and the utility trended in a way that changed categorization?
- What was the stopping criteria for training?
- Was any analysis done for the reliability of assigning features to categories?
- Unclear in several aspects. Some include
- Why use only one layer for each of the DNNs? How was this layer selected? How would results changing using a different intermediate layer?
- Why use the threshold values for rank, approximation error for salient feature count, the number of training epochs used, among others?
- Are the results in Figure 5a, 5b, and 5c each for one "sample", "sentence", and "image" in the single DNN model and single dataset listed?
- Do Figures X and Y show results for randomly sampled images? Since it's impossible to confirm whether this was actually the case, are there examples that do not align with these results, or even contradict these results? Is there analysis as to why?
- The novelty of using PCA to reduce interaction count seems incremental and the significance of the paper results is unclear to me. Using PCA to reduce the interaction count seems intuitive, as PCA aims to retain the maximum information in the data with the reduced dimensionality chosen, assuming certain assumptions are met. How well are the assumptions met?
[1] Dombrowski, Ann-Kathrin, Christopher J. Anders, Klaus-Robert Müller, and Pan Kessel. "Towards robust explanations for deep neural networks." Pattern Recognition 121 (2022): 108194. | - The novelty of using PCA to reduce interaction count seems incremental and the significance of the paper results is unclear to me. Using PCA to reduce the interaction count seems intuitive, as PCA aims to retain the maximum information in the data with the reduced dimensionality chosen, assuming certain assumptions are met. How well are the assumptions met? [1] Dombrowski, Ann-Kathrin, Christopher J. Anders, Klaus-Robert Müller, and Pan Kessel. "Towards robust explanations for deep neural networks." Pattern Recognition 121 (2022): 108194. |
HtQvhCRTxo | EMNLP_2023 | 1. The scope of the paper seems a bit narrow. The dataset focuses only on company relations, but makes strong claims about generalization of RC models. Perhaps that claim needs to be supported with datasets on a few other domains.
2. The dataset is limited to company relations from Wikipedia pages. This may limit the diversity of the dataset. It may also not be representative of real-world data.
3. The dataset contains only a limited number of relations and examples, which may limit its usability in different scenarios.
4. The few-shot RC models considered in the paper are not state-of-the-art models (e.g. https://aclanthology.org/2022.coling-1.205.pdf, https://ieeexplore.ieee.org/abstract/document/10032649/). How does the performance compare to relation extraction/generation models in few-shot settings. | 4. The few-shot RC models considered in the paper are not state-of-the-art models (e.g. https://aclanthology.org/2022.coling-1.205.pdf, https://ieeexplore.ieee.org/abstract/document/10032649/). How does the performance compare to relation extraction/generation models in few-shot settings. |
ICLR_2023_591 | ICLR_2023 | There are multiple axes along which the current paper falls short of applying to realistic settings: 1) the assumption that one is given an oracle adversary, i.e. we have access to the worst-case perturbation (as opposed to a noisy gradient oracle, i.e. just doing PGD); 2) the results in section 4 apply only to shallow fully-connected ReLU networks; 3) the results hold only in a regime very close to initialization and it is assumed one has an early stopping criterion/oracle.
Weaknesses 2) and 3) are not unique to this work, and thus I heavily discount their severity when considering my overall recommendation. | 2) the results in section 4 apply only to shallow fully-connected ReLU networks; |
NIPS_2016_43 | NIPS_2016 | Weakness: 1. The organization of this paper could be further improved, such as giving more background knowledge of the proposed method and bringing the description of the related literature forward. 2. It will be good to see some failure cases and related discussion. | 1. The organization of this paper could be further improved, such as giving more background knowledge of the proposed method and bringing the description of the related literature forward.
ICLR_2023_4455 | ICLR_2023 | 1) Using the center's representation to conduct panoptic segmentation is too similar to PanopticFCN. The core difference would be the island centers for stuff; however, according to Table 6, it does not make significant improvements.
2) Although MaskConver gets significantly better performance than previous works, it is not clear where these improvements come from. It lacks a roadmap-like ablation study from the baseline to MaskConver. For example, in Table 5, the backbones and input sizes are all different among different models, which is not a fair or clear comparison.
3) The novelty of this paper is limited, as it does not propose new modules or training strategies. As it does not provide detailed ablations, one may suspect that the improvements mainly come from a highly engineered strong baseline.
4) Some other representative panoptic segmentation models are not compared, like PanopticFPN, Mask2Former, etc. | 4) Some other representative panoptic segmentation models are not compared, like PanopticFPN, Mask2Former, etc. |
NIPS_2017_351 | NIPS_2017 | - As I said above, I found the writing / presentation a bit jumbled at times.
- The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed).
- I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify.
- Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly.
- Figure 3 is never referenced unless I missed it.
Some things I'm curious about:
- What values were learned for the linear coefficients for combining the marginalized potentials in equations (1)? It would be interesting if different modalities took advantage of different potential orders.
- I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to the swap to the MCB / MCT decision computation modules?
- The discussion of using sequential MCB vs a single MCT layer for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed? | - The discussion of using sequential MCB vs a single MCT layer for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed?
ICLR_2022_2323 | ICLR_2022 | Weakness:
1. The literature review is inaccurate, and connections to prior works are not sufficiently discussed. To be more specific, there are three connections, (i) the connection of (1) to prior works on multivariate unlabeled sensing (MUS), (ii) the connection of (1) to prior works in unlabeled sensing (US), and (iii) the connection of the paper to (Yao et al., 2021).
(i) In the paper, the authors discussed this connection (i). However, the experiments shown in Figure 2 do not actually use the MUS algorithm of (Zhang & Li, 2020) to solve (1); instead the algorithm is used to solve the missing entries case. This seems to be an unfair comparison as MUS algorithms are not designed to handle missing entries. Did the authors run matrix completion prior to applying the algorithm of (Zhang & Li, 2020)? Also, the algorithm of (Zhang & Li, 2020) is expected to fail in the case of dense permutation.
(ii) Similar to (i), the methods for unlabeled sensing (US) can also be applied to solve (1), using one column of B_0 at a time. There is an obvious advantage because some of the US methods can handle arbitrary permutations (sparse or dense), and they are immune to initialization. In fact, these methods were used in (Yao et al., 2021) for solving more general versions of (1) where each column of B has undergone arbitrary and usually different permutations; moreover, this can be applied to the d-correspondence problem of the paper. I kindly wish the authors would consider incorporating discussions and reviews of those methods.
(iii) Finally, the review on (Yao et al., 2021) is not very accurate. The framework of (Yao et al., 2021), when applied to (1), means that the subspace that contains the columns of A and B is given (when generating synthetic data the authors assume that A and B come from the same subspace). Thus the first subspace-estimation step in the pipeline of (Yao et al., 2021) is automatically done; the subspace is just the column space of A. As a result, the method of (Yao et al., 2021) can handle the situation where the rows of B are densely shuffled, as discussed above in (ii). Also, (Yao et al., 2021) did not consider only "a single unknown correspondence". In fact, (Yao et al., 2021) does not utilize the prior knowledge that each column of B is permuted by the same permutation (which is the case of (1)), instead it assumes every column of B is arbitrarily shuffled. Thus it is a more general situation of (1) and of the d-correspondence problem. Finally, (Yao et al., 2021) discusses theoretical aspects of (1) with missing entries, while an algorithm for this is missing until the present work.
2. In several places the claims of the paper are not very rigorous. For example,
(i) Problem (15) can be solved via linear assignment algorithms to global optimality, why do the authors claim that "it is likely to fall into an undesirable local solution"? Also I did not find a comparison of the proposed approach with linear assignment algorithms.
(ii) Problem (16) seems to be "strictly convex", not "strongly convex". Its Hessian has positive eigenvalues everywhere but the minimum eigenvalue is not lower bounded by some positive constant. This is my feeling though, as in the situation of logistic regression, please verify this.
(iii) The Sinkhorn algorithm seems to use O(n^2) time per iteration, as in (17) there is a term C(hat{M_B}), which needs O(n^2) time to be computed. Experiments show that the algorithm needs > 1000 iterations to converge. Hence, in the regime where n << 1000 the algorithm might take much more time than O(n^2) (this is the regime considered in the experiments). Also I did not see any report on running times. Thus I feel uncomfortable to see the author claim in Section 5 that "we propose a highly efficient algorithm".
3. Even though an error bound is derived in Theorem 1 for the nuclear norm minimization problem, there is no guarantee of success on the alternating minimization proposal. Moreover, the algorithm requires several parameters to tune, and is sensitive to initialization. As a result, the algorithm has very large variance, as shown in Figure 3 and Table 1. Questions:
1. In (3) the last term r+H(pi_P) and C(pi_P) is very interesting. Could you provide some intuition how it shows up, and in particular give an example?
2. I find Assumption 1 not very intuitive; and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close?
3. Solving the nuclear norm minimization program seems to be NP-hard as it involves optimization over permutation matrices and a complicated objective. Is there any hardness result for this problem?
Suggestions: The following experiments might be useful.
1. Sensitivity to permutation sparsity: As shown in the literature of unlabeled sensing, the alternating minimization of (Abid et al., 2017) works well if the data are sparsely permuted. This might also apply to the proposed alternating minimization algorithm here.
2. Sensitivity to initialization: One could present the performance as a function of the distance of initialization M^0 to the ground-truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 so that ||M^0 - M^*||_F < c as initialization, and report the performance accordingly. One would expect that the mean error and variance increase as the quality of initialization decreases.
3. Sensitivity to other hyper-parameters.
Minor Comments on language usage: (for example)
1. "we typically considers" in the above of (7)
2. "two permutation" in the above of Theorem 1
3. "until converge" in the above of (14)
4. ......
Please proofread the paper and fix all language problems. | 3. Sensitivity to other hyper-parameters. Minor Comments on language usage: (for example) 1. "we typically considers" in the above of (7) 2. "two permutation" in the above of Theorem 1 3. "until converge" in the above of (14) 4. ...... Please proofread the paper and fix all language problems. |
NIPS_2022_2605 | NIPS_2022 | Weakness: 1) In the beginning of the paper, the authors often mention that previous works lack flexibility compared to their work. It is not clear what this means, which makes it harder to understand their explanation. 2) The choice of 20 distribution sets is not clearly explained. Can we control the number of distribution sets for each class? What if you select only a few distribution sets? 3) The role of the Transfer Matrix T is not discussed or elaborated. 4) It is not clear how to form the target distribution H. How do you formulate H? 5) There is no discussion on how to generate x_H from H and what x_H consists of. 6) Despite the significant improvement, it is not clear how the proposed method boosts the transferability of the adversarial examples.
As per my understanding, the authors briefly addressed the limitations and negative impact in their work. | 2) The choice of 20 distribution sets is not clearly explained. Can we control the number of distribution sets for each class? What if you select only a few distribution sets?
Pb1DhkTVLZ | EMNLP_2023 | 1. The assessment criteria for the performance of large language models are limited to accuracy metrics. Such a limited view does not necessarily provide a comprehensive representation of the performance of large language models in real-world applications.
2. The method exhibits dependence on similar examples from the training dataset. This raises potential concerns regarding the distribution consistency between the training and test datasets adopted in the study. An in-depth visualization and analysis of the data distributions might be beneficial to address such concerns.
3. The evaluative framework appears somewhat limited in scope. With considerations restricted to merely three Question-Answering tasks and two language models, there are reservations about the method's broader applicability. Its potential to generalize to other reasoning or generation tasks or more advanced models, such as Vicuna or Alpaca, remains a subject of inquiry. | 3. The evaluative framework appears somewhat limited in scope. With considerations restricted to merely three Question-Answering tasks and two language models, there are reservations about the method's broader applicability. Its potential to generalize to other reasoning or generation tasks or more advanced models, such as Vicuna or Alpaca, remains a subject of inquiry.
HjBDSop3ME | EMNLP_2023 | 1) Reducing the vocabulary size is one way of reducing the size of the embedding; however, there are other alternatives such as dimensionality reduction (Raunak et al. 2019), quantization (see some works here (Gholami et al. 2021)), bloom embedding (Serra & Karatzoglou 2017), distillation networks (Hinton et al. 2015), etc.
This work should be compared against some of these related baselines to show its true potential as an innovative approach for embedding compactness.
*Raunak, V., Gupta, V., & Metze, F. (2019, August). Effective dimensionality reduction for word embeddings. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019) (pp. 235-243).
*Serrà, J., & Karatzoglou, A. (2017, August). Getting deep recommenders fit: Bloom embeddings for sparse binary input/output networks. In Proceedings of the Eleventh ACM Conference on Recommender Systems (pp. 279-287).
*Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.
*Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
2) The perplexity experiments are carried out on obsolete language models (n-gram HMM, RNN) that are rarely used nowadays. To better align the paper with current NLP trends, I believe the authors should showcase their approach using transformer-based (masked) language models.
3) The reliance of this approach on a secondary step (vowel-retrieval) to make the text human-readable again could limit its applicability. It would be interesting to see how this representation would perform on generation tasks such as translation or summarization. Since the vowel-retrieval process is not loss-less (word-error-rate 9 for consonant-only and ~3 for masked-vowel representations), it may cause a drastic drop in the performance of the models on such tasks.
4) In addition, this extra vowel-retrieval step would add to the required computational steps and may actually increase the computational requirements (as opposed to the paper’s claim on saving on computational resources). | 2) The perplexity experiments are carried out on obsolete language models (n-gram HMM, RNN) that are rarely used nowadays. To better align the paper with current NLP trends, I believe the authors should showcase their approach using transformer-based (masked) language models. |
NIPS_2020_1897 | NIPS_2020 | - The problem setting of this paper is too simplified, where only a “linearized” self-attention layer is considered, with all non-linear activations, layer normalization and the softmax operation removed. However, given that the main purpose of the paper is to analyze the functionality of self-attention in terms of integrating inputs, these relaxations are not totally unreasonable. - The experiments are not sufficient. More empirical experiments or toy experiments (for the simplified self-attention model considered in the theoretical analysis) need to be done to show the validity of the model relaxations and the consistency of the theoretical analysis with empirical results, besides citing the result in Kaplan et al. 2020. - Although the paper is well organized, some parts are not well explained, especially for the proof sketch for Theorem 1 and Theorem 2. | - The experiments are not sufficient. More empirical experiments or toy experiments (for the simplified self-attention model considered in the theoretical analysis) need to be done to show the validity of the model relaxations and the consistency of the theoretical analysis with empirical results, besides citing the result in Kaplan et al. 2020.
NIPS_2016_232 | NIPS_2016 | weakness of the suggested method. 5) The literature contains other improper methods for influence estimation, e.g. 'Discriminative Learning of Infection Models' [WSDM 16], which can probably be modified to handle noisy observations. 6) The authors discuss the misestimation of mu, but as it is the proportion of missing observations - it is not wholly clear how it can be estimated at all. 5) The experimental setup borrowed from [2] is only semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades. This should be mentioned clearly. 7) As noted, the assumption of random missing entries is not very realistic. It would seem worthwhile to run an experiment to see how this assumption effects performance when the data is missing due to more realistic mechanisms. | 6) The authors discuss the misestimation of mu, but as it is the proportion of missing observations - it is not wholly clear how it can be estimated at all. |
ICLR_2022_2323 | ICLR_2022 | Weakness:
1. The literature review is inaccurate, and connections to prior works are not sufficiently discussed. To be more specific, there are three connections, (i) the connection of (1) to prior works on multivariate unlabeled sensing (MUS), (ii) the connection of (1) to prior works in unlabeled sensing (US), and (iii) the connection of the paper to (Yao et al., 2021).
(i) In the paper, the authors discussed this connection (i). However, the experiments shown in Figure 2 do not actually use the MUS algorithm of (Zhang & Li, 2020) to solve (1); instead the algorithm is used to solve the missing entries case. This seems to be an unfair comparison as MUS algorithms are not designed to handle missing entries. Did the authors run matrix completion prior to applying the algorithm of (Zhang & Li, 2020)? Also, the algorithm of (Zhang & Li, 2020) is expected to fail in the case of dense permutation.
(ii) Similar to (i), the methods for unlabeled sensing (US) can also be applied to solve (1), using one column of B_0 at a time. There is an obvious advantage because some of the US methods can handle arbitrary permutations (sparse or dense), and they are immune to initialization. In fact, these methods were used in (Yao et al., 2021) for solving more general versions of (1) where each column of B has undergone arbitrary and usually different permutations; moreover, this can be applied to the d-correspondence problem of the paper. I kindly wish the authors would consider incorporating discussions and reviews of those methods.
(iii) Finally, the review on (Yao et al., 2021) is not very accurate. The framework of (Yao et al., 2021), when applied to (1), means that the subspace that contains the columns of A and B is given (when generating synthetic data the authors assume that A and B come from the same subspace). Thus the first subspace-estimation step in the pipeline of (Yao et al., 2021) is automatically done; the subspace is just the column space of A. As a result, the method of (Yao et al., 2021) can handle the situation where the rows of B are densely shuffled, as discussed above in (ii). Also, (Yao et al., 2021) did not consider only "a single unknown correspondence". In fact, (Yao et al., 2021) does not utilize the prior knowledge that each column of B is permuted by the same permutation (which is the case of (1)), instead it assumes every column of B is arbitrarily shuffled. Thus it is a more general situation of (1) and of the d-correspondence problem. Finally, (Yao et al., 2021) discusses theoretical aspects of (1) with missing entries, while an algorithm for this is missing until the present work.
2. In several places the claims of the paper are not very rigorous. For example,
(i) Problem (15) can be solved via linear assignment algorithms to global optimality, why do the authors claim that "it is likely to fall into an undesirable local solution"? Also I did not find a comparison of the proposed approach with linear assignment algorithms.
(ii) Problem (16) seems to be "strictly convex", not "strongly convex". Its Hessian has positive eigenvalues everywhere but the minimum eigenvalue is not lower bounded by some positive constant. This is my feeling though, as in the situation of logistic regression, please verify this.
(iii) The Sinkhorn algorithm seems to use O(n^2) time per iteration, as in (17) there is a term C(hat{M_B}), which needs O(n^2) time to be computed. Experiments show that the algorithm needs > 1000 iterations to converge. Hence, in the regime where n << 1000 the algorithm might take much more time than O(n^2) (this is the regime considered in the experiments). Also I did not see any report on running times. Thus I feel uncomfortable to see the author claim in Section 5 that "we propose a highly efficient algorithm".
3. Even though an error bound is derived in Theorem 1 for the nuclear norm minimization problem, there is no guarantee of success on the alternating minimization proposal. Moreover, the algorithm requires several parameters to tune, and is sensitive to initialization. As a result, the algorithm has very large variance, as shown in Figure 3 and Table 1. Questions:
1. In (3) the last term r+H(pi_P) and C(pi_P) is very interesting. Could you provide some intuition how it shows up, and in particular give an example?
2. I find Assumption 1 not very intuitive; and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close?
3. Solving the nuclear norm minimization program seems to be NP-hard as it involves optimization over permutation matrices and a complicated objective. Is there any hardness result for this problem?
Suggestions: The following experiments might be useful.
1. Sensitivity to permutation sparsity: As shown in the literature of unlabeled sensing, the alternating minimization of (Abid et al., 2017) works well if the data are sparsely permuted. This might also apply to the proposed alternating minimization algorithm here.
2. Sensitivity to initialization: One could present the performance as a function of the distance of initialization M^0 to the ground-truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 so that ||M^0 - M^*||_F < c as initialization, and report the performance accordingly. One would expect that the mean error and variance increase as the quality of initialization decreases.
3. Sensitivity to other hyper-parameters.
Minor Comments on language usage: (for example)
1. "we typically considers" in the above of (7)
2. "two permutation" in the above of Theorem 1
3. "until converge" in the above of (14)
4. ......
Please proofread the paper and fix all language problems. | 2. Sensitivity to initialization: One could present the performance as a function of the distance of initialization M^0 to the ground-truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 so that ||M^0 - M^*||_F < c as initialization, and report the performance accordingly. One would expect that the mean error and variance increase as the quality of initialization decreases.
NIPS_2022_2592 | NIPS_2022 | - (major) I don’t agree with the limitation (ii) of current TN models: “At least one Nth-order factor is required to physically inherit the complex interactions from an Nth-order tensor”. TT and TR can model complex modes interactions if the ranks are large enough. The fact that there is a lack of direct connections from any pair of nodes is not a limitation because any nodes are fully connected through a TR or TT. However, the price to pay with TT or TR to model complex modes interactions is having bigger core tensor (larger number of parameters). The new proposed topology has also a large price to pay in terms of model size because the core tensor C grows exponentially with the number of dimensions, which makes it intractable in practice. The paper lacks from a comparison of TR/TT and TW for a fixed size of both models (see my criticism to experiments below). - The new proposed model can be used only with a small number of dimensions because of the curse of dimensionality imposed by the core tensor C. - (major) I think the proposed TW model is equivalent to TR by noting that, if the core tensor C is represented by a TR (this can be done always), then by fusing this TR with the cores G_n we can reach to TR representation equivalent to the former TW model. I would have liked to see this analysis in the paper and a discussion justifying TW over TR. - (major) Comparison against other models in the experiments are unclear. The value of the used ranks for all the models are omitted which make not possible a fair comparison. To show the superiority of TW over TT and TR, the authors must compare the tensor completion results for all the models but having the same number of model parameters. The number of model parameters can be computed by adding the number of entries of all core tensors for each model (see my question about experiment settings below). - (minor) The title should include the term “tensor completion” because that is the only application of the new model that is presented in the paper. - (minor) The absolute value operation in the definition of the Frobenius norm in line 77 is not needed because tensor entries are real numbers. - (minor) I don’t agree with the statement in line 163: “Apparently, the O(NIR^3+R^N) scales exponentially”. The exponential grow is not apparent, it is a fact.
I updated my scores after rebuttal. See my comments below
Yes, the authors have stated that the main limitation of their proposed model is the exponential growth of its model parameters with the number of dimensions. | - (minor) The absolute value operation in the definition of the Frobenius norm in line 77 is not needed because tensor entries are real numbers.
NIPS_2019_629 | NIPS_2019 | - To my opinion, the setting and the algorithm lack a bit of originality and might seem as incremental combinations of methods of graph labelings prediction and online learning in a switching environment. Yet, the algorithm for graph labelings is efficient, new and seem different from the existing ones. - Lower bounds and optimality of the results are not discussed. In the conclusion section, it is asked whether the loglog(T) can be removed. Does this mean that up to this term the bounds are tight? I would like more discussions on this. More comparison with existing upper-bounds and lower-bound without switches could be made for instance. In addition, this could be interesting to plot the upper-bound on the experiments, to see how tight is the analysis. Other comments: - Only bounds in expectation are provided. Would it be possible to get high-probability bounds? For instance by using ensemble methods as performed in the experiments. Some measure about the robustness could be added to the experiments (such as error bars or standard deviation) in addition to the mean error. - When reading the introduction, I thought that the labels were adversarially chosen by an adaptive adversary. It seems that the analysis is only valid when all labels are chosen in advance by an oblivious adversary. Am I right? This should maybe be clarified. - This paper deals with many graph notions and it is a bit hard to get into it but the writing is generally good though more details could sometimes be provided (definition of the resistance distance, more explanations on Alg. 1 with brief sentences defining A_t, Y_t,...). - How was alpha tuned in the experiments (as 1/(t+1) or optimally)? - Some possible extensions could be discussed (are they straightforward?): directed or weighted graph, regression problem (e.g, to predict the number of bikes in your experiment)... Typo: l 268: the sum should start at 1 | - Only bounds in expectation are provided. Would it be possible to get high-probability bounds? For instance by using ensemble methods as performed in the experiments. Some measure about the robustness could be added to the experiments (such as error bars or standard deviation) in addition to the mean error. |
ICLR_2021_1014 | ICLR_2021 | - I am not an expert in the area of pruning. I think this motivation is quite good, but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on the target device, the memory consumption during inference time and the actual network size. - The performance is only compared with a few methods. And the proposed method is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation.
I am willing to change my rating according to the feedback from the authors and the comments from other reviewers. | - I am not an expert in the area of pruning. I think this motivation is quite good, but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on the target device, the memory consumption during inference time and the actual network size.
NIPS_2016_238 | NIPS_2016 | - My biggest concern with this paper is the fact that it motivates "diversity" extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be re-written. - Diversity should be toned down. - line 108, the first "f" should be "g" in "we fixed the form of .." - extra "." in the middle of a sentence in line 115. One Question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks had converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance. | - My biggest concern with this paper is the fact that it motivates "diversity" extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity.
z69tlSxAwf | EMNLP_2023 | 1. It remains unclear how catastrophic forgetting exerts a strong influence on novel slot detection.
2. The paper does not study large language models, which may be the current SOTA models for novel slot detection and their effective usage in dialogue context.
3. The method is a little bit complex and hard to follow, e.g., how does the method implement the final effect in Figure?
The proposed method may not be easy to implement in real-world scenarios.
4. Some experiments are missing, e.g., contrastive learning and adversarial learning.
5. The compared baselines are few, while the proposed method is claimed to be a SOTA model. | 4. Some experiments are missing, e.g., contrastive learning and adversarial learning.
NIPS_2019_933 | NIPS_2019 | + I liked the simplicity of the solution to divide the problem into star graphs. The domination number introduced seems to be a natural quantity for this problem. +/- To my opinion, the setting seems somewhat contrived combining feedback graphs and switching costs. The application to policy regret with counterfactual however provides a convincing example that the analysis can be useful and inspire future work. +/- The main part of the paper is rather clear and well written. Yet, I found the proofs in the appendices sometimes a bit hard to follow with sequences of unexplained equations. I would suggest to had some details. - There is a gap between the lower bound and the upper-bound (\sqrt(\beta) instead of \beta^{1/3}). In particular, for some graphs, the existing bound with the independence number may be better. This is also true for the results on the adaptive adversary and the counterfactual feedback. Other remarks: - Was the domination number already introduced for feedback graphs without switching costs? If yes, existing results for this problem should be cited. If not, it would be interesting to state what kind of results your analysis would provide without using the mini-batches. - Note that the length of the mini-batches tau_t may be non-integers. This should be clarified to be sure there are no side effects. For instance, what happens if $\tau_t << 1$? I am not sure if the analysis is still valid. - A better (more formal) definition of the independence and the domination numbers should be provided. It took me some time to understand their meaning. - Alg 1 and Thm 3.1: Since only upper-bounds on the pseudo-regret are provided, the exploration parameter gamma seems to be useless, isn't it? The choice gamma=0 seems to be optimal. A remark on high-probability upper-bounds and the role of gamma might be interesting. In particular, do you think your analysis (which is heavily based on expectations) can be extended to high-probability bounds on the regret? - I understand that this does not suit the analysis (which uses the equivalence in expectation btw Alg1 and Alg6) but it seems to be suboptimal (at least in practice) to discard all the feedbacks obtained while playing non-revealing actions. It would be nice to have practical experiments to understand better if we lose something here. It would be also nice to compare it with existing algorithms. Typos: - p2, l86: too many )) - Thm 3.1: A constant 2 in the number of switches is missing. - p13, l457: some notations seem to be undefined (w_t, W_t). - p14, you may add a remark - p15, l458: the number of switches can be upper-bounded by **twice** the number of times the revealing action is played - p16, l514: I did not understand why Thm 3.1 implies the condition of Thm C.5 with alpha=1/2 and not 1. By the way, (rho_t) should be non-decreasing for this condition to hold. | - There is a gap between the lower bound and the upper-bound (\sqrt(\beta) instead of \beta^{1/3}). In particular, for some graphs, the existing bound with the independence number may be better. This is also true for the results on the adaptive adversary and the counterfactual feedback. Other remarks: |
kRjLBXWn1T | ICLR_2025 | 1. I find that separating Theorem 3.3 into parts A and B is tangential to the story and overly confusing. In reality, we do not have full control over our correction network and its Lipschitz constant. Therefore, we can never determine the best scheduling. This section seems like it's being theoretical for its own sake! It might be clearer to simply present Lemma A.2 of the appendix in its most general form:
$$W_2(p^b_t, p^f_t) \le W_2(p^b_{t_0}, p^f_{t_0}) \cdot e^{L(t-t_0)}$$
and say that improving the Wasserstein distance on the RHS for $t_0$ can effectively bound the Wasserstein distance on the LHS, especially for $t$ that is sufficiently close to $t_0$. I don't think A, B, and examples 3.4 and 3.5 are particularly insightful when it is not directly factored into the decisions made in the experiments. The results in A and B can still be included, but in the appendix.
2. The parallel sampling section seems slightly oversold! To my understanding, while both forward passes can be done in parallel, it cannot be done in one batch because the forward call is on different methods. Could you please provide a time comparison between parallel and serial sampling on one experiment with the hardware that you have?
3. The statement of Lemma 3.6 seems to spill over to the rest of the main text and I generally do not agree with the base assumption that $p_t^f = p^b_{t, \sigma}$ which is the main driver for Lemma 3.6. Please let me know if I am misunderstanding this!
4. I don't find the comparison between this method and Dai and Wipf [B] appropriate! [B] trains a VAE on VAE to fix problems associated with the dimensionality mismatch between the data manifold and the manifold induced by the (first) VAE. That is not a concern in flow-matching and diffusion models as these models are known not to suffer from the manifold mismatch difficulties as much.
5. Although FIDs are still being widely used for evaluation, there have been clear flaws associated with them and the simplistic Inception network [C]. Please use DinoV2 Frechet Distances for the comparisons from [C], in addition to the widely used FID metric.
6. Please also provide evaluations "matching" the same NFEs in the corresponding non-corrected models.
### Minor points
1. I personally do not agree with the notation abuse of rewriting the conditional probability flow $p_t(x | z)$ as the marginal probability flow $p_t(x)$; it is highly confusing in my opinion.
2. Rather than introducing the new probability flows $\nu_t$ and $\mu_t$, in theorem 3.3, please consider using the same $p^b_t$ and $p^f_t$ for reduced notation overhead, and then restate the theorem in full formality for the appendix.
3. (nitpick) In Eq. (8), $t$ should be a sub-index of $u$. | 5. Although FIDs are still being widely used for evaluation, there have been clear flaws associated with them and the simplistic Inception network [C]. Please use DinoV2 Frechet Distances for the comparisons from [C], in addition to the widely used FID metric. |
GFgPmhLVhC | EMNLP_2023 | 1. Novelty seems incremental to me. What are the ways in which this paper differs from https://aclanthology.org/2021.findings-acl.57.pdf? Is it just applying a very similar methodology to a new task?
2. Performance gains seem small. There should be a p-test or at least confidence intervals to check statistical significance. | 1. Novelty seems incremental to me. What are the ways in which this paper differs from https://aclanthology.org/2021.findings-acl.57.pdf? Is it just applying a very similar methodology to a new task?
ICLR_2023_3918 | ICLR_2023 | - The evaluation results reported in Table 1 are based on only three trials for each case. While this is fine, statistically this is not significant, and thus it does not make sense to report the deviations. That is why, in many cases, the deviation is 0. Due to this reason, statements such as "our performance is at least two standard deviation better than the next best baseline" do not make sense. - In the reported ablation studies in Table 2, for CUB and SOP datasets, the complete loss function performed even worse than those with some terms missing. That does not appear to make sense. Why? | - In the reported ablation studies in Table 2, for CUB and SOP datasets, the complete loss function performed even worse than those with some terms missing. That does not appear to make sense. Why?
aGH43rjoe4 | ICLR_2024 | I do have several queries/concerns, however:
- **a. Fixed time horizon**: The use of an MLP to convert the per-timestep embeddings into per-sequence Fourier coefficients means that you can only consider fixed-length sequences. This seems to me to be a real limitation, since often neural/behavioral data – especially naturalistic behavior – is not of a fixed length. This could be remedied by using an RNN or neural process in place of the MLP, so this is not catastrophic as far as I can tell. However, I at least expect to see this noted as a limitation of the method, and, preferably, substitute in an RNN or neural process for the MLP in one of the examples, just to concretely demonstrate that this is not a fundamental limitation.
- **b. Hidden hyperparameters and scaling issues**: Is there a problem if the losses/likelihoods from the channels are “unbalanced”? E.g. if the behavioral data is 1080p video footage, and you have say 5 EEG channels, then a model with limited capacity may just ignore the EEG data. This is not mentioned anywhere. I think this can be hacked by including a $\lambda$ multiplier on the first term of (6) or raising one of the loss terms to some power (under some sensible regularization), trading off the losses incurred by each channel and making sure the model pays attention to all the data. I am not 100% sure about this though. Please can the authors comment.
- **c. Missing experiments**: There are a couple of experiments/baselines that I think should be added.
- Firstly, in Figure 3, I'd like to see a model that uses the data independently to estimate the latent states and reconstruction. It seems unfair to compare multimodal methods to methods that use just one channel. I’m not 100% sure what this would look like, but an acceptable baseline would be averaging the predictions of image-only and neuron-only models (co-trained with this loss). At least then all models have access to the same data, and it is your novel structure that is increasing the performance.
- Secondly, I would like to see an experiment sweeping over the number of observed neurons in the MNIST experiment. If you have just one neuron, then performance of MM-GP-VAE should be basically equivalent to GP-VAE. If you have 1,000,000 neurons, then you should have near-perfect latent imputations (for a sufficiently large model), which can be attributed solely to the neural module. This should be a relatively easy experiment to add and is a good sanity check.
- Finally, and similarly to above, I'd like to see an experiment where the image is occluded (half of the image is randomly blacked out). This (a) simulates the irregularity that is often present in neural/behavioral data (e.g. keypoint detection failed for some mice in some frames), and (b) would allow us to inspect the long-range "inference" capacity of the model, as opposed to a nearly-supervised reconstruction task.
Again, these should be reasonably easy experiments to run. I’d expect to see all of these experiments included in a final version (unless the authors can convince me otherwise).
- **d. Slightly lacking analysis**: This is not a deal-breaker for me, but the analysis of the inferred latents is somewhat lacking. I’d like to see some more incisive analysis of what the individual and shared features pull out of the data – are there shared latent states that indicate “speed”, or is this confined to the individual behavioral latent? Could we decode a stimulus type from the continuous latent states? How does decoding accuracy from each of the three different $z$ terms differ? etc. I think this sort of analysis is the point of training and deploying models like this, and so I was disappointed to not see any attempt at such an analysis. This would just help drive home the benefits of the method.
### Minor weaknesses / typographical errors:
1. Page 3: why are $\mu_{\psi}$ and $\sigma_{\psi}^2$ indexed by $\psi$? These are variational posteriors and are a function of the data; whereas $\psi$ are static model parameters.
2. Use \citet{} for textual citations (e.g. “GP-VAE, see (Casale et al., 2018).” -> “GP-VAE, see Casale et al. (2018).”)
3. The discussion of existing work is incredibly limited (basically two citations). There is a plethora of work out there tackling computational ethology/neural data analysis/interpretable methods. This notably weakens the paper in my opinion, because it paints a bit of an incomplete picture of the field, and actually obfuscates why this method is so appealing! I expect to see a much more thorough literature review in any final version.
4. Text in Figure 5 is illegible.
5. Only proper nouns should be capitalized (c.f. Pg 2 “Gaussian Process” -> “Gaussian process”), and all proper nouns should be capitalized (c.f. Pg 7 “figure 4(c)”).
6. Figure 1(a): Is there a sampling step to obtain $\tilde{\mu}$ and $\tilde{\sigma}^2$? This sampling step should be added, because right now it looks like a deterministic map.
7. I think “truncate” is more standard than “prune” for omitting higher-frequency Fourier terms.
8. I find the use of “A” and “B” very confusing – the fact that A is Behaviour, and B is Neural? I’m not sure what better terms are. I would suggest B for Behavioural – and then maybe A for neural? Or A for (what is currently referred to as) behavioral, but be consistent (sometimes you call it “other”) and refer to it as Auxiliary or Alternative data, and then B is “Brain” data or something.
9. The weakest section in terms of writing is Section 3. The prose in there could do with some tightening. (It’s not terrible, but it’s not as polished as the rest of the text).
10. Use backticks for quotes (e.g. 'behavioral modality' -> ``behavioral modality''). | - Finally, and similarly to above, I'd like to see an experiment where the image is occluded (half of the image is randomly blacked out). This (a) simulates the irregularity that is often present in neural/behavioral data (e.g. keypoint detection failed for some mice in some frames), and (b) would allow us to inspect the long-range "inference" capacity of the model, as opposed to a nearly-supervised reconstruction task. Again, these should be reasonably easy experiments to run. I'd expect to see all of these experiments included in a final version (unless the authors can convince me otherwise).
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting.Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model which is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e. provide a better than state of the art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if indeed modeling accuracy is the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1. The authors mention the GLM model of Pillow et al. which is pretty much state of the art, but a central point in that paper was that coupling filters between neurons are very important for the accuracy of the model. These coupling filters are omitted here which makes the comparison slightly unfair. I would strongly suggest comparing to a GLM with coupling filters. Furthermore, I suggest presenting data (like correlation coefficients) from previous studies to make sure the comparison is fair and in line with previous literature. 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by pillow et al. did not crop the image but used L1 regularization for the filters and a low rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: 1. Please define the dashed lines in fig. 2A-B and 4B. 2. Why is the training correlation increasing with the amount of training data for the cutout LN model (fig. 4A)? 3. I think figure 6C is a bit awkward, it implies negative rates, which is not the case, I would suggest using a second y-axis or another visualization which is more physically accurate. 4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirnakis et al. Nature 1997. | 3. I think figure 6C is a bit awkward, it implies negative rates, which is not the case, I would suggest using a second y-axis or another visualization which is more physically accurate. |
NIPS_2018_396 | NIPS_2018 | 1. The only difference between SimplE and CP seems to be that SimplE further uses inverse triples. The technical contributions of this paper are quite limited. 2. Introducing inverse triples might also be used in other embedding models besides CP. But the authors did not test such cases in their experiments. 3. It would be better if the authors could further explain why such a simple step (introducing inverse triples) can improve the CP model that much. | 2. Introducing inverse triples might also be used in other embedding models besides CP. But the authors did not test such cases in their experiments. |
ICLR_2022_2671 | ICLR_2022 | , starting from the most critical ones.
Unclear Threat Model. The threat model is provided in Section 4.1, and the authors (implicitly) use the general notation of goal, capabilities, knowledge (the strategy is explained later). However, the capabilities of the attacker are not well defined. Specifically, the paper says that the attacker can control a fraction (α) of the M clients. Hence, I ask: what can the attacker do with such clients? Can they perform any manipulation? Are some of the features not modifiable? Is such an attacker subject to realistic constraints or feature dependencies? Is the perturbation magnitude bounded? I am referring to the well-known issue of 'problem vs feature space' attacks [B], because real attackers are subject to many real world constraints (and especially in networked systems [C, D, E]) and not all adversarial perturbations may be physically realizable ([F]). The authors should elucidate this issue, because it could differentiate between "fictional" and "practical" attacks, therefore defining whether the proposed method is applicable to solve real world problems. Moreover, the following is unclear:
• “There are α
fraction of agents that are malicious and total β
fraction of instances with backdoor trigger across all malicious agents.” I am unable to understand the relationship between α and β
. Is β
a subset of α ?
• From Equation 3 onwards, is h^{j} meant to denote h j
? Is there a need for the braces, or is it a typo?
• Please differentiate from h b e n i g n and h B
. The current notation is very confusing
Tradeoff. A common problem in adversarial ML countermeasures is that they may degrade baseline performance [G, H]. Hence, I am interested in knowing how the proposed method responds when there are no “malicious” clients (or when such clients behave legitimately). In this paper, I am unable to determine what is the baseline performance of the models in “non-adversarial” settings. Does such performance degrade after RVFR is applied? How does RVFR compare to previous works in these circumstances? Even if the baseline performance does not decrease, what is the overhead of the proposed RVFR with respect to past defenses? Figure 3 and 2 only show results for adversarial scenarios. In real circumstances, a defense should have some practical utility. Note that I would not reject the paper even if RVFR does have a significant “cost”. However, such cost must be known.
Very poor Introduction and Abstract. The Introduction fails to provide a concrete justification and enough context for the considered problem. Let me list all the issues I encountered while reading the introductory part of the paper:
• This statement in the abstract is vague: “However, unlike the standard horizontal federated learning, improving the robustness of robust VFL remains challenging.”. Specifically, why is it challenging? Just name a few reasons.
• This statement in the abstract is unclear: “ensure that with a low-rank feature subspace, a small number of attacked samples, and other mild assumptions”. What does this mean? The abstract should be more high-level. Such technicalities are not necessary.
• Please be consistent. In the Introduction: “In FL, a central server coordinates with multiple agents/clients”. Either use “client” or “agent”.
• I suggest mentioning [A] for a practical, recent and useful application of FL in a real world problem.
• This example is unclear: “In VFL, different agents hold different parts of features for the same set of training data. For example, in VFL for credit score prediction, Agent 1 may have the banking data of a user and Agent 2 may have the credit history of the same user, while the server holds corresponding labels.”. To me, it appears that Agent 1 and Agent 2 have different data, and hence represents a HFL problem (and I still do not understand the necessity of the last statement involving the ‘label’). Perhaps the authors should provide a visual example that better explains the difference between HFL and VFL.
• The Introduction still suffers from the same problem as the abstract. "However, it is challenging to defend against malicious attacks in VFL.". Why? It is very annoying to read the abstract, not get an answer to such a question, and then re-read the same concept in the Introduction and not find an answer even there. The impression is that the authors are trying to make the problem more difficult than it currently is: if it is challenging, it should be clear.
• In the Introduction: “The fraction of malicious agents is relatively small”. Can you define such fraction? Does it have an upper boundary?
• The term “backdoor attacks” is never mentioned in the Introduction until the “contribution” paragraph. Such term should be better contextualized: not all poisoning attacks are backdoor attacks.
• I had to reach the Related Work section to understand why poisoning attacks in VFL are “challenging”. However, the motivation provided by the authors is confusing—to say the least. According to the paper, the challenge is “Backdoor attack against VFL is challenging since in the default setting the agent does not have the label information.” First, what is challenging exactly: the attack, or the defense? Second, what is such “default setting”? Third, why does the agent not have the label information? I believe the latter is due to the (poor) explanation provided in the early example in the introduction.
Minor issues:
• Please use the term “stage” (or “phase”) and not “time” to differentiate between training and testing/inference.
• Please add some text between Section 6 and 6.1
• In Section 6.1, the authors state “We study the classification task on two datasets: NUS-WIDE and CIFAR-10. Following (Liu et al., 2020), which proposed the backdoor attack against VFL, we use NUS-WIDE dataset to evaluate our defense.”. Does it mean that the defense is only evaluated on NUS-WIDE?
• In Section 6.2, the authors state “The noise variance is 0.05 for NUS-WIDE and 0.0001 for CIFAR-10 to preserve reasonable utility of the model.” Please define what “utility” means in this statement.
• Figure 2: please maintain the same range for the y-axis. It varies not only among the rows, but also among the columns.
• Figure 3 and Figure 2: the range of the y-axis should be the same for both figures.
• Use the same term to refer to figures. If the figures are named “Figures”, then use “Figures” in the text, and not “Fig.”
• What is the difference between “Backdoor Accuracy” and “Clean Accuracy” in Figures 2 and 3?
EXTERNAL REFERENCES
[A]: "Federated learning for predicting clinical outcomes in patients with COVID-19." Nature medicine (2021): 1-9.
[B]: "Intriguing properties of adversarial ml attacks in the problem space." 2020 IEEE Symposium on Security and Privacy (SP). IEEE, 2020.
[C]: "Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems." ACM Digital Threats: Research and Practice. 2021.
[D]: "Constrained concealment attacks against reconstruction-based anomaly detectors in industrial control systems." ACM Annual Computer Security Applications Conference. 2020.
[E]: "Resilient networked AC microgrids under unbounded cyber attacks." IEEE Transactions on Smart Grid 11.5 (2020): 3785-3794.
[F]: "Improving robustness of ML classifiers against realizable evasion attacks using conserved features." 28th USENIX Security Symposium (USENIX Security 19). 2019.
[G]: "Adversarial example defense: Ensembles of weak defenses are not strong." 11th USENIX Workshop on Offensive Technologies (WOOT 17). 2017.
[H]: "Deep reinforcement adversarial learning against botnet evasion attacks." IEEE Transactions on Network and Service Management 17.4 (2020): 1975-1987. | • This statement in the abstract is unclear: “ensure that with a low-rank feature subspace, a small number of attacked samples, and other mild assumptions”. What does this mean? The abstract should be more high-level. Such technicalities are not necessary. |
NIPS_2022_1971 | NIPS_2022 | Weakness: No significant weaknesses. Two minor comments/suggestions: 1) It seems that there is still room to improve the complexity of Algorithm 2; 2) some numerical experiments comparing the proposed algorithm with non-adaptive benchmarks (showing the trade-off of adaptivity and performance) could be interesting.
The authors adequately addressed the limitations and potential negative societal impact of their work. | 1) It seems that there is still room to improve the complexity of Algorithm 2;
XvA1Mn9OFy | ICLR_2025 | 1. While I agree that the performance gains in table 1 illustrate that GAM > SAM > SGD, the relative gains of GAM over SAM seem relatively small.
2. It would be nice to see some results in other modalities (e.g., maybe some language related tasks. Aside: for language related tasks, people care about OOD performance as well, so maybe expected test loss is not as meaningful?) | 2. It would be nice to see some results in other modalities (e.g., maybe some language related tasks. Aside: for language related tasks, people care about OOD performance as well, so maybe expected test loss is not as meaningful?) |
ICLR_2022_1617 | ICLR_2022 | Weakness: 1. The motivation of this work should be further justified. In few-shot learning, we usually consider how to leverage a few instances to learn a generalizable model. This paper defines and creates a few-shot situation for graph link prediction, but the proposed method does not consider how to effectively use “few-shot” and how to guarantee the trained model can be generalized well to new tasks with 0/few training steps. 2. The definition of “domain” in this paper is unclear. For instance, why select multiple domains from the same single graph in ogbn-products? Should we consider the selected domains as “different domains”? 3. The application of adversarial learning in few-shot learning is confusing. Adversarial learning in domain adaptation aims to learn domain-invariant representations, but why do we need such kind of representation in few-shot learning? | 1. The motivation of this work should be further justified. In few-shot learning, we usually consider how to leverage a few instances to learn a generalizable model. This paper defines and creates a few-shot situation for graph link prediction, but the proposed method does not consider how to effectively use “few-shot” and how to guarantee the trained model can be generalized well to new tasks with 0/few training steps. |
ICLR_2021_665 | ICLR_2021 | Weakness
1 The way of using GP is kind of straightforward and naive. In the GP community, dynamical modeling has been widely investigated, starting with the Gaussian Process Dynamical Model in NIPS 2005.
2 I do not quite get the modules of LSTM Frame Generation and GP Frame Generation in Eq. (4). Where are these modules in Fig. 3? The D in Stage 3? Using GP to generate images? Does it make sense? GP is more suitable to work in the latent space, isn't it?
3 The datasets are not quite representative, due to the simple and experimental scenarios. Moreover, the proposed method is like a fundamental work. But is it useful for high-level research topics, e.g., large-scale action recognition, video captioning, etc.? | 1 The way of using GP is kind of straightforward and naive. In the GP community, dynamical modeling has been widely investigated, starting with the Gaussian Process Dynamical Model in NIPS 2005.
NIPS_2019_387 | NIPS_2019 | - The main weakness is empirical---scratchGAN appreciably underperforms an MLE model in terms of LM score and reverse LM score. Further, samples from Table 7 are ungrammatical and incoherent, especially when compared to the (relatively) coherent MLE samples. - I find this statement in the supplemental section D.4 questionable: "Interestingly, we found that smaller architectures are necessary for LM compared to the GAN model, in order to avoid overfitting". This is not at all the case in my experience (e.g. Zaremba et al. 2014 train 1500-dimensional LSTMs on PTB!), which suggests that the baseline models are not properly regularized. D.4 mentions that dropout is applied to the embeddings. Are they also applied to the hidden states? - There is no comparison against existing text GANs, many of which have open source implementations. While SeqGAN is mentioned, they do not test it with the pretrained version. - Some natural ablation studies are missing: e.g. how does scratchGAN do if you *do* pretrain? This seems like a crucial baseline to have, especially since the central argument against pretraining is that MLE-pretraining ultimately results in models that are not too far from the original model. Minor comments and questions: - Note that since ScratchGAN still uses pretrained embeddings, it is not truly trained from "scratch". (Though Figure 3 makes it clear that pretrained embeddings have little impact). - I think the authors risk overclaiming when they write "Existing language GANs... have shown little to no performance improvements over traditional language models", when it is clear that ScratchGAN underperforms a language model across various metrics (e.g. reverse LM). | - I find this statement in the supplemental section D.4 questionable: "Interestingly, we found that smaller architectures are necessary for LM compared to the GAN model, in order to avoid overfitting". This is not at all the case in my experience (e.g. Zaremba et al. 2014 train 1500-dimensional LSTMs on PTB!), which suggests that the baseline models are not properly regularized. D.4 mentions that dropout is applied to the embeddings. Are they also applied to the hidden states?
Ugs2W5XFFo | ICLR_2025 | 1. Note that the paper mainly focuses on SD-based (SD 2.1, SDXL) models. These models are mostly the same styles, e.g., similar network structures and traditional denoising training strategies. Is there any possibility that the MI tuning incorporated with flow-based models like DiT-based models (SD3, Pixart series or so). And it is interesting to see if the proposed MI tuning behaves different with different types of models.
2. The evaluations on MI mainly focus on simple semantic concepts like color, shape, and texture. Is MI-tuning sensitive to, for example, object counts?
3. The paper fixes the number of denoising steps to 50 when generating an image; are there any differences in the performance of MI-tuning when using step counts other than 50?
4. In the quantitative analysis of Sect. 3.1, the paper mentions that the point-wise MI ranks images and selects the 1st, 25th, and 50th as the representative images. Why are these three images representative? This needs a more detailed explanation. Also, the reason for this selection needs quantitative analysis.
5. Some of the ablations mentioned in previous sections are hard to locate in the following content; the writing can be improved in this part. | 5. Some of the ablations mentioned in previous sections are hard to locate in the following content; the writing can be improved in this part.
NIPS_2016_395 | NIPS_2016 | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. 
I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness are significantly interesting and novel on their own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work. | 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness are significantly interesting and novel on their own. The experimental results in the appendix would be better in the main paper.
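As a back-of-envelope companion to technical comment 3) above, the snippet below evaluates the standard Gaussian-mechanism noise scale for a few assumed l2 sensitivities. This is only a hedged sketch: the sensitivities are placeholders, and the paper's actual calibration (which may depend on L, R, and K) could be different.

```python
# Quick check of Gaussian-mechanism noise scales (classic Dwork & Roth
# calibration, valid for epsilon <= 1). The l2 sensitivities tried below are
# assumptions, not the calibration used in the paper under review.
import math

def gaussian_mechanism_sigma(epsilon: float, delta: float, l2_sensitivity: float) -> float:
    """Noise std sufficient for (epsilon, delta)-DP with the classic Gaussian mechanism."""
    return l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

epsilon, delta = 1.0, 0.01
for sensitivity in (math.sqrt(2), math.sqrt(6), 10.0):  # assumed l2 sensitivities
    sigma = gaussian_mechanism_sigma(epsilon, delta, sensitivity)
    print(f"sensitivity={sensitivity:6.3f}  sigma={sigma:7.3f}  variance={sigma**2:9.3f}")
```

Whether a given variance is tolerable then depends on the scale of the tensor entries, which is precisely why concrete, non-asymptotic parameter settings (rather than big-O statements) would help justify the privacy application.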
ICLR_2023_598 | ICLR_2023 | Weakness: 1. The contribution of multilingual chain-of-thought is incremental compared to the vanilla chain-of-thought. 2. It is interesting to see the large gap between Translate-EN, EN-CoT and Native-CoT in MGSM, while the gaps on other benchmarks are not as large. Is it because the MGSM benchmark originated from translation? | 1. The contribution of multilingual chain-of-thought is incremental compared to the vanilla chain-of-thought.
NIPS_2020_195 | NIPS_2020 | My main concerns are: - Feeding the actual pose of one arm (master) and the relative pose of the second arm (slave) with respect to the master and similarly other objects would have been more informative for the network to capture the relational dependencies at the pose level. A baseline comparison with this method would be useful to understand the dependency structure, especially to improve the performance for the second task. Adding other baselines with state-of-the-art methods in the related work would further improve the understanding of the work. - The authors discuss a few examples with the position in table tasks, but the effect of orientation is not explained. Does the approach generalize with randomly sampled orientation of the target object ? Are the orientations normalized to unit quaternions after prediction ? The authors are encouraged to show orientation errors and quantify the performance. - Adding visual pixel information from pixels would help to establish the true merits of graph attention mechanism. - 29 percent accuracy with table assembly tasks is rather low. The Euclidean distance error units in Table 1 seem very high. Are they normalized to per datapoint position errors ? If so, an error of 5 cm with HDR-IL seems unreasonably high. - It is not clear if the proposed methodology is specific to bimanual manipulation. Just using robotic manipulation could be more appropriate. - Experiments with real setup would have been useful to establish the merits of the proposed approach. | - It is not clear if the proposed methodology is specific to bimanual manipulation. Just using robotic manipulation could be more appropriate. |
51gbtl2VxL | EMNLP_2023 | 1. The paper doesn't compare their models with more recent SOTAs [1-3], so it cannot get higher soundness.
2. You should provide the results on more datasets, such as Test2018.
3. You should provide the METEOR results, which are also reported in recent works [1-5].
4. Figure 5 is not clear; you should give more explanation about it.
[1] Yi Li, Rameswar Panda, Yoon Kim, Chun-Fu Richard Chen, Rogerio S Feris, David Cox, and Nuno Vasconcelos. 2022. VALHALLA: Visual Hallucination for Machine Translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5216–5226.
[2] Junjie Ye, Junjun Guo, Yan Xiang, Kaiwen Tan, and Zhengtao Yu. 2022. Noiserobust Cross-modal Interactive Learning with Text2Image Mask for Multi-modal Neural Machine Translation. In Proceedings of the 29th International Conference on Computational Linguistics. 5098–5108.
[3] Junjie Ye and Junjun Guo. 2022. Dual-level interactive multimodal-mixup encoder for multi-modal neural machine translation. Applied Intelligence 52, 12 (2022), 14194–14203.
[4] Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 6153–6166.
[5] Li B, Lv C, Zhou Z, et al. On Vision Features in Multimodal Machine Translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 6327-6337. | 3. You should provide the METEOR results, which are also reported in recent works [1-5].
huo8MqVH6t | ICLR_2025 | 1. This paper (Section 4) examines G-effects of each unlearning objective independently and in isolation to other learning objectives. Results are also shown and discussed in separate figures and parts of the paper. Studying G-effect of each learning objective in isolation, raises the concern regarding the comparability of G-effect values across various unlearning objectives and approaches.
- Why is the empirical analysis of each unlearning approach shown and discussed in separate parts of the paper?
- Are G-effect values comparable across different unlearning approaches, and if so, why?
- Can the proposed G-effect rank unlearning approaches?
2. Section 5 and its Table 1 provide a comprehensive comparison of various unlearning approaches using TOFU unlearning dataset for the removal of fictitious author profiles from LLMs finetuned on them. However, this comparison uses only existing metrics: forget quality, model utility, and PS-scores, and does not report the proposed G-effects.
- Why G-effects are missing in this section?
- How do G-effect values correlate with metrics presented in Table 1?
- Why are the order and ranking of unlearning objectives different across different removal and retention metrics?
3. G-effects need access to intermediate checkpoints during unlearning, especially given the pattern of values in, for example, Figure 3 (i.e., a peak and then values flat close to zero). How does this limit the applicability of the proposed metric?
4. The G-effect definition uses model checkpoints at different time steps and does not directly take into account the risk and unlearning of the initial model.
- Why does this make sense?
- Is this why you need to accumulate?
- what does the G-effect at each unlearning step mean?
- what does accumulation across unlearning steps mean?
- What does the peak in Figure 3 mean? Should we stop after that step to achieve effective unlearning? What would be the benefit of continuing? Is a zero G-effect value a limitation of your method?
5. Some of the claims are not completely supported. For example, the claim "In terms of the unlearning G-effects, it indicates that the unlearning strength of NPO is weaker; however, for the retaining G-effects, it suggests that NPO better preserves the model integrity." As an initial step, I would link it to numbers in Table 1.
6. Membership inference attacks (MIAs) are a common approach in the literature for evaluating the removal capability of unlearning approaches [MUSE]. However, this paper does not report the success of membership inference attacks. How does the unlearning G-effect compare to the success of MIA? Are they aligned?
[MUSE] Weijia Shi, Jaechan Lee, Yangsibo Huang, Sadhika Malladi, Jieyu Zhao, Ari Holtzman, Daogao Liu, Luke Zettlemoyer, Noah A. Smith, and Chiyuan Zhang. MUSE: Machine unlearning six-way evaluation for language models, 2024. | 1. This paper (Section 4) examines the G-effects of each unlearning objective independently and in isolation from other learning objectives. Results are also shown and discussed in separate figures and parts of the paper. Studying the G-effect of each learning objective in isolation raises concerns regarding the comparability of G-effect values across various unlearning objectives and approaches.
NIPS_2021_138 | NIPS_2021 | Weakness
Originality
+ Interesting and novel insights into the task difficulty for meta-learners. Although the idea of controlling episode difficulty is not very novel [47, 28], the authors go about it in a novel way and propose to use importance sampling as a faster, more efficient, and model-agnostic way of sampling episodes (a generic importance-sampling sketch is given below).
Quality
+ Solid evaluation and analysis. Testing over multiple models, algorithms, and benchmarks makes for a convincing set of results.
+ Confirmed assumptions through experimentation. The paper uses Q-Q plots and Shapiro-Wilk to verify the assumption that episode difficulty follows a normal distribution.
- The advantage of UNIFORM over other procedures is not consistent. The tables show that UNIFORM does not always offer a clear advantage over the results, especially in the 1-shot setting. Do the authors have a theory for why the method is not as effective in the 1-shot setting?
Clarity
+ Experiments are well designed, and the results are clear.
+ Paper is organized and well-written.
Significance
+ Interesting solution to a relevant problem. Episode difficulty is understudied in few-shot learning, and UNIFORM presents a model-agnostic solution to the problem of how to sample tasks during meta-learning.
Post-Rebuttal
I have read the rebuttal and other reviews, and I increase my original rating. This paper addresses an interesting and relevant issue of episode difficulty in meta-learning and will be a particularly valuable contribution at the conference. The paper presents a solid set of evaluations and analyses, verifying assumptions (e.g. via Q-Q plots and Shapiro-Wilk tests) and reporting results across multiple benchmarks. The paper presents a novel, simple but effective method that, for the most part, boosts algorithm performance by a couple of additional accuracy points on average. Therefore, I recommend this work for the conference with a score of 8. | - The advantage of UNIFORM over other procedures is not consistent. The tables show that UNIFORM does not always offer a clear advantage over the results, especially in the 1-shot setting. Do the authors have a theory for why the method is not as effective in the 1-shot setting? Clarity + Experiments are well designed, and the results are clear.
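As an illustration of the importance-sampling idea praised in the Originality bullet above, the sketch below reweights episode losses so that episodes drawn from the natural sampler behave as if difficulty were uniformly distributed. This is a generic, hedged sketch and not the algorithm of the paper under review; the Gaussian model of observed difficulties and the uniform target are assumptions.

```python
# Generic sketch: importance-weight episode losses so that episodes drawn from
# the "natural" sampler behave as if difficulty were sampled uniformly.
# Illustration of the general technique only, not the reviewed paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Recorded difficulties (e.g. validation loss) of past episodes.
difficulties = rng.normal(loc=1.0, scale=0.3, size=512)

# Proposal p(d): model the observed difficulty distribution as Gaussian (assumption).
mu, sigma = difficulties.mean(), difficulties.std()
proposal_pdf = lambda d: np.exp(-0.5 * ((d - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target q(d): uniform over a clipped difficulty range (assumption).
lo, hi = np.quantile(difficulties, [0.01, 0.99])
target_pdf = lambda d: np.where((d >= lo) & (d <= hi), 1.0 / (hi - lo), 0.0)

# Self-normalized importance weights w = q(d) / p(d) over a mini-batch.
batch_difficulty = difficulties[:32]
w = target_pdf(batch_difficulty) / np.clip(proposal_pdf(batch_difficulty), 1e-8, None)
w = w / w.sum()

episode_losses = rng.random(32)                     # stand-in for per-episode meta-losses
weighted_loss = float(np.sum(w * episode_losses))   # would be used in the meta-update
print(weighted_loss)
```

The same weights could equally be used to resample episodes rather than to reweight their losses; either way, no extra episodes need to be generated beyond what the natural sampler already produces.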
fkAKjbRvxj | EMNLP_2023 | 1. Have the author(s) thought about the reason why, information value is "stronger predictor" for dialogue(Complementarity in page 7 or discussion in page 8), is there any already existing linguistic theory which could explain it. If so, adding that will make this one a stronger paper.
2. It appears that a different information value serves as the strong predictor in each of the 5 chosen corpora; for example, in PROVO it is the syntactic information value. Again, have the author(s) already formed a potential hypothesis for this phenomenon? Is it practical to do a linguistic analysis on these 5 corpora to find the reason?
3. Is it possible that by increasing the set size of A_(x), the generated sentences sampled from "ancestral sampling" could have already covered some/all of the samples from other sampling methods, e.g., "temperature sampling"? As far as I know, both of the two mentioned sampling methods are based on the conditional probability, while typical sampling is a comparatively new sampling method which could cut off the dependence on the conditional probability to some degree and helps to generate more creative and diverse next sentences instead of entering a repetitive loop (Meister 2023); a small sketch contrasting temperature and typical sampling is given below.
Based on that, I would suggest the author(s) compile a statistical report on the distributions of generated sentences from different sampling methods and maybe then select just two representative sampling methods based on that observation. Also, a reference to temperature sampling is needed. | 1. Have the author(s) thought about the reason why information value is a "stronger predictor" for dialogue (Complementarity on page 7 or the discussion on page 8)? Is there any existing linguistic theory which could explain it? If so, adding that would make this a stronger paper.
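To make the distinction raised in comment 3 concrete, here is a minimal sketch of temperature sampling versus (locally) typical sampling on a toy next-token distribution. It is a generic illustration with assumed parameters, not the setup of the paper under review.

```python
# Temperature sampling rescales a next-token distribution; (locally) typical
# sampling keeps only tokens whose surprisal is close to the distribution's
# entropy (Meister et al.). Toy distribution, generic sketch.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.45, 0.25, 0.15, 0.10, 0.05])   # toy next-token distribution

def temperature_sample(p, T):
    logits = np.log(p) / T
    q = np.exp(logits - logits.max())
    q /= q.sum()
    return rng.choice(len(p), p=q)

def typical_sample(p, tau=0.9):
    entropy = -np.sum(p * np.log(p))
    deviation = np.abs(-np.log(p) - entropy)        # |surprisal - entropy|
    order = np.argsort(deviation)                   # most "typical" tokens first
    keep = order[: np.searchsorted(np.cumsum(p[order]), tau) + 1]
    q = np.zeros_like(p)
    q[keep] = p[keep] / p[keep].sum()
    return rng.choice(len(p), p=q)

print([temperature_sample(p, T=0.7) for _ in range(5)])
print([typical_sample(p, tau=0.9) for _ in range(5)])
```

Note that temperature sampling keeps the full support (it only rescales probabilities), so a large enough pool of ancestral samples can contain the same sentences; typical sampling instead truncates the support at each step, so its outputs are not simply a reweighted subset of ancestral samples.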
NIPS_2020_1857 | NIPS_2020 | - The paper is not particularly novel or exciting since it takes algorithms already applied in the field of semantic segmentation and applies them to the stereo depth estimation problem. The idea of using AutoML for stereo is not particularly novel either, as stated by the authors themselves, even if the proposed algorithm outperforms the previous proposal. - From my point of view the main reason to use AutoML approaches, besides improving raw performances, is extracting hints that can be reused in the design of new network architectures in the future. Unfortunately the authors did not spend much time commenting on these aspects. For example, what might be the biggest takeaways from the found architecture? - I would have liked to see an additional ablation study to better highlight the contribution of the proposed method with respect to AutoDispNet. The main differences with respect to the previously published work is the search performed also on the network level and the use of two separate feature and matching networks. Ablating the contribution of one and the other might have been interesting. - The evaluation on Middleburry should include for fairness a test of the found architecture running at quarter resolution to match the testing setup of all the other deep architecture. While it is true that the ability to run at higher resolution is an advantage of the proposed method there is nothing (besides hw limitation) preventing the other networks to run at higher resolution as well. Therefore I think that a fair comparison between networks running on the same test setup will improve the paper highlighting the contribution of the proposed method. - Some minor implementation details are missing from the paper, I will expand this point in my questions to the authors. | - From my point of view the main reason to use AutoML approaches, besides improving raw performances, is extracting hints that can be reused in the design of new network architectures in the future. Unfortunately the authors did not spend much time commenting on these aspects. For example, what might be the biggest takeaways from the found architecture? |
Kjs0mpGJwb | EMNLP_2023 | The experiments is somewhat weak.
1) The main contribution of this paper lies in the structure-aware encoder-based model for seed lexicon induction; there should be an experiment studying the quality of the seed lexicon.
2) While the paper focuses on bilingual lexicon induction, it would be beneficial to include a downstream task, such as cross-lingual Natural Language Inference (NLI), to demonstrate the potential impact of the proposed method on downstream applications. This would provide further insights into the effectiveness of the approach beyond the specific lexicon induction task. | 1) The main contribution of this paper lies in the structure-aware encoder-based model for seed lexicon induction; there should be an experiment studying the quality of the seed lexicon.
NIPS_2019_991 | NIPS_2019 | [Clarity] * What is the value of the c constant (MaxGapUCB algorithm) used in experiments? How was it determined? How does it impact the performance of MaxGapUCB? * The experiment results could be discussed more. For example, should we conclude from the Streetview experiment that MaxGapTop2UCB is better than the other ones? [Significance] * The real-world applications of this new problem setting are not clear. The authors mention applicability to sorting/ranking. It seems like this would require a recursive application of proposed algorithms to recover partial ordering. However, the procedure to find the upper bounds on gaps (Alg. 4) has complexity K^2, where K is the number of arms. How would that translate in computational complexity when solving a ranking problem? Minor details: * T_a(t) is used in Section 3.1, but only defined in Section 4. * The placement of Figure 2 is confusing. --------------------------------------------------------------------------- I have read the rebuttal. Though the theoretical contribution seems rather low given existing work on pure exploration, the authors have convinced me of the potential impacts of this work. | * T_a(t) is used in Section 3.1, but only defined in Section 4. |
NIPS_2022_1564 | NIPS_2022 | 1. The main part could be more concise (especially the introduction part) and include empirical results.
2. Given the newly introduced hyper-parameters, it is still not clear whether this newly proposed method is empirically useful. How should hyper-parameters be chosen in a more practical training setting?
3. The empirical evaluations do not well support their theoretical analysis. As the authors claim to run experiments with 24 A100 GPUs, all methods should be compared on a relatively large-scale training task. Only small linear-regression results are reported, where communication is not really an issue.
The paper discusses a new variant on a technique in distributed training. As far as I’m concerned, there is no serious issue or limitation that would impact society. | 1. The main part could be more concise (especially the introduction part) and include empirical results.
5VK1UulEbE | ICLR_2025 | * The contribution of the paper seems limited. According to equation (10), the core approach mainly involves applying a linear transformation to frequency components based on the matrix $\mathrm{S}$. It is unclear how $\mathrm{S}$ functions as a constraint to realize the key motivation of enhancing the weight of stable frequency components.
* In Definition 2, the authors introduce the concept of a Stable Frequency Subset $\mathcal{O}$ but do not provide a clear criterion for determining what qualifies as a stable frequency. Some notations, such as Stable Frequency Subset $\mathcal{O}$, and Theorem 1 presented in Section 2 are not well linked to the design of the method proposed in Section 3.
* The experimental evaluation lacks thoroughness, as it includes only three baseline models (Dlinear, PatchTST, and iTransformer). Although the authors mention other models, such as CrossFormer, they do not include it in their comparisons. Additionally, the Nonstationary Transformer [1] would also be a relevant model for further comparison.
* The empirical analysis in Figure 3 is confusing to the reviewer. Could the authors provide additional clarification on how the adjustments to the amplitudes of the input series and forecasting target, based on the Frequency Stability score, affect model prediction accuracy? Additionally, could you explain why these adjustments are effective in enhancing the model's performance? Both Equations (9) and (10) have large spacing from the preceding text.
[1] Liu, Yong, et al. "Non-stationary transformers: Exploring the stationarity in time series forecasting." Advances in Neural Information Processing Systems 35 (2022): 9881-9893. | * The empirical analysis in Figure 3 is confusing to the reviewer. Could the authors provide additional clarification on how the adjustments to the amplitudes of the input series and forecasting target, based on the Frequency Stability score, affect model prediction accuracy? Additionally, could you explain why these adjustments are effective in enhancing the model's performance? Both Equations (9) and (10) have large spacing from the preceding text. [1] Liu, Yong, et al. "Non-stationary transformers: Exploring the stationarity in time series forecasting." Advances in Neural Information Processing Systems 35 (2022): 9881-9893. |
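As a generic illustration of what "a linear transformation applied to frequency components" can look like (first bullet above), the sketch below reweights the rFFT coefficients of a series with a diagonal weight vector and transforms back. The weights and the low/high-frequency split are arbitrary placeholders, not the paper's matrix $\mathrm{S}$ or its stability criterion.

```python
# Generic sketch of a diagonal linear transformation in the frequency domain:
# take the rFFT of a series, scale each frequency bin, and invert. The weights
# below are placeholders, not the reviewed paper's learned matrix S.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(96)
series = np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)

spec = np.fft.rfft(series)                    # complex frequency components
weights = np.ones(spec.shape, dtype=float)    # identity transform by default
weights[:8] = 2.0                             # e.g. emphasize low ("stable") bins
weights[8:] = 0.5                             # and damp the rest

filtered = np.fft.irfft(spec * weights, n=series.size)
print(filtered[:5])
```

In this picture, a "stability" criterion would decide which bins get up-weighted; the sketch deliberately leaves that choice as an explicit placeholder.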
Ku1tUKnAnC | ICLR_2025 | 1. This approach requires enumerating all off-target concepts in order to construct the oracle probe for Z_e. However, this is a common assumption in the causal-inference-in-text literature, so not surprising.
2. A crux of this paper is the development of *good* oracle probes. However, there is very little analysis of how to evaluate the oracle probes (which are then used to evaluate causal probing methods).
3. Currently, this paper only considers binary concepts. This is a common assumption, so not very limiting. | 3. Currently, this paper only considers binary concepts. This is a common assumption, so not very limiting. |
NIPS_2022_2813 | NIPS_2022 | weakness (insight and contribution), my initial rating is borderline. Strengths:
+ The problem of adapting CLIP under few-shot setting is recent. Compared to the baseline method CoOp, the improvement of the proposed method is significant.
+ The ablation studies and analysis in Section 4.4 are well organized and clearly written. It is easy to follow the analysis and figure out the contribution of each component. Also, Figure 2 is well designed and clearly illustrates the pipeline.
+ The experimental analysis is comprehensive. The analysis on computation time and inference speed is also provided. Weakness:
- (major concern) The contribution is somewhat limited. The main contribution is applying optimal transport to the few-shot adaptation of CLIP. After reading the paper, it is not clear enough to me why Optimal Transport is better than other distances. In particular, the insight behind the application of Optimal Transport is not clear. I would like to see more analysis and explanation of why Optimal Transport works well (a generic Sinkhorn sketch of an OT distance between feature sets is given below). Otherwise, it seems that this work is just an application to a specific model and a specific task, which limits the contribution.
- The recent related work CoCoOp [1] is not compared in the experiments. Although it is a CVPR'22 work that is officially published after the NeurIPS deadline, as the extended version of CoOp, it is necessary to compare with CoCoOp in the experiments.
- In the description of the approach, a separate part or subsection introducing the inference strategy is missing, i.e., how to use the multiple prompts at the test stage.
- Table 2 mixes different ablation studies (number of prompts, visual feature map, constraint). It would be great if the table could be split into several tables according to the analyzed component.
- The visualization in Figure 4 is not clear. It is not easy to see the attention as it is transparent. References
[1] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In CVPR, 2022.
After reading the authors' response and the revised version, my concerns (especially the contribution of introducing the optimal transport distance for fine-tuning vision-language models) are well addressed and I am happy to increase my rating. | - The recent related work CoCoOp [1] is not compared in the experiments. Although it is a CVPR'22 work that is officially published after the NeurIPS deadline, as the extended version of CoOp, it is necessary to compare with CoCoOp in the experiments. |
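Regarding the major concern above about why an optimal transport distance should be preferred over a simpler one, the sketch below computes an entropic-regularized (Sinkhorn) OT distance between two small sets of features; unlike a single cosine distance between pooled features, it matches elements of the two sets under a transport plan. This is a generic illustration with assumed shapes and uniform marginals, not the paper's exact formulation.

```python
# Minimal entropic-regularized optimal transport (Sinkhorn) between two small
# sets of L2-normalized features, as one concrete instance of an "OT distance".
# Generic sketch; the reviewed paper's cost, marginals, and usage may differ.
import numpy as np

rng = np.random.default_rng(0)
prompts = rng.standard_normal((4, 16))    # e.g. 4 prompt features (assumed shape)
visual  = rng.standard_normal((9, 16))    # e.g. 9 visual patch features (assumed shape)
prompts /= np.linalg.norm(prompts, axis=1, keepdims=True)
visual  /= np.linalg.norm(visual, axis=1, keepdims=True)

cost = 1.0 - prompts @ visual.T           # cosine cost matrix, shape (4, 9)
a = np.full(4, 1 / 4)                     # uniform marginals
b = np.full(9, 1 / 9)

eps = 0.1                                 # entropic regularization strength
K = np.exp(-cost / eps)
u = np.ones(4)
for _ in range(200):                      # Sinkhorn iterations
    v = b / (K.T @ u)
    u = a / (K @ v)

plan = np.diag(u) @ K @ np.diag(v)        # transport plan: rows sum to a, cols to b
ot_distance = float(np.sum(plan * cost))
print(ot_distance)
```

One intuition for preferring such a distance is that the transport plan aligns individual prompt features with individual visual features rather than comparing pooled averages; whether that intuition actually explains the reported gains is exactly the open question raised in the major concern.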
PyJ78pUMEE | EMNLP_2023 | - There's over reliance on the LLM to trust the automated scoring with the knowledge that LLMs have their complex biases and sensitivity to prompts (and order).
- It is not clear how the method will perform on long conversations (the dialog datasets used for prompt and demonstration selection seem to contain short conversations)
- The writing of the paper could be simplified; the abstract is too long and does not convey the findings well.
- Fig. 1 could also be drawn better to show the processing pipeline (prompt generation and manual check, demonstration selection with ground-truth scores, and automatic scoring), along with showing where model training is used to optimize the selection modules. | - Fig. 1 could also be drawn better to show the processing pipeline (prompt generation and manual check, demonstration selection with ground-truth scores, and automatic scoring), along with showing where model training is used to optimize the selection modules.
NIPS_2020_1430 | NIPS_2020 | 1. The paper only studied the two-player situation; the performance of ReBeL in multi-player games is still to be confirmed. 2. There are not enough comparisons with other related methods in Section 2. 3. The authors only did experiments on two typical games; ReBeL's performance on more complex problems remains unclear, especially when the game has a bigger depth, which will cause huge inputs to the value and policy functions. 4. The theoretical proof only considered the two-player situation. | 3. The authors only did experiments on two typical games; ReBeL's performance on more complex problems remains unclear, especially when the game has a bigger depth, which will cause huge inputs to the value and policy functions.