paper_id (string, 10-19 chars) | venue (string, 15 classes) | focused_review (string, 7-10.2k chars) | point (string, 47-690 chars)
---|---|---|---|
ICLR_2021_1116 | ICLR_2021 | Weakness:
The presentation of the background section can be improved, as it is too dense with different notations. It may be a good idea to summarize these notations in a table.
Only considers 2 environments: 1 with a non-local reward and 1 with a long horizon and resonant frequencies.
I have the following questions and comments: Q1. What is the motivation for changing the frequency of the pendulum to 1.7 Hz instead of 0.5 Hz? Have you tried different settings of the pendulum environment (e.g., the original frequency of 0.5 Hz)? If so, I am curious how other methods perform compared to TDPO.
C1. It would be nice to have a discussion comparing the per-iteration computation cost of TDPO with that of other methods like TRPO and TD3.
C2. As shown in the experiments, TDPO does not work well in the common Gym environments; are there any changes to the algorithm design that could improve the current algorithm for these settings?
C3. From the way the method is presented, it seems that it is not simple to implement. It would be better to have a discussion of the implementation aspects of the method.
Small comments:
Section 1, first paragraph, line 6: there are two \delta_s
The definition of the KL divergence in terms of the Hilbert space inner product is missing a subscript?
Introduce the definition of the term h.o.t. before it is used, although it can be inferred from the context
Overall, I suggest a weak-accept decision based on the following reason:
Novelty in algorithm development with theoretical justification for the monotonic improvement.
Although the experiments indeed show a clear advantage of the proposed method over existing ones, more environments or more settings in the same environment should be presented to better evaluate its performance. | 1 with a non-local reward and 1 with a long horizon and resonant frequencies. I have the following questions and comments: |
NIPS_2017_330 | NIPS_2017 | - Section 4 is very tersely written (maybe due to limitations in space) and could have benefited from a slower development for an easier read.
- Issues of convergence, especially when applying gradient descent over a non-Euclidean space, are not addressed
In all, a rather thorough paper that derives an efficient way to compute gradients for optimization on LDSs modeled using extended subspaces and kernel-based similarity. On one hand, this leads to improvements over some competing methods. Yet, at its core, the paper avoids handling the harder topics, including convergence and any analysis of the proposed optimization scheme. Nonetheless, the derivation of the gradient computations is interesting by itself. Hence, my recommendation. | - Section 4 is very tersely written (maybe due to limitations in space) and could have benefited from a slower development for an easier read. |
NIPS_2017_65 | NIPS_2017 | 1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification
2) the optimization procedure used to solve the multi-objective optimization problem is not discussed in adequate detail
Detailed comments below:
Methods and Evaluation: The proposed objective is interesting and utilizes ideas from two well-studied lines of research, namely privileged learning and distribution matching, to build classifiers that can incorporate multiple notions of fairness. The authors also demonstrate how some of the existing methods for learning fair classifiers are special cases of their framework. It would have been good to discuss the goal of each of the terms in the objective in more detail in Section 3.3. The part that is probably the weakest in the entire discussion of the approach is the discussion of the optimization procedure. The authors state that there are different ways to optimize the multi-objective optimization problem they formulate without clearly mentioning which procedure they employ and why (in Section 3). There seems to be some discussion of this in the experiments section (first paragraph), and I think what was done is that the objective was first converted into an unconstrained optimization problem and then an optimal solution from the Pareto set was found using BFGS (a generic sketch of this scalarize-then-optimize procedure is given after this review). This discussion is still quite rudimentary, and it would be good to explain the pros and cons of this procedure w.r.t. other possible optimization procedures that could have been employed to optimize the objective.
The baselines used to compare the proposed approach and the evaluation in general seem a bit weak to me. Ideally, it would be good to employ baselines that learn fair classifiers based on different notions (e.g., Hardt et al. and Zafar et al.) and compare how well the proposed approach performs on each notion of fairness in comparison with the corresponding baseline that is designed to optimize for that notion. Furthermore, I am curious as to why k-fold cross-validation was not used in generating the results. Also, was the split between train and test set done randomly? And why are the proportions of train and test different for different datasets?
Clarity of Presentation:
The presentation is clear in general and the paper is readable. However, there are certain cases where the writing gets a bit choppy. Comments:
1. Lines 145-147 provide the reason behind x*_n being the concatenation of x_n and z_n. This is not very clear.
2. In Section 3.3, it would be good to discuss the goal of including each of the terms in the objective in the text clearly.
3. In Section 4, more details about the choice of train/test splits need to be provided (see above).
While this paper proposes a useful framework that can handle multiple notions of fairness, there is scope for improving it quite a bit in terms of its experimental evaluation and discussion of some of the technical details. | 1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification |
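The scalarize-then-BFGS procedure the reviewer infers above can be illustrated with a minimal, hypothetical sketch: the multi-objective problem is collapsed into one weighted objective and minimized with a quasi-Newton method. The two loss terms, the weights, and the parameter dimensionality below are invented placeholders, not the paper's actual objective.

```python
# Hypothetical sketch of weighted-sum scalarization followed by BFGS.
# The loss terms are toy stand-ins for an accuracy term and a fairness term.
import numpy as np
from scipy.optimize import minimize

def accuracy_loss(w):
    # placeholder for the classification loss term
    return np.sum((w - 1.0) ** 2)

def fairness_penalty(w):
    # placeholder for a distribution-matching / fairness term
    return np.sum((w + 0.5) ** 2)

def scalarized(w, lam):
    # weighted-sum scalarization collapses the two objectives into one
    return (1.0 - lam) * accuracy_loss(w) + lam * fairness_penalty(w)

w0 = np.zeros(5)
for lam in (0.1, 0.5, 0.9):
    res = minimize(scalarized, w0, args=(lam,), method="BFGS")
    # each lam yields one point on the Pareto front of the two objectives
    print(f"lam={lam}: w*={np.round(res.x, 3)}, objective={res.fun:.4f}")
```

Sweeping lam and re-solving is one simple way to trace the Pareto set; the reviewer's question about pros and cons applies to exactly this kind of choice (for instance, weighted sums cannot reach non-convex parts of a Pareto front).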
nE1l0vpQDP | ICLR_2025 | - Given the existing literature on the implicit bias of optimization methods, the primary concern is the significance of the results presented. For instance, the classic result by [Z. Ji and M. Telgarsky] demonstrates a convergence rate $\log\log n/\log n$ of GD to the L2-margin solution, which is faster than the rate shown in this submission. Moreover, [C. Zhang, D. Zou, and Y. Cao] have shown much faster rates for Adam converging to the L-infinity margin solution. This submission also lacks citations to these papers and other relevant works:
[Z. Ji and M. Telgarsky] The implicit bias of gradient descent on nonseparable data, COLT 2019.
[C. Zhang, D. Zou, and Y. Cao] The Implicit Bias of Adam on Separable Data. 2024.
[S. Xie and Z. Li] Implicit Bias of AdamW: l_\infty-Norm Constrained Optimization. ICML 2024
[M. Nacson, N. Srebro, and D. Soudry] Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. AISTATS 2019.
- Since AdaGrad-Norm has the same implicit bias as GD, the advantages of using AdaGrad-Norm over GD are unclear.
- The bounded noise assumption, while common, is somewhat restrictive in stochastic optimization literature. There have been several efforts to extend these noise conditions:
[A. Khaled and P. Richtárik] Better theory for SGD in the nonconvex world. TMLR 2023.
[R. Gower, O. Sebbouh, and N. Loizou] SGD for structured nonconvex functions: Learning rates, minibatching and interpolation. AISTATS 2021. | - The bounded noise assumption, while common, is somewhat restrictive in stochastic optimization literature. There have been several efforts to extend these noise conditions: [A. Khaled and P. Richtárik] Better theory for SGD in the nonconvex world. TMLR 2023. [R. Gower, O. Sebbouh, and N. Loizou] SGD for structured nonconvex functions: Learning rates, minibatching and interpolation. AISTATS 2021. |
ICLR_2021_2208 | ICLR_2021 | + Nice idea
Consistent improvements over cross entropy for hierarchical class structures
Improvements w.r.t. other competitors (though not consistent)
Good ablation study
The improvements are small
The novelty is not very significant
More comments:
Figure 1:
- It is not clear what distortion is at this stage
- It is not clear what perturbed MNIST is, and, correspondingly, why the error of a 3-layer CNN is so high (12-16% error is reported). CNNs with 2-3 layers can solve MNIST with accuracy higher than 99.5%.
- This figure cannot be presented on page 2 without proper definitions. It should be either presented on page 5, where the experiment is defined, or better explained
Page 4: It is said that s can be computed efficiently and that this is shown in the appendix, but the version I have does not have an appendix.
Page 6: the XE+EMD method is not presented in a comprehensible manner. 1) p_k symbols are used without definition (though I think these are the network predictions p(\hat{y}=k \mid I)); 2) the relation of the formula presented to the known EMD is not clear; the latter is a problem solved as linear programming or similar, and not a closed-form formula (a generic LP sketch is given after this review); 3) it is not clear what the role of \mu is and why it can be set to 3 irrespective of the scale of the metric D.
Page 7: The experiments show small but consistent improvements of the suggested method over standard cross entropy, and improvements versus most competitors in most cases.
I have read the reviews of others and the author's response. My main impression of the work remains as it was: that it is a nice idea with small but significant empirical success. However, my acquaintance with the previous literature on this subject is partial compared to that of other reviewers, so it may well be possible that they are in a better position than me to see the incremental nature of the proposed work. I therefore reduce the rating a bit, to become closer to the consensus. | 1) p_k symbols are used without definition (though I think these are the network predictions p(\hat{y}=k \mid I)); 2) the relation of the formula presented to the known EMD is not clear; the latter is a problem solved as linear programming or similar, and not a closed-form formula |
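The reviewer's remark above that the earth mover's distance is ordinarily the optimum of a transportation linear program, rather than a closed-form formula, can be made concrete with a generic sketch. The toy distributions and the |i - j| ground metric below are invented for illustration and are not the paper's XE+EMD formulation.

```python
# Generic EMD between two discrete distributions, posed as a transportation LP.
# Variables f_ij >= 0 move mass from bin i of p to bin j of q at cost D_ij.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.2, 0.5, 0.3])   # toy predicted distribution
q = np.array([0.0, 1.0, 0.0])   # toy one-hot target
n = len(p)
D = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)  # ground metric |i - j|

c = D.flatten()                  # objective: total transport cost sum_ij D_ij * f_ij
A_eq, b_eq = [], []
for i in range(n):               # row marginals: sum_j f_ij = p_i
    row = np.zeros(n * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row)
    b_eq.append(p[i])
for j in range(n):               # column marginals: sum_i f_ij = q_j
    col = np.zeros(n * n)
    col[j::n] = 1.0
    A_eq.append(col)
    b_eq.append(q[j])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), method="highs")
print("EMD(p, q) =", res.fun)    # 0.5 for this toy example
```

In the special one-dimensional case with this ground metric, the optimum also equals the sum of absolute CDF differences, which is the kind of simplification a closed-form training loss might rely on; in general, though, the distance is defined by the optimum of this LP.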
lGDmwb12Qq | ICLR_2025 | 1. I think the innovation of this paper is limited. The main improvement seems to come from taking the disparity range below 0 into consideration, eliminating the negative impact that this range has on distribution-based supervision schemes. But with a fixed extended disparity range, i.e., 16, I think it is hard to fit the distribution of the scenarios. Do I need to set a new extended range to fit the distribution range in a new scenario? I think this offset is highly scene-dependent.
2. I think the improvement of this method over SOTA methods such as IGEV is small. Does this mean that there is no multi-peak distribution problem in iterative optimization schemes similar to IGEV? I suggest that the author analyze the distribution of disparities produced by IGEV compared to other baselines to determine why the effect is not significantly improved on IGEV. And I have another concern. Currently, SOTA schemes are basically iterative frameworks similar to IGEV. Is it difficult for Sampling-Gaussian to significantly improve such frameworks? | 2. I think the improvement of this method over SOTA methods such as IGEV is small. Does this mean that there is no multi-peak distribution problem in iterative optimization schemes similar to IGEV? I suggest that the author analyze the distribution of disparities produced by IGEV compared to other baselines to determine why the effect is not significantly improved on IGEV. And I have another concern. Currently, SOTA schemes are basically iterative frameworks similar to IGEV. Is it difficult for Sampling-Gaussian to significantly improve such frameworks? |
ACL_2017_818_review | ACL_2017 | 1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea.
2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline.
- General Discussion: The paper needs quite a bit of work before it is ready for publication.
- Detailed comments:
026 five dimensions, not six
Figure 1, caption: "implies physical relations": how do you know which physical relations it implies?
Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please explicitly link to that literature?
Dowty, David. "Thematic proto-roles and argument selection." Language (1991): 547-619.
135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias?
141 "values" ==> "value"?
143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012.
Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. "Models of Semantic Representation with Visual Attributes." ACL (1). 2013.
146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is a long-standing task.
152 it is not clear what "grounded" means at this point
Section 2.1: why these dimensions, and how did you choose them?
177 explain terms "pre-condition" and "post-condition", and how they are relevant here
197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help.
Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level).
I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract.
248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them?
326 How do you know whether the frame is under- or over-generating?
Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions? Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which?
336 "with... PMI": something missing (threshold?)
371 did you do these partitions randomly?
376 "rate *the* general relationship"
378 "knowledge dimension we choose": ? (how do you choose which dimensions you will annotate for each frame?)
Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph?
More generally, at the beginning of section 4 you should give a higher level description of how your model works and why it is a good idea.
420 "both classes of knowledge": antecedent missing.
421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce earlier? In any case, I didn't understand their role.
461 "also"?
471 where do you get verb-level similarities from?
Figure 3: I find the figure totally unintelligible. Maybe if the text was clearer it would be interpretable, but maybe you can think whether you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions.
598 define term "message" and its role in the factor graph.
621 why do you need a "soft 1" instead of a hard 1?
647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline.
654 "more skimp seed knowledge": ?
659 here and in 681, problem with table reference (should be Table 2).
664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger".
681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2)
778 "latent in verbs": why don't you mention objects here?
781 "both tasks": antecedent missing
The references should be checked for format, e.g. Grice, Sorower et al. for capitalization, the VerbNet reference for bibliographic details. | 781 "both tasks": antecedent missing. The references should be checked for format, e.g. Grice, Sorower et al. for capitalization, the VerbNet reference for bibliographic details. |
4N97bz1sP6 | ICLR_2024 | 1. The authors should make clear the distinction between when the proposed method is trained using only weak supervision and when it is trained semi-supervised. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Fully supervised’ from ‘Supervised’. Maybe a better idea is to specify the data used to train ALL the parts of each model and have two big columns ‘Mixture training data’ and ‘Single source data’, which will make it much more apparent which is which.
2. Building upon my previous argument, I think that when one is using these large pre-trained networks on single-source data like CLAP, the underlying method becomes supervised in a sense, or to put it more specifically supervised with unpaired data. The authors should clearly explain these differences throughout the manuscript.
3. I think the authors should include stronger text-based sound separation baselines, like the model and ideally the training method that uses heterogeneous conditions to train the separation model in [A], which has already been shown to outperform LASS-Net (Liu et al. 2022), which is almost always the best-performing baseline in this paper.
I would be more than happy to increase my score if all the above weaknesses are addressed by the authors.
[A] Tzinis, E., Wichern, G., Smaragdis, P. and Le Roux, J., 2023, June. Optimal Condition Training for Target Source Separation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE. | 1. The authors should make clear the distinction between when the proposed method is trained using only weak supervision and when it is trained semi-supervised. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Fully supervised’ from ‘Supervised’. Maybe a better idea is to specify the data used to train ALL the parts of each model and have two big columns ‘Mixture training data’ and ‘Single source data’, which will make it much more apparent which is which. |
2RQokbn4B5 | ICLR_2025 | 1. The analysis of the correlation between dataset size and the Frobenius norm and the singular values is underwhelming. It is not clear if this trend holds across different model architectures, and if so, no theoretical evidence is advanced for this correlation.
2. The proposed method for dataset size recovery is way too simple to offer any insights.
3. The authors only study dataset size recovery for foundation models fine-tuned with a few samples. However, this problem is very general and should be explored in a broader framework. | 1. The analysis of the correlation between dataset size and the Frobenius norm and the singular values is underwhelming. It is not clear if this trend holds across different model architectures, and if so, no theoretical evidence is advanced for this correlation. |
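For concreteness, the statistics discussed in the review above (the Frobenius norm and singular values whose correlation with fine-tuning set size is being questioned) can be computed from a weight difference as in the following sketch; the matrices are random placeholders, not weights from the paper's models.

```python
# Hypothetical sketch of the statistics referenced above: the Frobenius norm and
# singular values of the weight update introduced by fine-tuning.
import numpy as np

rng = np.random.default_rng(0)
w_base = rng.normal(size=(256, 64))                         # placeholder pre-trained weights
w_finetuned = w_base + 0.01 * rng.normal(size=(256, 64))    # placeholder fine-tuned weights

delta = w_finetuned - w_base                                # weight update induced by fine-tuning
frob_norm = np.linalg.norm(delta, ord="fro")                # scalar summary of update magnitude
singular_values = np.linalg.svd(delta, compute_uv=False)    # spectrum of the update

print(f"||dW||_F = {frob_norm:.4f}")
print("top-5 singular values:", np.round(singular_values[:5], 4))
# A dataset-size analysis would repeat this across models fine-tuned on
# differently sized sets and correlate these statistics with the set size.
```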
ARR_2022_233_review | ARR_2022 | Additional details regarding the creation of the dataset would be helpful to resolve some doubts regarding its robustness. It is not stated whether the dataset will be publicly released.
1) Additional reference regarding explainable NLP datasets: "Detecting and explaining unfairness in consumer contracts through memory networks" (Ruggeri et al. 2021)
2) Some aspects of the creation of the dataset are unclear and the authors must address them. First of all, will the authors release the dataset or will it remain private?
Are the guidelines used to train the annotators publicly available?
Having a single person responsible for the check at the end of the first round may introduce biases. A better practice would be to have more than one checker for each problem, at least on a subset of the corpus, to measure the agreement between them and, in case of need, adjust the guidelines.
It is not clear how many problems are examined during the second round and the agreement between the authors is not reported.
It is not clear what is meant by "accuracy" during the annotation stages.
3) Additional metrics that may be used to evaluate text generation: METEOR (http://dx.doi.org/10.3115/v1/W14-3348), SIM(ile) (http://dx.doi.org/10.18653/v1/P19-1427).
4) Why have the authors decided to use the colon symbol rather than a more original and less common symbol? Since the colon has usually a different meaning in natural language, do they think it may have an impact?
5) How much are these problems language-dependent? Meaning, if these problems were perfectly translated into another language, would they remain valid? What about the R4 category? Additional comments about these aspects would be beneficial for future works, cross-lingual transfers, and multi-lingual settings.
6) In Table 3, it is not clear whether the line with +epsilon refers to the human performance when the gold explanation is available or to the RoBERTa performance when the gold explanation is available.
In any case, both of these settings would be interesting to know, so I suggest including them in the comparison if possible.
7) The explanation that must be generated for the query, the correct answer, and the incorrect answers could be slightly different. Indeed, if I am not making a mistake, the explanation for the incorrect answer must highlight the differences w.r.t. the query, while the explanation for the answer must highlight the similarity. It would be interesting to analyze these three categories separately and see whether there are differences in the models' performances. | 1) Additional reference regarding explainable NLP datasets: "Detecting and explaining unfairness in consumer contracts through memory networks" (Ruggeri et al. 2021) |
NIPS_2021_1222 | NIPS_2021 | Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustness. Many of these, especially Dapello et al., note the relevance with respect to stochasticity in the brain. I do not see how their additional analysis sheds new light on the mechanisms of robust perception or provides a better understanding of the role stochasticity plays in biological computation. To be clear - I think the paper is certainly worthy of publication and makes notable contributions. Just not all of the ones claimed in that sentence.
1.b) The authors note on lines 241-243 that “the two geometric properties show a similar dependence for the auditory (Figure 4A) and visual (Figure 4B) networks when varying the eps-sized perturbations used to construct the class manifolds.” I do not see this from the plots. I would agree that there is a shared general upward trend, but I do not agree that 4A and 4B show “similar dependence” between the variables measured. If nothing else, the authors should be more precise when describing the similarities.
Clarifications: 2.a) The authors say on lines 80-82 that the center correlation was not insightful for discriminating model defenses, but then use that metric in figure 4 A&B. I’m wondering why they found it useful here and not elsewhere? Or what they meant by the statement on lines 80-82.
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks?
2.c) The authors report mean capacity and width in figure 2. I think this is the mean across examples as well as across seeds. Is the STD also computed across examples and seeds? The figure caption says it is only computed across seeds. Is there a lot of variability across examples?
2.d) I am unsure why there would be a gap between the orange and blue/green lines at the minimum strength perturbation for the avgpool subplot in figure 2.c. At the minimum strength perturbation, by definition, the vertical axis should have a value of 1, right? And indeed in earlier layers at this same perturbation strength the capacities are equal. So why does the ResNet50 lose so much capacity for the same perturbation size from conv1 to avgpool? It would also be helpful if the authors commented on the switch in ordering for ATResNet and the stochastic networks between the middle and right subplots.
General curiosities (low priority): 3.a) What sort of variability is there in the results with the chosen random projection matrix? I think one could construct pathological projection matrices that skew the MFTMA capacity and width scores. These are probably unlikely with random projections, but it would still be helpful to see the resilience of the metric to the choice of random projection. I might have missed this in the appendix, though.
3.b) There appears to be a pretty big difference in the overall trends of the networks when computing the class manifolds vs exemplar manifolds. Specifically, I think the claims made on lines 191-192 are much better supported by Figure 1 than Figure 2. I would be interested to hear what the authors think in general (i.e. at a high/discussion level) about how we should interpret the class vs exemplar manifold experiments.
Nitpick, typos (lowest priority): 4.a) The authors note on line 208 that “Unlike VOneNets, the architecture maintains the conv-relu-maxpool before the first residual block, on the grounds that the cochleagram models the ear rather than the primary auditory cortex.” I do not understand this justification. Any network transforming input signals (auditory or visual) would have to model an entire sensory pathway, from raw input signal to classification. I understand that VOneNets ignore all of the visual processing that occurs before V1. I do not see how this justifies adding the extra layer to the auditory network.
4.b) It is not clear why the authors chose a line plot in figure 4c. Is the trend as one increases depth actually linear? From the plot it appears as though the capacity was only measured at the ‘waveform’ and ‘avgpool’ depths; were there intermediate points measured as well? It would be helpful if they clarified this, or used a scatter/bar plot if there were indeed only two points measured per network type.
4.c) I am curious why there was a switch to reporting SEM instead of STD for figures 5 & 6.
4.d) I found typos on lines 104, 169, and the Fig. 5 caption (“10 image and”). | 3.a) What sort of variability is there in the results with the chosen random projection matrix? I think one could construct pathological projection matrices that skew the MFTMA capacity and width scores. These are probably unlikely with random projections, but it would still be helpful to see the resilience of the metric to the choice of random projection. I might have missed this in the appendix, though. |
ICLR_2023_1553 | ICLR_2023 | The weaknesses of the paper in my opinion are as follows: 1) The method is only tested on two datasets. Have the authors tried more datasets to get a better idea of the performance? 2) The code for the paper is not released. | 1) The method is only tested on two datasets. Have the authors tried more datasets to get a better idea of the performance? |
Pb1DhkTVLZ | EMNLP_2023 | 1. The assessment criteria for the performance of large language models are limited to accuracy metrics. Such a limited view does not necessarily provide a comprehensive representation of the performance of large language models in real-world applications.
2. The method exhibits dependence on similar examples from the training dataset. This raises potential concerns regarding the distribution consistency between the training and test datasets adopted in the study. An in-depth visualization and analysis of the data distributions might be beneficial to address such concerns.
3. The evaluative framework appears somewhat limited in scope. With considerations restricted to merely three question-answering tasks and two language models, there are reservations about the method's broader applicability. Its potential to generalize to other reasoning or generation tasks or more advanced models, such as Vicuna or Alpaca, remains a subject of inquiry. | 3. The evaluative framework appears somewhat limited in scope. With considerations restricted to merely three question-answering tasks and two language models, there are reservations about the method's broader applicability. Its potential to generalize to other reasoning or generation tasks or more advanced models, such as Vicuna or Alpaca, remains a subject of inquiry. |
huo8MqVH6t | ICLR_2025 | 1. This paper (Section 4) examines the G-effects of each unlearning objective independently and in isolation from the other unlearning objectives. Results are also shown and discussed in separate figures and parts of the paper. Studying the G-effect of each unlearning objective in isolation raises concerns regarding the comparability of G-effect values across various unlearning objectives and approaches.
- Why is the empirical analysis of each unlearning approach shown and discussed in separate parts of the paper?
- Are G-effect values comparable across different unlearning approaches, and why?
- Can the proposed G-effect rank unlearning approaches?
2. Section 5 and its Table 1 provide a comprehensive comparison of various unlearning approaches using TOFU unlearning dataset for the removal of fictitious author profiles from LLMs finetuned on them. However, this comparison uses only existing metrics: forget quality, model utility, and PS-scores, and does not report the proposed G-effects.
- Why are G-effects missing in this section?
- How do G-effect values correlate with metrics presented in Table 1?
- Why are the order and ranking of unlearning objectives different across different removal and retention metrics?
3. G-effects need access to intermediate checkpoints during unlearning, especially given the pattern of values in, for example, Figure 3 (i.e., a peak and then values that stay flat close to zero). How does this limit the applicability of the proposed metric?
4. The G-effect definition uses model checkpoints at different time steps and does not directly take into account the risk and unlearning of the initial model.
- Why does this make sense?
- Is this why you need to do accumulative?
- what does the G-effect at each unlearning step mean?
- what does accumulation across unlearning steps mean?
- What does the peak mean in Figure 3? Should we stop after that step to have effective unlearning? What would be the benefit of continuing? Is a G-effect value of 0 a limitation of your method?
5. Some of the claims are not completely supported. For example, the claim "In terms of the unlearning G-effects, it indicates that the unlearning strength of NPO is weaker; however, for the retaining G-effects, it suggests that NPO better preserves the model integrity." As an initial step, I would link it to numbers in Table 1.
6. Membership inference attacks are a common approach in the literature for evaluating the removal capability of unlearning approaches [MUSE]. However, this paper does not report the success of membership inference attacks (a minimal MIA sketch is given after this review). How does the unlearning G-effect compare to the success of MIA? Are they aligned?
[MUSE] Weijia Shi, Jaechan Lee, Yangsibo Huang, Sadhika Malladi, Jieyu Zhao, Ari Holtzman, Daogao Liu, Luke Zettlemoyer, Noah A Smith, and Chiyuan Zhang. Muse: Machine unlearning six-way evaluation for language models, 2024. | 1. This paper (Section 4) examines the G-effects of each unlearning objective independently and in isolation from the other unlearning objectives. Results are also shown and discussed in separate figures and parts of the paper. Studying the G-effect of each unlearning objective in isolation raises concerns regarding the comparability of G-effect values across various unlearning objectives and approaches. |
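As context for the MIA-based evaluation the reviewer asks about, here is a minimal loss-thresholding membership inference sketch; the per-example losses are invented placeholders, and benchmarks such as MUSE use stronger attack variants.

```python
# Minimal loss-thresholding membership inference sketch: after unlearning, the
# model's losses on forget-set examples are compared to losses on unseen data.
# If the two are indistinguishable (AUC close to 0.5), removal looks successful.
import numpy as np
from sklearn.metrics import roc_auc_score

forget_losses = np.array([2.1, 2.4, 1.9, 2.6, 2.3])   # placeholder losses on the forget set
holdout_losses = np.array([2.2, 2.5, 2.0, 2.4, 2.7])  # placeholder losses on unseen data

labels = np.concatenate([np.ones_like(forget_losses), np.zeros_like(holdout_losses)])
scores = -np.concatenate([forget_losses, holdout_losses])  # lower loss => more likely a member

auc = roc_auc_score(labels, scores)
print(f"membership-inference AUC = {auc:.3f}  (0.5 means forgetting is indistinguishable from unseen)")
```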
NIPS_2020_1344 | NIPS_2020 | 1. There have been several results on the problems of batched top-k ranking and fully adaptive coarse ranking in recent years. From that point of view, the results in this paper are not particularly surprising. Even the idea that one can reduce the size of the active arm set by a factor of n^{1/R} has appeared in [37] for the problem of collaborative top-1 ranking. However, the main novelty in this paper seems to be the application of this idea to the problem of coarse ranking using a successive-accepts-and-rejects type algorithm. 2. Also, proving lower bounds for round complexity is the major chunk of work involved in proving results for batched ranking problems. However, this paper exploits an easy reduction from the problem of collaborative ranking, and hence, the lower bound results follow as an easy corollary of these collaborative ranking results. | 2. Also, proving lower bounds for round complexity is the major chunk of work involved in proving results for batched ranking problems. However, this paper exploits an easy reduction from the problem of collaborative ranking, and hence, the lower bound results follow as an easy corollary of these collaborative ranking results. |
NIPS_2019_991 | NIPS_2019 | [Clarity]
* What is the value of the c constant (MaxGapUCB algorithm) used in experiments? How was it determined? How does it impact the performance of MaxGapUCB?
* The experiment results could be discussed more. For example, should we conclude from the Streetview experiment that MaxGapTop2UCB is better than the other ones?
[Significance]
* The real-world applications of this new problem setting are not clear. The authors mention applicability to sorting/ranking. It seems like this would require a recursive application of the proposed algorithms to recover a partial ordering. However, the procedure to find the upper bounds on gaps (Alg. 4) has complexity K^2, where K is the number of arms. How would that translate into computational complexity when solving a ranking problem?
Minor details:
* T_a(t) is used in Section 3.1, but only defined in Section 4.
* The placement of Figure 2 is confusing.
---------------------------------------------------------------------------
I have read the rebuttal. Though the theoretical contribution seems rather low given existing work on pure exploration, the authors have convinced me of the potential impacts of this work. | * T_a(t) is used in Section 3.1, but only defined in Section 4. |
NIPS_2017_110 | NIPS_2017 | The main weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either) is that it makes no attempt to answer this question, either theoretically or by comparing the model with a classical longitudinal approach.
If we take the advantage of the manifold approach on faith, then this paper certainly presents a highly useful extension to the method presented in Schiratti et al. (2015). The added flexibility is very welcome, and allows for modelling a wider variety of trajectories. It does seem that only a single breakpoint was tried in the application to the renal cancer data; this seems appropriate given this dataset, but it would have been nice to have an application to a case where more than one breakpoint is advantageous (even if it is in the simulated data). Similarly, the authors point out that the model is general and can deal with trajectories in more than one dimension, but do not demonstrate this on an applied example.
(As a side note, it would be interesting to see this approach applied to drug response data, such as the Sanger Genomics of Drug Sensitivity in Cancer project).
Overall, the paper is well-written, although some parts clearly require a background in working on manifolds. The work presented extends Schiratti et al. (2015) in a useful way, making it applicable to a wider variety of datasets.
Minor comments:
- In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth).
- The paper has a number of typos, here are some that caught my eyes: p.1 l.36 "our model amounts to estimate an average trajectory", p.4 l.142 "asymptotic constrains", p.7 l. 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
- Section 2.2., it is stated that n=2, but n is the number of patients; I believe the authors meant m=2.
- p.4, l.154 describes a particular choice of shift and scaling, and the authors state that "this [choice] is the more appropriate.", but neglect to explain why.
- p.5, l.164, "must be null" - should this be "must be zero"?
- On parameter estimation, the authors are no doubt aware that in classical mixed models, a popular estimation technique is maximum likelihood via REML. While my intuition is that either the existence of breakpoints or the restriction to a manifold makes REML impossible, I was wondering if the authors could comment on this.
- In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise.
- For Figure 2, I think the x axis needs to show the scale of the trajectories, as well as a label for the unit.
- For Figure 3, labels for the y axes are missing.
- It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. | - It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. |
NIPS_2018_296 | NIPS_2018 | weakness of the proposed approach. Model-based algorithms (LevinTS is model-based) for planning do not have such requirements. On the other hand, if the goal is to refine a policy at the end of some optimization procedure, I understand the choice of using a policy-guided heuristic.
- Concerning LubyTS, it is hard to quantify the meaning of the bound in Thm. 6 (the easy part is to see when it fails, as mentioned before). There are other approaches based on generative models with guarantees, e.g., (Grill, Valko, Munos. Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning, NIPS 2016). How do you perform compared to them?
- Your setting is very specific: you need to know the model and/or have access to a generative model (for expanding or generating trajectories), the problem should be episodic, and the reward should be given just at the end of a task (i.e., reaching the target goal). Can you extend this approach to more general settings?
- In this paper, you perform planning offline since you use a model-based approach (e.g., a generative model). Is it possible to remove the assumption of the knowledge of the model? In that case, you would have to interact with the environment trying to minimize the number of times a "bad" state is visited. Since the expansion of a node can be seen as an exploratory step, this approach seems to be related to the exploration-exploitation dilemma. Bounding the number of expansions does not correspond to having small regret in general settings. Can you integrate a concept related to long-term performance in the search strategy? This is what is often done in MCTS.
All the proposed approaches have weaknesses that are partially acknowledged by the authors. A few points in the paper can be discussed in more detail and clarified, but I believe it is a nice contribution overall.
-------- after feedback
I thank you for the feedback. In particular, I appreciated the way you addressed the zero probability issue. I think that this is a relevant aspect of the proposed approaches and should be addressed in the paper. Another topic that should be added to the paper is the comparison with TrailBlazer and other MCTS algorithms. Despite that, I personally believe that the main limitation is the assumption of deterministic transitions. While the approach can be extended to stochastic (known) models, I would like to know what the theoretical results would become. | 6 (the easy part is to see when it fails, as mentioned before). There are other approaches based on generative models with guarantees, e.g., (Grill, Valko, Munos. Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning, NIPS 2016). How do you perform compared to them? |
ACL_2017_318_review | ACL_2017 | 1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted.
2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below).
- General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper.
2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings?
3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work.
4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs.
5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task.
6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer".
7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2? | 5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task. |
NIPS_2021_1251 | NIPS_2021 | - Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objective function of interest is the stochastic noisy function. It would be good to make this distinction clearer upfront. - The RF experiment is not super compelling. It is not nearly as interesting as the FEL problem, and the risk aversion does not make a significant difference in average performance. Overall the empirical evaluation is fairly limited. - It is unclear why the mean-variance model is the best metric to use for evaluating performance - Why not also evaluate performance in terms of the VaR or CVaR? - The MV objective is nice for the proposed UCB-style algorithm and theoretical work, but for evaluation VaR and CVaR also are important considerations
Writing:
- Very high quality and easy-to-follow writing
- Grammar:
- L164: "that that"
- Figure 5 caption: "Simple regret fat the reprted"
Questions: - Figure 2: “RAHBO not only leads to strong results in terms of MV, but also in terms of mean objective”? Why is it better than GP-UCB on this metric? Is this an artifact of the specific toy problem?
Limitations are discussed and potential future directions are interesting. “We are not aware of any societal impacts of our work” – this (as with an optimization algorithm) could be used for nefarious endeavors and could be discussed. | - Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objective function of interest is the stochastic noisy function. It would be good to make this distinction clearer upfront. |
wcqBfk4jv6 | EMNLP_2023 | 1. Although the choice of models seems fine at first, I am not sure how much of the citation information is actually being utilized. The maximum input size can only be 1024, and given the size of the articles and the abstracts, I am not sure how much information is being used in either. Can you comment on how much information is lost at the token level in what is being fed to the model?
2. I like the idea of aggregation using various cited articles. The only problem, and I might possibly be confused here, is how you are ensuring the quality of the chosen articles. It could be that the claims made in the article contradict the claims made in the cited articles, or are not at all related to the claims being discussed in the articles. Do you have any analysis of the correlation between the cited articles and the main articles, and whether it affects the quality of generation?
3. The authors have reported significance testing, but I think the choice of test might be incorrect. Since the comparison is to be done between two samples generated from the same input, why was a paired test setting, such as the Wilcoxon signed-rank test, not used? | 3. The authors have reported significance testing, but I think the choice of test might be incorrect. Since the comparison is to be done between two samples generated from the same input, why was a paired test setting, such as the Wilcoxon signed-rank test, not used? |
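A minimal sketch of the paired setting the reviewer suggests, assuming per-input scores from the two systems being compared are available; the score arrays below are invented placeholders.

```python
# Paired comparison with the Wilcoxon signed-rank test: the two systems are
# scored on the same inputs, so each input contributes one paired difference.
from scipy.stats import wilcoxon

scores_system_a = [0.61, 0.55, 0.72, 0.48, 0.66, 0.59, 0.70, 0.52]  # per-input metric, system A
scores_system_b = [0.58, 0.51, 0.69, 0.49, 0.60, 0.55, 0.68, 0.50]  # same inputs, system B

stat, p_value = wilcoxon(scores_system_a, scores_system_b)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_value:.4f}")
```

An unpaired test (e.g., Mann-Whitney U) would ignore the fact that both score lists come from the same inputs and typically has less power in this setting.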
2xRTdzmQ6C | ICLR_2025 | - (Major) Despite the elegant framework proposed, some implementation details may lack clarity and require further justification; please see the “questions” section below;
- (Major) The technical method for minimizing mutual information (MI) in the proposed IB-based CBM method is actually not so novel and largely relies on existing methods such as [1];
- (Major) The comparison between the two IB implementations appears somewhat simplistic and may provide only limited insights. What makes the estimator-based implementation more useful than the other?
- (Minor) While the presentation is generally good, some content could be more concise and structured. For instance, the derivation in Section 3.1 could be streamlined to present only the essential final estimator used in practice, relegating the full derivation to the appendix;
- (Minor) The main experimental results are based on only three runs. While I appreciate the author’s transparency in reporting this, more runs could be considered for better robustness of the results;
- (Minor) When assessing intervenability, a comparison between the proposed CIBM method and the original CBM is lacking. How CIBM exactly helps in improving intervenability does not seem apparent.
- (Minor) Reproducibility: despite the very interesting and elegant proposal, no code repo is shared. Together with the missing technical details mentioned above, this weakens the reproducibility of the work. | - (Minor) The main experimental results are based on only three runs. While I appreciate the author’s transparency in reporting this, more runs could be considered for better robustness of the results; |
NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B], which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading up to the related work section). Bilinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature, since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also, it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable.
6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren't we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, it is unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU, but if sparsity is an issue, why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. "Neural Module Networks." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. "Simple Baseline for Visual Question Answering." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. | 6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map? |
ACL_2017_318_review | ACL_2017 | 1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted.
2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below).
- General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper.
2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings?
3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work.
4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs.
5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task.
6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer".
7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2? | 3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work. |
WDO5hfLZvN | ICLR_2025 | 1. The paper combines EvoPrompt and Mid Vision Feedback (MVF), but does not explain the principles and detailed processes of the two in the introduction or related work section. In addition, the method section is a bit casual, without strict mathematical definitions and rigorous process expressions, making the method not specific and clear enough.
2. The paper does not have sufficient experimental demonstration of the contribution points. There is only an experimental comparison between ELF (the author's method) and the baseline without Mid Vision Feedback (MVF), but no comparison with the image classification result of Mid Vision Feedback (MVF). This does not prove that the schema searched by ELF (the author's method) is better than the schema in Mid Vision Feedback (MVF).
3. The description of the experimental section is not rigorous enough (potentially, it may lead to an imprecise experimental setting). For example, in the comparison of Stage1 and ELF in Table 1, the total training generations of the two do not seem to be consistent. Whether Stage1 has reached sufficient convergence may need to be explained. In lines #346-347, the author mentions using a 32x32 input size neural network for CIFAR100 experiments, but in lines #383-384, the experiment continues on ImageNet, switching to a larger ViT-B/16 and ResNet50, and the resolution setting is not explained at this time.
4. The analysis in the experimental part is not sufficient. The authors can show the difference between the schema optimized by EvoPrompt and the original schema (and MVF), and explain clearly and more deeply the growth points brought by using EvoPrompt to optimize the schema. | 2. The paper does not have sufficient experimental demonstration of the contribution points. There is only an experimental comparison between ELF (the author's method) and the baseline without Mid Vision Feedback (MVF), but no comparison with the image classification result of Mid Vision Feedback (MVF). This does not prove that the schema searched by ELF (the author's method) is better than the schema in Mid Vision Feedback (MVF). |
ICLR_2021_329 | ICLR_2021 | Weakness
- I do not see how MFN 'largely outperforms' existing baseline methods. It is difficult to identify the quality difference between output from the proposed method and SIREN -- shape representation seems to even prefer SIREN's results (what is the ground truth for Figure 5a?). The paper is based on the idea of replacing compositional models with recursive, multiplicative ones, though neither the theory nor the results are convincing to prove this linear approximation is better. I have a hard time getting the intuition of the advantages of the proposed method.
- This paper, like other baselines (e.g., SIREN), does not comment much on the generalization power of these encoding schemes. Apart from image completion, are there other experiments showing the non-overfitting results, for example, on shape representation or 3D tasks?
- The proposed model has been shown to be more efficient in training, and I assume it is also more compact in size, but there is no analysis or comment on that.
Suggestions
- It is hard to spot the differences from the baselines in the result figures. It is recommended to use a zoom or plot the difference image to show the differences.
- typo in Corollary 2 -- do you mean linear combination of Gabor bases?
- It is recommended to add a reference next to baseline names in tables (e.g., place a citation next to 'FF Positional' if that refers to a method from a paper).
- In Corollary 1, $\Omega$ is not explicitly defined (though it's not hard to infer what it means). | - It is recommended to add a reference next to baseline names in tables (e.g., place a citation next to 'FF Positional' if that refers to a method from a paper). - In Corollary 1, $\Omega$ is not explicitly defined (though it's not hard to infer what it means). |
ARR_2022_65_review | ARR_2022 | 1. The paper covers little qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be complete and contain fewer omissions, in contrast to product comments which are casually written and may have looser syntactic structures. However, novel text is also very different from news text in that it contains unusual predicates and even imaginary entities as arguments. It seems that the authors are arguing that syntactic factors are more significant in SRL performance, and the experimental results are also consistent with this. Then it would be helpful to show a few examples from each domain to illustrate how they differ structurally.
2. The proposed dataset uses a new annotation scheme that is different from that of previous datasets, which introduces difficulties of comparison with previous results. While I think the frame-free scheme is justified in this paper, the compatibility with other benchmarks is an important issue that needs to be discussed. It may be possible to, for example, convert frame-based annotations to frame-free ones. I believe this is doable because FrameNet also has the core/non-core sets of argument for each frame. It would also be better if the authors can elaborate more on the relationship between this new scheme and previous ones. Besides eliminating the frame annotation, what are the major changes to the semantic role labels?
- In Sec. 3, it is a bit confusing why there is a division of source domain and target domain. Thus, it might be useful to mention explicitly that the dataset is designed for domain transfer experiments.
- Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise.
- More information about the annotators would be needed. Are they all native Chinese speakers? Do they have linguistics background?
- Were pred-wise/arg-wise consistencies used in the construction of existing datasets? I think they are not newly invented. It is useful to know where they come from.
- In the SRL formulation (Sec. 5), I am not quite sure what is “the concerned word”. Is it the predicate? Does this formulation cover the task of identifying the predicate(s), or are the predicates given by syntactic parsing results?
- From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy).
- How was the train/dev/test split determined? This should be noted (even if it is simply done randomly). | - Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise. |
NIPS_2021_1222 | NIPS_2021 | Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustness. Many of these, especially Dapello et al., note the relevance with respect to stochasticity in the brain. I do not see how their additional analysis sheds new light on the mechanisms of robust perception or provides a better understanding of the role stochasticity plays in biological computation. To be clear - I think the paper is certainly worthy of publication and makes notable contributions. Just not all of the ones claimed in that sentence.
1.b) The authors note on lines 241-243 that “the two geometric properties show a similar dependence for the auditory (Figure 4A) and visual (Figure 4B) networks when varying the eps-sized perturbations used to construct the class manifolds.” I do not see this from the plots. I would agree that there is a shared general upward trend, but I do not agree that 4A and 4B show “similar dependence” between the variables measured. If nothing else, the authors should be more precise when describing the similarities.
Clarifications: 2.a) The authors say on lines 80-82 that the center correlation was not insightful for discriminating model defenses, but then use that metric in figure 4 A&B. I’m wondering why they found it useful here and not elsewhere? Or what they meant by the statement on lines 80-82.
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks?
2.c) The authors report mean capacity and width in figure 2. I think this is the mean across examples as well as across seeds. Is the STD also computed across examples and seeds? The figure caption says it is only computed across seeds. Is there a lot of variability across examples?
2.d) I am unsure why there would be a gap between the orange and blue/green lines at the minimum strength perturbation for the avgpool subplot in figure 2.c. At the minimum strength perturbation, by definition, the vertical axis should have a value of 1, right? And indeed in earlier layers at this same perturbation strength the capacities are equal. So why does the ResNet50 lose so much capacity for the same perturbation size from conv1 to avgpool? It would also be helpful if the authors commented on the switch in ordering for ATResNet and the stochastic networks between the middle and right subplots.
General curiosities (low priority): 3.a) What sort of variability is there in the results with the chosen random projection matrix? I think one could construct pathological projection matrices that skew the MFTMA capacity and width scores. These are probably unlikely with random projections, but it would still be helpful to see the resilience of the metric to the choice of random projection. I might have missed this in the appendix, though.
3.b) There appears to be a pretty big difference in the overall trends of the networks when computing the class manifolds vs exemplar manifolds. Specifically, I think the claims made on lines 191-192 are much better supported by Figure 1 than Figure 2. I would be interested to hear what the authors think in general (i.e. at a high/discussion level) about how we should interpret the class vs exemplar manifold experiments.
Nitpick, typos (lowest priority): 4.a) The authors note on line 208 that “Unlike VOneNets, the architecture maintains the conv-relu-maxpool before the first residual block, on the grounds that the cochleagram models the ear rather than the primary auditory cortex.” I do not understand this justification. Any network transforming input signals (auditory or visual) would have to model an entire sensory pathway, from raw input signal to classification. I understand that VOneNets ignore all of the visual processing that occurs before V1. I do not see how this justifies adding the extra layer to the auditory network.
4.b) It is not clear why the authors chose a line plot in figure 4c. Is the trend as one increases depth actually linear? From the plot it appears as though the capacity was only measured at the ‘waveform’ and ‘avgpool’ depths; were there intermediate points measured as well? It would be helpful if they clarified this, or used a scatter/bar plot if there were indeed only two points measured per network type.
4.c) I am curious why there was a switch to reporting SEM instead of STD for figures 5 & 6.
4.c) I found typos on lines 104, 169, and the fig 5 caption (“10 image and”). | 2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks? |
NIPS_2017_114 | NIPS_2017 | - More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios.
- The CIFAR-10 results are a little disappointing with respect to temporal ensembling (although the results are comparable and the proposed approach has other advantages).
- An evaluation on the more challenging STL-10 dataset would have been welcome. Comments
- The SVHN evaluation suggests that the model is better than pi and temporal ensembling, especially in the low-label scenario. With this in mind, it would have been nice to see if you can confirm this on CIFAR-10 too (i.e., show results on CIFAR-10 with fewer labels).
- I would have liked to have seen what the CIFAR-10 performance looks like with all labels included.
- It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down.
- I'd be interested to see if the exponential moving average of the weights provides any benefit on its own, without the additional consistency cost. | - It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down. |
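For reference, the exponential moving average in question is the standard Mean Teacher weight update; a minimal sketch (generic PyTorch, not the paper's code) of what an EMA-only ablation would keep:

```python
import torch

def update_teacher(teacher, student, alpha=0.99):
    # Exponential moving average of the student weights; with the consistency
    # cost removed, this copy is the only remaining coupling between the models.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)
```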
ICLR_2023_591 | ICLR_2023 | There are multiple axes along which the current paper falls short of applying to realistic settings: 1) the assumption that one is given an oracle adversary, i.e. we have access to the worst-case perturbation (as opposed to a noisy gradient oracle, i.e. just doing PGD); 2) the results in section 4 apply only to shallow fully-connected ReLU networks; 3) the results hold only in a regime very close to initialization and it is assumed one has an early stopping criterion/oracle.
Weaknesses 2) and 3) are not unique to this work, and thus I heavily discount their severity when considering my overall recommendation. | 2) the results in section 4 apply only to shallow fully-connected ReLU networks; |
ARR_2022_294_review | ARR_2022 | The implications discussed in the paper apply *if* one were to try to project continuous prompts to discrete space using GPT-2's (or any other) embedding matrix, but, as far as I am aware, no one has attempted to interpret continuous prompts in this way. The paper could be strengthened by more explicitly discussing *why* we should be concerned with this particular method of prompt interpretation: 1. Do the authors think we should be especially concerned about nearest-neighbor, embedding matrix projections because that's how language model outputs and word2vec are calculated? This seems to be the case, but could be made more explicit. It's not clear why the use of an embedding matrix at the output layer of language models implies this would be a popular way of discretizing continuous prompts.
2. Is it because the particular discrete projection studied is the simplest that the authors believe it might be used in the future?
3. Do the authors think their hypothesis generalizes to other methods of discretely interpreting prompts? If so, that could also be made more explicit.
1. Space permitting, it may be beneficial to move the finding of Appendix B to the main paper as I found this interesting.
2. Line 246 is difficult to parse. Possibly split this into two sentences. | 2. Is it because the particular discrete projection studied is the simplest that the authors believe it might be used in the future? |
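For reference, the discretization questioned above, projecting each continuous prompt vector onto its nearest vocabulary embedding, can be sketched as follows (generic NumPy, not the paper's code):

```python
import numpy as np

def nearest_token_projection(prompt_vecs, embedding_matrix, vocab):
    # prompt_vecs: (num_prompt_tokens, d); embedding_matrix: (vocab_size, d).
    # Each prompt vector is mapped to the vocabulary item with the highest
    # cosine similarity, i.e. a hard nearest-neighbor discretization.
    P = prompt_vecs / np.linalg.norm(prompt_vecs, axis=1, keepdims=True)
    E = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    nearest = (P @ E.T).argmax(axis=1)
    return [vocab[i] for i in nearest]
```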
ICLR_2021_2892 | ICLR_2021 | - Proposition 2 seems to lack an argument why Eq 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that’s the case, why eq 16? As a concrete example, let \hat{h}^\theta_lm = 1 if l=m=1 and 0 otherwise, so constant in \theta. The only constant associated Legendre polynomial is P^0_0, so this h is not expressible in eq 16. Instead, it seems like there are additional assumptions necessary on the family of spherical functions h to let the decomposition eq 16, and thus proposition 2, work. Hence, it looks like that proposition 2 doesn’t actually characterize all azimuthal correlations. - In its discussion of SO(3) equivariant spherical convolutions, the authors do not mention the lift to SO(3) signals, which allow for more expressive filters than the ones shown in figure 1. - Can the authors clarify figure 2b? I do not understand what is shown. - The architecture used for the experiments is not clearly explained in this paper. Instead the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained. - The authors appear to not use a fast spherical Fourier transform. Why not? This could greatly help performance. Could the authors comment on the runtime cost of the experiments? - The sampling of the Fourier features to a spherical signal and then applying a point-wise non-linearity is not exactly equivariant (as noted by Kondor et al 2018). Still, the authors note at the end of Sec 6 “This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.”. Perhaps the authors can comment on that? - The experiments are limited to MNIST and a single real-world dataset. - Out of the many spherical CNNs currently in existence, the authors compare only to a single one. For example, comparisons to SO(3) equivariant methods would be interesting. Furthermore, it would be interesting to compare to SO(3) equivariant methods in which SO(3) equivariance is broken to SO(2) equivariance by adding to the spherical signal a channel that indicates the theta coordinate. - The experimental results are presented in an unclear way. A table would be much clearer. - An obvious approach to the problem of SO(2) equivariance of spherical signals, is to project the sphere to a cylinder and apply planar 2D convolutions that are periodic in one direction and not in the other. This suffers from distortion of the kernel around the poles, but perhaps this wouldn’t be too harmful. An experimental comparison to this method would benefit the paper.
Recommendation: I recommend rejection of this paper. I am not convinced of the correctness of proposition 2 and proposition 1 is similar to equivariance arguments made in prior work. The experiments are limited in their presentation, the number of datasets and the comparisons to prior work.
Suggestions for improvement: - Clarify the issue around eq 16 and proposition 2 - Improve presentation of experimental results and add experimental details - Evaluate the model on more data sets - Compare the model to other spherical convolutions
Minor points / suggestions: - When talking about the Fourier modes as numbers, perhaps clarify if these are reals or complex. - In Def 1 in the equation it is confusing to have theta twice on the left-hand side. It would be clearer if h did not have a subscript on the left-hand side. | - When talking about the Fourier modes as numbers, perhaps clarify if these are reals or complex. |
FXObwPWgUc | EMNLP_2023 | * The paper could have provided a clearer use case for why the NMT model is still necessary. When powerful and large general language models are used for post-editing, why should one use a specialized neural machine translation model to get the translations in the first place? Including GPT-4 translations plus GPT-4 post-editing scores would have been insightful.
* The paper lacks an ablation study explaining why they chose the prompt in this specific way, e.g., few-shot examples for CoT might improve performance.
* The reliance on an external model via API, where it's unclear how the underlying model changes, makes it hard to reproduce the results. There is also a risk of data pollution since translations might already be in the training data of GPT-4. The authors only state that the WMT-22 test data is after the cutoff date of the GPT-4 training data, but they do not say anything about the WMT-20 and WMT-21 datasets that they also use.
* The post-edited translation experiments are only partially done: En-Zh and Zh-En for GPT-4, but not En-De and De-En for GPT-3.5.
* The title is misleading since the authors also evaluate GPT-3.5. | * The paper lacks an ablation study explaining why they chose the prompt in this specific way, e.g., few-shot examples for CoT might improve performance. |
NIPS_2019_900 | NIPS_2019 | - No consideration for approximate number schemes in related work. - No support for float numbers. - At many points in the paper, it is not clear if the unencrypted model is a model with PAA or a model with ReLU activation. - What is TCN? The abbreviation is explained way too late into the paper. - Tables in chapter 5 are overloaded and the abbreviations used are not explained properly. - Figure 3a does not highlight that the shift operation is cheap. - Although the authors claim they implement ImageNet for the first time, it is very slow and accuracy is very low; "SHE needs 1 day and 2.5 days to test an ImageNet picture by AlexNet and ResNet-18, respectively" and accuracy is around 70%. | - Although the authors claim they implement ImageNet for the first time, it is very slow and accuracy is very low; "SHE needs 1 day and 2.5 days to test an ImageNet picture by AlexNet and ResNet-18, respectively" and accuracy is around 70% |
ARR_2022_82_review | ARR_2022 | - In the “Updating Facts” section, although the results seem to show that modifying the neurons using the word embeddings is effective, the paper lacks a discussion on this. It is not intuitive to me that there is a connection between a neuron at a middle layer and the word embeddings (which are used at the input layer). - Using integrated gradients to measure the attribution has been studied in existing papers. The paper also proposes post-processing steps to filter out the “false-positive” neurons, however, the paper doesn’t show how important these post-processing steps are. I think an ablation study may be needed.
- The paper lacks details of experimental settings. For example, how are those hyperparameters ($t$, $p$, $\lambda_1$, etc.) tuned? In table 5, why do “other relations” have a very different scale of perplexity compared to “erased relation” before erasing? Are “other relations” randomly selected?
- The baseline method (i.e., using activation values as the attribution score) is widely used in previous studies. Although the paper empirically shows that the baseline is not as effective as the proposed method, I expect more discussion on why using activation values is not a good idea.
- One limitation of this study is that the paper only focuses on single-word cloze queries (as discussed in the paper).
- Figure 3: The illustration is not clear to me. Why are there two “40%” in the figure?
- I was confused about whether the paper targets single-token cloze queries or multi-token ones. I did not see a clear clarification until reading the conclusion. | - Using integrated gradients to measure the attribution has been studied in existing papers. The paper also proposes post-processing steps to filter out the “false-positive” neurons, however, the paper doesn’t show how important these post-processing steps are. I think an ablation study may be needed. |
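As a reference point for the attribution method mentioned above, a minimal integrated-gradients sketch (the standard formulation in generic PyTorch, not the paper's implementation):

```python
import torch

def integrated_gradients(f, x, baseline, steps=50):
    # IG(x) is approximately (x - baseline) * mean gradient of f along the path.
    # f should return a scalar (e.g., the logit of the target prediction).
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        grad, = torch.autograd.grad(f(point), point)
        total += grad
    return (x - baseline) * total / steps
```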
ICLR_2021_1014 | ICLR_2021 | - I am not an expert in the area of pruning. I think this motivation is quite good, but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on the target device, the memory consumption during inference time, and the actual network size. - The performance is only compared with a few methods. And the proposed method is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation.
I am willing to change my rating according to the feedback from authors and the comments from other reviewers. | - I am not an expert in the area of pruning. I think this motivation is quite good but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on target device, the memory consumption during the inference time and the actual network size. |
ICLR_2021_147 | ICLR_2021 | The empirical validation is weak. Therefore, more new models need to be compared. For more details, please refer to “Reasons for reject”.
Reasons for accept: 1. The structure of this paper is clear and easy to read. Specifically, the motivation of this paper is clear and the structure is well organized; the related work is elaborated in detail; the experimental setup is complete. 2. Based on the use of replay to solve catastrophic forgetting, the current popular graph structure is introduced to capture the similarities between samples. Combined with the proposed Graph Regularization, this paper provides a new perspective for solving catastrophic forgetting. 3. The experimental results given in the paper can basically show that the proposed method is effective. The ablation study also verified the effectiveness of each component.
Reasons for reject: 1. The lack of comparison of experimental effects after replacing Graph Regularization with other regularization methods mentioned in this paper, or other distance measurement methods, e.g., L2.
This paper compares relatively few baselines, especially recent studies. I hope to see the comparison results of some papers in the list below. The latest papers on the three types of methods (regularization, expansion, and rehearsal) for solving catastrophic forgetting are included. Therefore, if it can be compared with some of these models, it will be beneficial to the evaluation of GCL.
[1] Ostapenko O , Puscas M , Klein T , et al. Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning. ICML 2019 [2] Y Wu, Y Chen, et al. Large Scale Incremental Learning. CVPR 2019 [3] Liu Y , Liu A A , Su Y , et al. Mnemonics training: Multi-class incremental learning without forgetting. CVPR 2020 [4] Zhang J , Zhang J , Ghosh S , et al. Class-incremental learning via deep model consolidation. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) [5] Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Continuous learning of context-dependent processing in neural networks. Nature Machine Intelligence, 2019. [6] Wenpeng Hu, Zhou Lin, et al. Overcoming catastrophic forgetting for continual learning via model adaptation. ICLR 2019 [7] Rao D , Visin F , Rusu A A , et al. Continual Unsupervised Representation Learning. NeurIPS 2019 | 1. The structure of this paper is clear and easy to read. Specifically, the motivation of this paper is clear and the structure is well organized; the related work is elaborated in detail; the experimental setup is complete. |
NIPS_2017_81 | NIPS_2017 | in my view.
Potential weaknesses (if there's space, please comment on the following in the rebuttal):
W1. Loss (5) seems to have a degeneracy when v==gu, i.e., p(v|u) term is ignored. While this doesn't seem to affect the results much, I'm wondering if the formulation could be slightly improved by adding a constant epsilon so that it's (||v-gu||^\gamma + epsilon) * p(v|u).
W2. L170-172 mentions that the learning formulation can be modified so that knowledge of g is not required. I find having g in the formulation somewhat unsatisfying, so I'm wondering if you can say a bit more about this possibility.
W3. The shown examples are approximately planar scenes. It might be nice to include another toy example of a 3D object, e.g., textured sphere or other closed 3D object.
Minor comments/suggested edits:
+ There are a number of small typos throughout; please pass through a spell+grammar checker.
+ Equation after L123 - It isn't clear what alpha is on first read, and is described later in the paper. Maybe introduce alpha here.
+ L259 mentions that the approach simply observes a large dataset of unlabelled images. However, flow is also required. Perhaps rephrase. | + L259 mentions that the approach simply observes a large dataset of unlabelled images. However, flow is also required. Perhaps rephrase. |
ACL_2017_71_review | ACL_2017 | - The explanation of methods in some paragraphs is too detailed, there is no mention of other work, and it is repeated in the corresponding method sections; the authors committed to addressing this issue in the final version.
- README file for the dataset [Authors committed to add a README file] - General Discussion: - Section 2.2 mentions examples of DBpedia properties that were used as features. Do the authors mean that all the properties have been used or only a subset? If the latter, please list them. In the authors' response, the authors explain this point in more detail, and I strongly believe that it is crucial to list all the features in detail in the final version for clarity and replicability of the paper.
- In section 2.3 the authors use Lample et al.'s Bi-LSTM-CRF model; it might be beneficial to add that the input is word embeddings (similarly to Lample et al.). - Figure 3: are the KNs in the source language or in English (since the mentions have been translated to English)? In the authors' response, the authors stated that they will correct the figure.
- Based on section 2.4 it seems that topical relatedness implies that some features are domain-dependent. It would be helpful to see how much domain-dependent features affect the performance. In the final version, the authors will add the performance results for the above-mentioned features, as mentioned in their response.
- In related work, the authors make a strong connection to Sil and Florian's work, where they emphasize the supervised vs. unsupervised difference. The proposed approach is still supervised in the sense of training; however, the generation of training data doesn't involve human interference. | - Based on section 2.4 it seems that topical relatedness implies that some features are domain-dependent. It would be helpful to see how much domain-dependent features affect the performance. In the final version, the authors will add the performance results for the above-mentioned features, as mentioned in their response. |
qJ0Cfj4Ex9 | ICLR_2024 | - Goal Misspecification: Failures on the ALFRED benchmark often occurred due to goal misspecification, where the LLM did not accurately recover the formal goal predicate, especially when faced with ambiguities in human language.
- Policy Inaccuracy: The learned policies sometimes failed to account for low-level, often geometric details of the environment.
- Operator Overspecification: Some learned operators were too specific, e.g., the learned SliceObject operator specified a particular type of knife, leading to planning failures if that knife type was unavailable.
- Limitations in Hierarchical Planning: The paper acknowledges that it doesn't address some core problems in general hierarchical planning. For instance, it assumes access to symbolic predicates representing the environment state and doesn't tackle finer-grained motor planning. The paper also only considers one representative pre-trained LLM and not others like GPT-4. | - Goal Misspecification: Failures on the ALFRED benchmark often occurred due to goal misspecification, where the LLM did not accurately recover the formal goal predicate, especially when faced with ambiguities in human language. |
HjBDSop3ME | EMNLP_2023 | 1) Reducing the vocabulary size is one way of reducing the size of the embedding; however, there are other alternatives such as dimensionality reduction (Raunak et al. 2019), quantization (see some works here (Gholami et al. 2021)), bloom embedding (Serra & Karatzoglou 2017), distillation networks (Hinton et al. 2015), etc. (a toy quantization sketch is given after the references below).
This work should be compared against some of these related baselines to show its true potential as an innovative approach for embedding compactness.
*Raunak, V., Gupta, V., & Metze, F. (2019, August). Effective dimensionality reduction for word embeddings. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019) (pp. 235-243).
*Serrà, J., & Karatzoglou, A. (2017, August). Getting deep recommenders fit: Bloom embeddings for sparse binary input/output networks. In Proceedings of the Eleventh ACM Conference on Recommender Systems (pp. 279-287).
*Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.
*Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
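As noted above, a toy sketch of one member of the quantization family surveyed by Gholami et al. (2021): per-row symmetric int8 quantization of an embedding matrix (an illustration only, not a method from any of the cited papers):

```python
import numpy as np

def quantize_embeddings_int8(E):
    # Symmetric per-row quantization: store int8 codes plus one float scale per row.
    scale = np.maximum(np.abs(E).max(axis=1, keepdims=True) / 127.0, 1e-12)
    q = np.clip(np.round(E / scale), -127, 127).astype(np.int8)
    return q, scale   # approximate reconstruction: q.astype(np.float32) * scale
```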
2) The perplexity experiments are carried out on obsolete language models (n-gram HMM, RNN) that are rarely used nowadays. To better align the paper with current NLP trends, I believe the authors should showcase their approach using transformer-based (masked) language models.
3) The reliance of this approach on a secondary step (vowel-retrieval) to make the text human-readable again could limit its applicability. It would be interesting to see how this representation would perform on generation tasks such as translation or summarization. Since the vowel-retrieval process is not loss-less (word-error-rate 9 for consonant-only and ~3 for masked-vowel representations), it may cause a drastic drop in the performance of the models on such tasks.
4) In addition, this extra vowel-retrieval step would add to the required computational steps and may actually increase the computational requirements (as opposed to the paper’s claim on saving on computational resources). | 2) The perplexity experiments are carried out on obsolete language models (n-gram HMM, RNN) that are rarely used nowadays. To better align the paper with current NLP trends, I believe the authors should showcase their approach using transformer-based (masked) language models. |
vg55TCMjbC | EMNLP_2023 | - Although the situations are checked by human annotators, the seed situations are generated by ChatGPT. The coverage of situation types might be limited.
- The types of situations/social norms (e.g., physical/psychological safety) are not clear in the main paper.
- It’s a bit hard to interpret precision on NormLens-MA, where the different labels could be considered as gold. | - The types of situations/social norms (e.g., physical/psychological safety) are not clear in the main paper. |
7GxY4WVBzc | EMNLP_2023 | * The contribution of the vector database to improving QA performance is unclear. More analysis and ablation studies are needed to determine its impact and value for the climate change QA task.
* Details around the filtering process used to create the Arabic climate change QA dataset are lacking. More information on the translation and filtering methodology is needed to assess the dataset quality.
* The work is focused on a narrow task (climate change QA) in a specific language (Arabic), so its broader impact may be limited.
* The limitations section lacks specific references to errors and issues found through error analysis of the current model. Performing an analysis of the model's errors and limitations would make this section more insightful. | * The work is focused on a narrow task (climate change QA) in a specific language (Arabic), so its broader impact may be limited. |
ICLR_2023_2396 | ICLR_2023 | 1. Lack of the explanation about the importance and the necessity to design deep GNN models . In this paper, the author tries to address the issue of over-smoothing and build deeper GNN models. However, there is no explanation about why should we build a deep GNN model. For CNN, it could be built for thousands of layers with significant improvement of the performance. While for GNN, the performance decreases with the increase of the depth (shown in Figure 1). Since the deeper GNN model does not show the significant improvement and consumes more computational resource, the reviewer wonders the explanation of the importance and the necessity to design deep models. 2. The experimental results are not significantly improved compared with GRAND. For example, GRAND++-l on Cora with T=128 in Table 1, on Computers with T=16,32 in Table 2. Since the author claims that GRAND suffers from the over-smoothing issue while DeepGRAND significantly mitigates such issue, how to explain the differences between the theoretical and practical results, why GRAND performs better when T is larger? Besides, in Table 3, DeepGRAND could not achieve the best performance with 1/2 labeled on Citeseer, Pubmed, Computers and CoauthorCS dataset, which could not support the argument that DeepGRAND is more resilient under limited labeled training data. 3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3. 4. Minor issues. The x label of Figure 2, Depth (T) rather than Time (T). | 3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3. |
ICLR_2023_4834 | ICLR_2023 | There are some concerns regarding the method description and designs: 1) As described in the ASR update strategy, the rehearsal samples for previous tasks are based on the samples with high and low AS scores (the samples with middle AS scores are discarded), while for the current task the rehearsal samples are uniformly sampled from the corresponding data stream sorted by AS scores. Such a difference between previous tasks and the current task should be experimentally verified (for instance, why can't the current task follow the same principle as the previous tasks to select the rehearsal samples); 2) In line 5 of Algo. 1, should it be noted that B^i_{t-1} is already sorted according to the AS?
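A minimal sketch of the selection rule described in point 1), as I read it (my reconstruction, not the authors' code):

```python
def select_rehearsal(samples, as_scores, k, current_task=False):
    # Sort by augmentation stability (AS). Previous tasks keep the k/2 lowest-
    # and k/2 highest-scoring samples (middle ones are discarded); the current
    # task is subsampled uniformly over the sorted stream.
    order = sorted(range(len(samples)), key=lambda i: as_scores[i])
    if current_task:
        step = max(1, len(order) // k)
        keep = order[::step][:k]
    else:
        keep = order[:k // 2] + order[-(k - k // 2):]
    return [samples[i] for i in keep]
```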
There are other concerns regarding the experimental settings and results: 1) As mentioned in Sec. 4.2, the mixup technique in LUMP is also adopted for the proposed method in the experiments on SplitCIFAR-100 and SplitTiny-ImageNet; there should be experimental results excluding such a mixup technique from the proposed method in order to demonstrate its pure contribution; 2) In order to better demonstrate the contribution of using ASR to update the replay buffer and its generalizability, there should be experiments replacing the rehearsal buffer update strategy in the related works (for both supervised continual learning and continual self-supervised learning baselines that also adopt a rehearsal buffer) by the proposed ASR update strategy.
The idea behind "augmentation stability of each sample is positively correlated with its relative position in corresponding category distribution" is not well proven or verified. Though Fig. 1 tries to illustrate this idea, it is not enough; can the authors provide a more solid discussion or even a theoretical proof for this idea if possible? Moreover, there is currently a hidden assumption that the distribution of each category is single-mode, but what if the category distribution is multi-modal (which is very likely in more complicated datasets)? Will AS still be effective as a proxy for the relative position in a category distribution?
From my own research experience, for supervised continual learning, different strategies of rehearsal example selection (e.g., random or uniform) do not make a significant difference to the final performance; can the authors provide more discussion on the impact of having both representative and discriminative rehearsal samples on the overall performance? | 1) As mentioned in Sec. 4.2, the mixup technique in LUMP is also adopted for the proposed method in the experiments on SplitCIFAR-100 and SplitTiny-ImageNet, there should be experimental results of excluding such mixup technique from the proposed method in order to demonstrate its pure contribution; |
NIPS_2018_232 | NIPS_2018 | - Strengths: the paper is well-written and well-organized. It clearly positions the main idea and proposed approach related to existing work and experimentally demonstrates the effectiveness of the proposed approach in comparison with the state-of-the-art. - Weaknesses: the research method is not very clearly described in the paper or in the abstract. The paper lacks a clear assessment of the validity of the experimental approach, the analysis, and the conclusions. Quality - Your definition of interpretable (human simulatable) focuses on to what extent a human can perform and describe the model calculations. This definition does not take into account our ability to make inferences or predictions about something as an indicator of our understanding of or our ability to interpret that something. Yet, regarding your approach, you state that you are "not trying to find causal structure in the data, but in the model's response" and that "we can freely manipulate the input and observe how the model response changes". Is your chosen definition of interpretability too narrow for the proposed approach? Clarity - Overall, the writing is well-organized, clear, and concise. - The abstract does a good job explaining the proposed idea but lacks description of how the idea was evaluated and what was the outcome. Minor language issues p. 95: "from from" -> "from" p. 110: "to to" -> "how to" p. 126: "as way" -> "as a way" p. 182 "can sorted" -> "can be sorted" p. 197: "on directly on" -> "directly on" p. 222: "where want" -> "where we want" p. 245: "as accurate" -> "as accurate as" Tab. 1: "square" -> "squared error" p. 323: "this are features" -> "this is features" Originality - the paper builds on recent work in IML and combines two separate lines of existing work; the work by Bloniarz et al. (2016) on supervised neighborhood selection for local linear modeling (denoted SILO) and the work by Kazemitabar et al. (2017) on feature selection (denoted DStump). The framing of the problem, combination of existing work, and empirical evaluation and analysis appear to be original contributions. Significance - the proposed method is compared to a suitable state-of-the-art IML approach (LIME) and outperforms it on seven out of eight data sets. - some concrete illustrations on how the proposed method makes explanations, from a user perspective, would likely make the paper more accessible for researchers and practitioners at the intersection between human-computer interaction and IML. You propose a "causal metric" and use it to demonstrate that your approach achieves "good local explanations" but from a user or human perspective it might be difficult to get convinced about the interpretability in this way only. - the experiments conducted demonstrate that the proposed method is indeed effective with respect to both accuracy and interpretability, at least for a significant majority of the studied datasets. - the paper points out two interesting directions for future work, which are likely to seed future research. | - The abstract does a good job explaining the proposed idea but lacks description of how the idea was evaluated and what was the outcome. Minor language issues p. |
NIPS_2016_69 | NIPS_2016 | - The paper is somewhat incremental. The developed model is a fairly straighforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: - What is the size of the beach/golf course/train station/hospital datasets? - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset. | - What is the size of the beach/golf course/train station/hospital datasets? |
xtOydkE1Ku | ICLR_2024 | - The core innovation claimed by the paper is the reduction in computational complexity through a two-stage solution, first estimating marginals and then dependencies. However, this approach isn't novel, as seen in references [1,2]. The paper would benefit from a clearer distinction of how its methodology differs significantly from these existing methods.
- The paper's primary contribution seems to be an incremental advancement in efficiency over the TACTiS approach. More substantial evidence or arguments are needed to establish this as a significant contribution to the field.
- When evaluating the model's efficacy, the improvement in terms of Negative Log-Likelihood (NLL) is notable. However, the Mean Continuous Ranked Probability Score (CRPS) metric indicates that these improvements are only marginal when compared to the TACTiS model.
[1] Andersen, Elisabeth Wreford. "Two-stage estimation in copula models used in family studies." Lifetime Data Analysis 11 (2005)
[2] Joe, Harry. "Asymptotic efficiency of the two-stage estimation method for copula-based models." Journal of Multivariate Analysis 94.2 (2005). | - The paper's primary contribution seems to be an incremental advancement in efficiency over the TACTiS approach. More substantial evidence or arguments are needed to establish this as a significant contribution to the field. |
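For context, the generic two-stage recipe cited in [1,2], fit the marginals first and then the dependence, can be illustrated with a Gaussian copula toy example (this is the textbook idea, not TACTiS's estimator):

```python
import numpy as np
from scipy.stats import norm, rankdata

def two_stage_gaussian_copula(X):
    # Stage 1: nonparametric marginals via ranks (probability integral transform).
    n, d = X.shape
    U = np.column_stack([rankdata(X[:, j]) / (n + 1) for j in range(d)])
    # Stage 2: dependence estimated from the correlation of the normal scores.
    Z = norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)
```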
NIPS_2018_76 | NIPS_2018 | - A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixel neighborhood. Also, PointNet uses a variant of STN in their network architecture. In this regard, the technical novelty seems limited in this work. Also, there are no empirical or conceptual comparisons to STN in this work, which is important. - There are no ablation studies on network architectures and also no ablation experiments on how the representative points are selected. - The runtime of the proposed network seems slow compared to several recent techniques. Even for just 1K-2K points, the network seem to be taking 0.2-0.3 seconds. How does the runtime scales with more points (say 100K to 1M points)? It would be good if authors also report relative runtime comparisons with existing techniques. Minor corrections: - Line 88: "lose" -> "loss". - line 135: "where K" -> "and K". Minor suggestions: - "PointCNN" is a very short non-informative title. It would be good to have a more informative title that represents the proposed technique. - In several places: "firstly" -> "first". - "D" is used to represent both dimensionality of points and dilation factor. Better to use different notation to avoid confusion. Review summary: - The proposed technique is sensible and the performance on different benchmarks is impressive. Missing comparisons to established STN technique (with both local and global transformations) makes this short of being a very good paper. After rebuttal and reviewer discussion: - I have the following minor concerns and reviewers only partially addressed them. 1. Explicit comparison with STN: Authors didn't explicitly compare their technique with STN. They compared with PointNet which uses STN. 2. No ablation studies on network architecture. 3. Runtimes are only reported for small point clouds (1024 points) but with bigger batch sizes. How does runtime scale with bigger point clouds? Authors did not provide new experiments to address the above concerns. They promised that a more comprehensive runtime comparison will be provided in the revision. Overall, the author response is not that satisfactory, but the positive aspects of this work make me recommend acceptance assuming that authors would update the paper with the changes promised in the rebuttal. Authors also agreed to change the tile to better reflect this work. | - "D" is used to represent both dimensionality of points and dilation factor. Better to use different notation to avoid confusion. Review summary: |
NIPS_2016_321 | NIPS_2016 | #ERROR! | - Since the paper mentions the possibility to use Chebyshev polynomials to achieve a speed-up, it would have been interesting to see a runtime comparison at test time. |
rwpv2kCt4X | EMNLP_2023 | The primary concerns include,
* The necessity of evaluating the degree of personalization is not clear to me.
- According to this paper, I only found three previous works that built personalized summarizers. And all of them use the current common metrics to measure performance. It seems these metrics are enough for this task.
- Let's assume the new evaluation metric is necessary. When we have the pairs of user profiles (such as user-expected summaries) and generated summaries for each user, why can we not use the average and variance of current metrics (such as Rouge) to show the degree of personalization? The average presents the performance of the summarizer to generate high-quality summaries, and variance can represent the performance of generating summaries close to each user. It is much easier to evaluate based on current metrics than new ones.
* The new proposed metric is only tested on a single dataset.
* There is no human judgment for this new metric. I notice the authors said, in Limitations, they are trying for the human evaluation. I think it is better to accept the next version with human judgment results.
* The metric has a high time cost due to the eight Jensen-Shannon divergence calculations.
Besides, the details of $rot()$ were missed in Line 184. | * The new proposed metric is only tested on a single dataset. |
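A minimal sketch of the mean/variance evaluation suggested in the bullets above, using a simple unigram-F1 stand-in for ROUGE (any off-the-shelf ROUGE implementation could be substituted):

```python
import numpy as np

def unigram_f1(hyp, ref):
    # Crude stand-in for ROUGE-1 F1, used only to illustrate the aggregation.
    h, r = hyp.lower().split(), ref.lower().split()
    overlap = sum(min(h.count(w), r.count(w)) for w in set(h))
    if overlap == 0:
        return 0.0
    p, rec = overlap / len(h), overlap / len(r)
    return 2 * p * rec / (p + rec)

def personalization_summary(per_user_pairs):
    # per_user_pairs: list of (generated_summary, user_expected_summary) pairs.
    scores = [unigram_f1(gen, ref) for gen, ref in per_user_pairs]
    return np.mean(scores), np.var(scores)   # average quality vs. spread across users
```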
ICLR_2022_2318 | ICLR_2022 | Weakness: 1. This paper is built on the SPAIR framework and focuses on point cloud data, which is somehow incremental. 2. There is no ablation study to validate the effectiveness of the proposed components and the loss. 3. It is hard to follow Sec. 3.2. The author may improve it and give more illustrations and examples. 4. It is unclear how the method can work and decompose a scene into different objects. I did not see how Chamfer Mixture loss can achieve this goal. More explanation should go here. | 3. It is hard to follow Sec. 3.2. The author may improve it and give more illustrations and examples. |
MY8SBpUece | ICLR_2024 | Weakness:
1. Based on my understanding, the core advantage of the proposed analysis is from the Hermite expansion of the activation layer, which can characterize higher-order nonlinearity and explain more non-linear behaviors than the orthogonal decomposition used in Ba et al. 2022. Please clarify this.
2. The required condition on the learning rate (scaling with the number of samples) is not scalable. I have never seen a step size grow with the sample size in practice, which will lead to an unreasonably large learning rate when learning on a large-scale dataset. I understand the authors need a way to precisely characterize the benefit of large learning rates, but this condition is itself not realistic. | 2. The required condition on the learning rate (scaling with the number of samples) is not scalable. I never see a step size grows with the sample size in practice, which will lead to unreasonably large learning rate when learning on large-scale dataset. I understand the authors need a way to precisely characterize the benefit of large learning rates, but this condition is not realistic itself. |
sjvz40tazX | ICLR_2025 | - One main weakness is in the baselines. There is no comparison to regular finetuned baselines for the ASR and spoken translation (ST) tasks, that make use of the 15 hours of transcribed and translated speech. While the main objective of this work is to promote multimodal in-context learning with text and speech, it would still be useful to see how existing pretrained ASR/ST models fare after finetuning with labeled Kalamang speech.
- For the human baseline, the human only closely follows a little more than 1 hour of speech recordings rather than the full 15 hours. This makes the human baseline considerably weaker than the model baseline, apart from all the other factors mentioned in Section 4.1 that make the human baseline weaker. In the abstract, the authors mention "already beating the 34.2% CER and 4.51 BLEU achieved by a human who learned Kalamang from the same resources" which is a bit misleading given the 1 hour vs. 15 hour disparity. The authors should consider softening this claim. | - For the human baseline, the human only closely follows a little more than 1 hour of speech recordings rather than the full 15 hours. This makes the human baseline considerably weaker than the model baseline, apart from all the other factors mentioned in Section 4.1 that make the human baseline weaker. In the abstract, the authors mention "already beating the 34.2% CER and 4.51 BLEU achieved by a human who learned Kalamang from the same resources" which is a bit misleading given the 1 hour vs. |
Fg04yPK0BH | ICLR_2025 | 1. There is some disconnection between Proposition 2.2 that the adjacency matrix of the line graph has the same support of some unitary matrix and the proposed method which finds the projection of a weighted adjacency matrix to the set of unitary matrices. It is unclear to me if the result in Proposition 2.3 has the same support as the adjacency matrix of the line graph.
2. The computational complexity of the proposed method is very high as it involves taking the square root of a matrix of size $2e \times 2e$, where $e$ is the number of edges in the graph. Though the block matrix structure can be exploited, there is no guarantee how many blocks can be found in the matrix. For example, it's likely that the proposed method cannot be run on all of the LRGB datasets.
3. The experiments are not very convincing. They only compared with the one-hop variant of some models that aim to solve the oversquashing problem. Note that the oversquashing problem is intrinsically multi-hop, and I don't see the rationale for weakening the baseline models to one-hop.
4. The preprocessing time that involves the computation of the block matrix is not reported.
5. It's unclear why there is a base layer GNN encoding in the proposed method. An ablation study on the necessity of the base layer GNN encoding would be helpful.
6. On the Peptide dataset, the GCN can easily achieve the accuracy of the proposed method by some proper data preprocessing or normalization. The authors should provide a comparison following [1].
[1] Tönshoff, Jan, et al. "Where did the gap go? Reassessing the long-range graph benchmark." arXiv preprint arXiv:2309.00367 (2023). | 5. It's unclear why there is a base layer GNN encoding in the proposed method. An ablation study on the necessity of the base layer GNN encoding would be helpful. |
NIPS_2019_360 | NIPS_2019 | weakness (since unfortunately there are other confounding factors). Further, orthogonally to the accuracy results, it is an interesting finding if standard approaches indeed suffer from this and the proposed method provides a remedy. I would therefore focus on these qualitative results more, and explain in the main text (not just the appendix) exactly how those visualization are created, and show those results for various models. 2) Somewhat related to the previous point: Pure metric-based models like Prototypical Networks lack an explicit mechanism for adaptation to each task at hand and it therefore seems plausible that they indeed suffer from the identified issue. However, it is less clear whether (or to what extent) models that do perform task-specific adaptation run the same danger. Intuitively, it seems that task adaptation also constitutes a mechanism for modifying the embedding function so that it favours the identification of objects that are targets of the associated classification task. By task adaptation here Iâm referring either to gradient-based adaptation (as in MAML and variants) or amortized conditioning-based adaptation (as in TADAM for example). Therefore, it would be very interesting to empirically compare the proposed method to these other ones not only in terms of classification accuracy but also qualitatively via visualizations as in Figure 1 that show the areas of the image that a model focuses more for making classification decisions. 3) Suggestion for the transductive framework: In Equation 8, it might be useful to incorporate the unlabeled examples in a weighted fashion instead of trusting that every example whose confidence surpasses a manually-set threshold can safely contribute to the prototype of the class that it is predicted to belong to. Specifically, the contribution of an unlabeled example to the updated class prototype can be weighted by the cosine similarity between that unlabeled example and that prototype (normalized across classes) and maybe additionally by the confidence c_b^q. This might slightly relieve the need to find the perfect threshold, since even if it is not conservative enough, a query example will be prohibited by modifying a prototype too much. An example of this is in Ren et al. [1] when computing refined prototypes by including unlabeled examples. 4) It seems that the weakness that this method is addressing would be more prominent in images comprised of multiple objects, or cluttered scenes. It would be very interesting to compare this approach to previous ones on few-shot classification on such a dataset! 5) For more easily assessing the degree of apples-to-applesness of the various comparisons in the tables, it would be useful to note which of the listed methods use data augmentations (as until recently this was not common practice for few-shot classification), what architecture they use, and what objective (most are episodic only but I think TADAM also performs joint training as the proposed method). 6) Another difference between the proposed approach and previous Prototypical Network-like methods is that the distance comparisons that inform the classification decisions are done in a feature-wise manner in this work. Specifically, when comparing embeddings a and b, for each spatial location, the distance between the feature vectors of a and b at that location is computed. 
The final estimate of the distance between a and b is obtained by aggregating those feature-wise distance estimates over all spatial locations. In contrast, usually the output of the last embedding layer is reshaped into a single vector (of shape channels x height x width) and distance comparisons of examples are made by directly comparing these vectors. It would therefore be useful to perform another ablation where a standard Prototypical Network is modified to perform the same type of distance comparison as their method. 7) Similarly to how the proposed transductive method was applied to other models, it would be nice to see results where the proposed joint training is also applied to other models, since this is orthogonal to the choice of the meta-learner too. References [1] Meta-Learning for Semi-Supervised Few-shot Classification. Ren et al. ICLR 2018. | 4) It seems that the weakness that this method is addressing would be more prominent in images comprised of multiple objects, or cluttered scenes. It would be very interesting to compare this approach to previous ones on few-shot classification on such a dataset! |
NIPS_2019_961 | NIPS_2019 | - It would be good to better justify and understand the Bernoulli-Poisson link. Why is the number of layers used in the link in the Poisson part? The motivation for the original paper [40] seems to be that one can capture communities and the sum in the exponential is over r_k coefficients where each coefficient corresponds to a community. In this case the sum is over layers. How do the intuitions from that work transfer here? In what way do the communities correspond to layers in the encoder? It would be nice to better understand this. Missing Baselines - It would be instructive to vary the number of layers of processing for the representation during inference and analyze how that affects the representations and performance on downstream tasks. - Can we run VGAE with a vamp prior to more accurately match the doubly stochastic construction in this work? That would help inform if the benefits are coming from a better generative model or better inference due to doubly semi-implicit variational inference. Minor Points - Figure 3: It might be nice to keep the generative model fixed and then optimize only the inference part of the model, parameterizing it as either SIG-VAE or VGAE to compare the representations. It's impossible to know / compare representations when the underlying generative models are also potentially different. | - Can we run VGAE with a vamp prior to more accurately match the doubly stochastic construction in this work? That would help inform if the benefits are coming from a better generative model or better inference due to doubly semi-implicit variational inference. Minor Points - Figure 3: It might be nice to keep the generative model fixed and then optimize only the inference part of the model, parameterizing it as either SIG-VAE or VGAE to compare the representations. It's impossible to know / compare representations when the underlying generative models are also potentially different.
NIPS_2022_2814 | NIPS_2022 | 1. A solution towards removing the position encoding is not discussed. 2. Importance of quantifying the strength of PPP is not clear to me. 3. Authors state that reliable PPP metrics are important for understanding PPP effects in different tasks. While this point is surely intriguing, such an explanation or understanding is not explicitly given in the article. Can the authors explicitly explain what type of understanding one reaches by looking at the PPP maps? 4. The conclusion of the article remains a bit vague. While the proposed metrics have some more desirable attributes, value of these attributes for applications is unclear to me. How will this actually improve the practice or our understanding? | 3. Authors state that reliable PPP metrics are important for understanding PPP effects in different tasks. While this point is surely intriguing, such an explanation or understanding is not explicitly given in the article. Can the authors explicitly explain what type of understanding one reaches by looking at the PPP maps? |
ARR_2022_361_review | ARR_2022 | - It lacks the novelty that I would expect from a long research paper. It reads more like a report than a research paper. - The paper could be restructured and some parts could be better written. Please see comments and suggestions for more details. - Some contributions are not clear, e.g., which datasets were available, which are released (what kind of postprocessing is done)? It is hard to digest this from the text. Some techniques that could be a contribution remain somewhat hidden in lines 425-462, and some are not clear. - Most of the evaluation tasks are already well-known, but have been executed separately. Morpheme-based tasks are more relevant to the multilingual probing literature (e.g., LINSPECTOR: Multilingual Probing Tasks for Word Representations), which is not mentioned throughout the paper.
- 091: I don't understand how this is relevant to the paper. It is mentioned a couple of times however I haven't seen such an effort, maybe I'm missing something? - 096: typo: MLR -> MRL - 121: Not clear if you propose these tasks or only combine them into a pipeline? You could write your contributions as bullet points. - 156: "objective objectives" sounds weird.
- Section 2 can be condensed quite a lot. 133-156 is already known, so too long. I'd be more interested to see a review on multilingual benchmarks and probing tasks, especially for morphologically rich languages. I'd rather see more details of the other Hebrew models. - 174-176: Again sounds like the authors take GLUE (or similar benchmark) and translate the tasks into Hebrew. - 178:181: But there is a rich literature on morphological and syntactic probing tasks (e.g., LINSPECTOR: Multilingual Probing Tasks for Word Representations) - Throughout the paper, the authors use the term NLU, however I'm not sure whether some of the lower level tasks count as NLU or syntactic tasks. - 254: Oscar is introduced here but there is no description until line 276.
- 313 - footnote 2: What's the postprocessing effort here, not clear. - 320-346: This could be a table.
- 330: Introduce SPMR, what does it stand for?
- 332: which UD version?
- Table 3: The English transcriptions are needed. - footnote 4,5: what's the added value (if any)? Why is 5 anonymous? I didn't understand.
- 428-448: I didn't understand how you feed the subword or token representation to a char-LSTM? Doesn't your model need the "char" representation?
- 636-638: Language-agnostic is overclaiming here. - Why Aleph? | - 313 - footnote 2: What's the postprocessing effort here, not clear.
ARR_2022_286_review | ARR_2022 | While there exist many papers discussing the softmax bottleneck or the stolen probability problem, similar to what the authors found, I personally have not found enough evidence in my work that the problem is really severe.
After all, there are intrinsic uncertainties in the empirical distributions of the training data, and it is quite natural for us to use a smaller hidden dimension size than the vocabulary size, because after all, we call them "word embeddings" for a reason.
I guess what I mean to say here is that the problem is of limited interest to me (which says nothing about a broader audience) because the results agree very well with my expectations.
This is definitely not against the authors because they did a good job in showing this via their algorithm and the concrete experiments.
I feel like the authors could mention and expand on the implications when beam search is used.
Because in reality, especially since many MT models are considered in the paper, greedy search is seldom used.
In other words, "even if greedy search is used, SPP is not a big deal, let alone that in reality we use beam search", something like that.
Compared to the main text, I am personally more interested in the point brought up at L595.
What implications are there for the training of our models?
How does the gradient search algorithm decide on where to put the word vectors and hidden state vector?
Is there anything we, i.e. the trainers of the NNs, can do to make it easier for the NNs?
Small issues: - L006, as later written in the main text, "thousands" is not accurate here. Maybe add "on the subword level"?
- L010, be predicted - L034, personally, I think "expressiveness" is more commonly used, this happens elsewhere in the paper as well.
- L082, autoencoder - L104, greater than or equal to | - L006, as later written in the main text, "thousands" is not accurate here. Maybe add "on the subword level"? |
ICLR_2023_3208 | ICLR_2023 | 1. The typesetting in some places is out of order, such as equations (23), (25), and (31). 2. On page 4, "A4 bounds the degree of non-stationarity between consecutive iterations": why does this assumption hold? 3. The authors should add more description of the contribution of this paper. | 3. The authors should add more description of the contribution of this paper.
3vXpZpOn29 | ICLR_2025 | It is unclear whether linear datamodels extend to other kinds of tasks, e.g. language modeling or regression problems. I believe this to be a major weakness of the paper. While linear datamodels lead to simple algorithms in this paper, the previous work [1] does not have a good argument for why linear datamodels work [1; Section 7.2]---in fact Figure 6 of [1] displays imperfect matching using linear datamodels. It'd be useful to mention this limitation in this manuscript as well, and discuss the limitation's impact on machine learning.
# Suggestions:
1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2].
2. Line 176. $\hat{f}$ should have output range in $\mathbb{R}^k$ since the range of $f_x$ is in $\mathbb{R}^k$.
3. Line 182. "show" -> "empirically show".
4. Definition 3. Write safe, $S_F$, and input $x$ explicitly in KLoM, otherwise KLoM$(\mathcal{U})$ looks like KLoM of the unlearning function across _all_ safe functions and inputs. I'm curious why the authors wrote KLoM$(\mathcal{U})$.
5. Add a Limitations section.
[1] Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., & Madry, A. (2022). Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622.
[2] Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211-407. | 1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2]. |
pwW807WJ9G | ICLR_2024 | 1. Although the authors derive PAC-Bayesian bound for GNNs in the transductive setting and show how the interplay between training and testing sets influences the generalization ability, I fail to see the strong connection between the theoretical analysis and the proposed method. The proposed method seems to simply adopt the idea of the self-attention mechanism from the transformer and apply it to the graph. It's not clear to me how the proposed method enhances the generalization for the distant nodes.
2. My major concern about the proposed method is the graph partition as partitioning the graph usually leads to information loss. Though node2vec is used for positional encoding purposes, it only encodes the local topological structure, and it cannot compensate for the lost information between different subgraphs. Based on algorithm 1 in Appendix E, there is no information exchange between different subgraphs. The nodes in a subgraph can only receive the information from other nodes within this subgraph and these nodes are isolated from the nodes from other subgraphs. The performance seems to highly depend on the quality of the graph partition algorithms. However, it's unclear whether different graph partitions will influence the performance of the proposed method or not.
3. Some experimental setups are not quite clear. See questions below. | 1. Although the authors derive PAC-Bayesian bound for GNNs in the transductive setting and show how the interplay between training and testing sets influences the generalization ability, I fail to see the strong connection between the theoretical analysis and the proposed method. The proposed method seems to simply adopt the idea of the self-attention mechanism from the transformer and apply it to the graph. It's not clear to me how the proposed method enhances the generalization for the distant nodes. |
4A5D1nsdtj | ICLR_2024 | 1. The motivation for the choice of $\theta = \frac{\pi}{2}(1-h)$ from Theorem 3 is not very straightforward and clear. The paper states that this choice is empirical, but there is very little given in terms of motivation for this exact form.
2. For this method, the knowledge of the homophily ratio seems to be important. In many practical scenarios, this may not be possible to be estimated accurately and even approximations could be difficult. No ablation study is presented showing the sensitivity of this model to the accurate knowledge of the homophily ratio.
3. The HetFilter seems to degrade rapidly past h=0.3 whereas OrtFilter is a lot more graceful to the varying homophily ratio. It is unclear whether one would consider the presented fluctuations as inferior to the presented UniBasis. For UniBasis, in the region of h >= 0.3, the choice of tau should become extremely important (as is evident from Figure 4, where lower tau values can reduce performance on Cora by about 20 percentage points). | 1. The motivation for the choice of $\theta = \frac{\pi}{2}(1-h)$ from Theorem 3 is not very straightforward and clear. The paper states that this choice is empirical, but there is very little given in terms of motivation for this exact form.
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. 
The results on the synthetic experiment are also interesting. I have three main concerns about the paper. |
Kjs0mpGJwb | EMNLP_2023 | The experiments are somewhat weak.
1) The main contribution of this paper lies in the structure-aware encoder-based model for seed lexicon induction, so there should be an experiment to study the quality of the seed lexicon.
2) While the paper focuses on bilingual lexicon induction, it would be beneficial to include a downstream task, such as cross-lingual Natural Language Inference (NLI), to demonstrate the potential impact of the proposed method on downstream applications. This would provide further insights into the effectiveness of the approach beyond the specific lexicon induction task. | 1) The main contribution of this paper lies in the structure-aware encoder-based model for seed lexicon induction, so there should be an experiment to study the quality of the seed lexicon.
ICLR_2021_2953 | ICLR_2021 | )
The idea of incorporating the training dynamics to the Bayesian optimization tuning process to construct online BO is novel.
The experiments are conducted on two complex datasets CIFAR-10, CIFAR-100. Weaknesses:
No deep analysis is conducted to understand why the proposed method can lead to better generalization.
I am unclear about several technical details: 1) What is the x-axis in Figures 1 & 2? Is it the number of epochs? 2) How many experiments are repeated in Figures 1&2 and Table 1? 3) How to set the search space S for GSdyn? In your experiments in Section 5, how do the authors set the search space S? 4) What is the objective function for GSdyn and FABOLAS? In Section 5.1, it is mentioned that the DNN’s accuracy is the objective function, but which accuracy? The accuracy on the validation dataset or on the test dataset? 5) To evaluate BO, the standard practice is to find the hyperparameter set with the best accuracy on the validation dataset. Why, in this work, is the accuracy on the test dataset (but not the validation dataset) compared between baseline methods (Figures 1 & 2)? And which accuracies are shown in Table 1? I understand that GSdyn leads to good generalization, but the accuracy on the validation dataset also needs to be shown as it is the objective of the vanilla BO. 6) The experiments could include different hyper-parameters, and more of them.
Minor comments:
In the figures, the labels of each axis need to be added.
Third bullet in the summarized contributions in Section 1: Beyes --> Bayes
Line 5 of Algorithm 1: Should be either \sigma_0 or \sigma, not a mix of them?
Line 5 of Algorithm 2: What is Sample function? I understand it is the acquisition function but a rigorous formula of the acquisition function needs to be provided. | 2) How many experiments are repeated in Figures 1&2 and Table 1? |
NIPS_2022_1292 | NIPS_2022 | 1 Discuss whether the proposed method can be applied to a discrete distribution with infinite support such as a Poisson distribution.
2 The authors should discuss the iteration cost (computational budget) of the proposed method. It would be great if the authors discuss the iteration cost of all related methods, including baseline methods.
3 The running time instead of the number of training steps should be included in at least one of the plots. | 2 The authors should discuss the iteration cost (computational budget) of the proposed method. It would be great if the authors discuss the iteration cost of all related methods, including baseline methods.
ICLR_2022_562 | ICLR_2022 | 1. The main weakness of this paper is the experiments section. The results are presented only on the CIFAR-10 dataset and do not consider many other datasets from Federated learning benchmarks (e.g., LEAF https://leaf.cmu.edu/). The authors should see relevant works like (FedProx https://arxiv.org/abs/1812.06127) and (FedMAX, https://arxiv.org/abs/2004.03657 (ECML 2020)) for details on different datasets and model types. If the experimental evaluation was comprehensive enough, this would be a very good paper (given the interesting and important problem it addresses). 2. One other thing (although this is not the main focus of this paper): the authors should provide comparisons between strategies that result in fast convergence (without sparsity) vs. sparse methods. For example, do non-sparse, fast convergence methods (like FedProx, FedMAX, and others) result in a small enough number of epochs compared to sparse methods? Can the fast convergence methods be augmented with sparsity ideas successfully without resulting in significant loss of accuracy? Some discussion and possibly performance numbers are needed here. | 1. The main weakness of this paper is the experiments section. The results are presented only on the CIFAR-10 dataset and do not consider many other datasets from Federated learning benchmarks (e.g., LEAF https://leaf.cmu.edu/). The authors should see relevant works like (FedProx https://arxiv.org/abs/1812.06127) and (FedMAX, https://arxiv.org/abs/2004.03657 (ECML 2020)) for details on different datasets and model types. If the experimental evaluation was comprehensive enough, this would be a very good paper (given the interesting and important problem it addresses).
NIPS_2021_2308 | NIPS_2021 | Writing
The presentation of the overall architecture can be improved. Possible suggestions are to give an overview of the method in the beginning of Section 3 that the reader can follow, or to include notation in the figures that corresponds with what’s presented in the text.
More detail about mesh recovery procedure would be a good addition to the supplement. Currently it’s explained at a high level in the main text.
The proposed technique requires known camera intrinsics and camera pose. This potentially limits its applicability to highly controlled settings where such information is known.
Transformer-based techniques are known to be computationally intensive to train, and the current approach requires 8 GPUs for training. It would be good if the authors can also comment on the total training time of the model.
Update After Discussion Period
Thank you to the authors for engaging in thoughtful discussions with the reviewers. The authors addressed my concerns and no issues came up in the overall discussion that would cause me to reduce my rating. I therefore keep my initial rating: 7 - good paper, accept.
The authors have adequately addressed the limitations and potential negative societal impact of their work | 7 - good paper, accept. The authors have adequately addressed the limitations and potential negative societal impact of their work |
NIPS_2019_819 | NIPS_2019 | Weakness: Due to the intractbility of the MMD DRO problem, the submission did not find an exact reformulation as much other literature in DRO did for other probability metrics. Instead, the author provides several layers of approximation. The reason why I emphasize the importance of a tight bound, if not an exact reformulation, is that one of the major criticism about (distributionally) robust optimization is that it is sometimes too conservative, and thus a loose upper bound might not be sufficient to mitigate the over-conservativeness and demonstrate the power of distributionally robust optimization. When a new distance is introduced into the DRO framework, a natural question is why it should be used compared with other existing approaches. I hope there will be a more fair comparision in the camera-ready version. =============== 1. The study of MMD DRO is mostly motivated by the poor out-of-sample performance of existing phi-divergence and Wasserstein uncertainty sets. However, I don't believe this is indeed the case. For example, Namkoong and Duchi (2016), and Blanchet, Kang, and Murthy (2016) show the dimension-independent bound 1/\sqrt{n} for a broad class of objective functions in the case of phi-divergence and Wasserstein metric respectively. They didn't require the population distribution to be within the uncertainty set, and in fact, such a requirement is way too conservative and it is exactly what they wanted to avoid. 2. Unlike phi-divergence or Wasserstein uncertainty sets, MMD DRO seems not enjoy a tractable exact equivalent reformulation, which seems to be a severe drawback to me. The upper bound provided in Theorem 3.1 is crude especially because it drops the nonnegative constraint on the distribution, and further approximation is still needed even applied to a simple kernel ridge regression problem. Moreover, it seems restrictive to assume the loss \ell_f belongs to the RKHS as already pointed out by the authors. 3. I am confused about the statement in Theorem 5.1, as it might indicate some disadvantage of MMD DRO, as it provides a more conservative upper bound than the variance regularized problem. 4. Given the intractability of the MMD DRO and several layers of approximation, the numerical experiment in Section 6 is insufficient to demonstrate the usefulness of the new framework. References: Namkoong, H. and Duchi, J.C., 2017. Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems (pp. 2971-2980). Blanchet, J., Kang, Y. and Murthy, K., 2016. Robust wasserstein profile inference and applications to machine learning. arXiv preprint arXiv:1610.05627. | 3. I am confused about the statement in Theorem 5.1, as it might indicate some disadvantage of MMD DRO, as it provides a more conservative upper bound than the variance regularized problem. |
NIPS_2022_2605 | NIPS_2022 | Weakness: 1) At the beginning of the paper, the authors often mention that previous works lack flexibility compared to their work. It is not clear what this means, which makes it harder to understand their explanation. 2) The choice of 20 distribution sets is not clear. Can we control the number of distribution sets for each class? What if you select only a few distribution sets? 3) The role of the Transfer Matrix T is not discussed or elaborated. 4) It is not clear how to form the target distribution H. How do you formulate H? 5) There is no discussion of how to generate x_H from H or what x_H consists of. 6) Despite the significant improvement, it is not clear how the proposed method boosts the transferability of the adversarial examples.
As per my understanding, the authors briefly addressed the limitations and negative impact in their work. | 5) There is no discussion of how to generate x_H from H or what x_H consists of.
UaZe4SwQF2 | EMNLP_2023 | - This paper is a bit difficult to follow. There are some unclear statements, such as motivation.
- In the introduction, the summarized highlights need to be adequately elaborated, and the relevant research content of this paper needs to be detailed.
- No new evaluation metrics are proposed. Only existing evaluation metrics are linearly combined. In the experimental analysis section, there needed to be an in-depth exploration of the reasons for these experimental results.
- A case study should be added.
- What are the advantages of your method compared to other evaluation metrics? This needs to be emphasized in the motivation.
- How do you evaluate the significance of model structure or metrics on the gender bias encoding of the model? Because you only conduct experiments in the FIBER model. Furthermore, you should conduct generalization experiments on the CLIP model or other models.
- The citation format is chaotic in the paper.
- There are some grammar mistakes in this paper, which could be found in “Typos, Grammar, Style, and Presentation Improvements”. | - No new evaluation metrics are proposed. Only existing evaluation metrics are linearly combined. In the experimental analysis section, there needed to be an in-depth exploration of the reasons for these experimental results. |
NIPS_2019_286 | NIPS_2019 | 1.) Lines 8, 56, 70, 93: Usage of the word "equivalent". I would suggest a more cautious usage of this word, especially if the equivalence is not verified. 2.) A differentiation between the ultrametric d and the ultrametric u would make their different usages clearer. 3.) Line 186 ff.: A usage of the subdominant ultrametric for the cluster-size regularization would make the algorithms part more consistent with the following considerations in this paper. 4) The paper is not sufficiently clear in some aspects (see below for a list of questions). Overall, I have the impression that the weaknesses could be fixed by the final paper submission. | 3.) Line 186 ff.: A usage of the subdominant ultrametric for the cluster-size regularization would make the algorithms part more consistent with the following considerations in this paper.
ICLR_2022_3058 | ICLR_2022 | . At the end of section 2, the authors tried to explain noisy signals are harmful for the OOD detection. It's obvious that with more independent units the variance of the output is higher. But this affects both ID and OOD data. The explanation is not clear.
. The analysis in section 6 is kind of superficial. 1) Lemma 2: the conclusion is under the assumption that the mean is approximately the same. However, as DICE is not designed to guarantee this assumption, the conclusion in Lemma 2 may not apply to DICE. 2) mean of output: the scoring function used for OOD detection is max_cf_c(x). The difference of mean is not directly related to the detection scoring, so the associated observation may not be used to explain why the algorithm works.
. Overall, it is not well explained why the proposed algorithm would work for some OOD detection. 1) From the observation, although DICE can reduce the variance of both ID and OOD data, the effect on OOD seems more significant. This may be due to the large difference between ID and OOD. Therefore, it would be interesting to examine the performance of DICE by varying the likeness between OOD and ID. 2) From Figure 4, the range of ID and OOD seems not to be changed much by sparsification. Similarly, Lemma 2 requires approximately identical mean as the assumption. These conditions are crucial for DICE, but are not well discussed, e.g., how to ensure that DICE meets these conditions.
. In the experiment, the OOD samples generally are significantly different from ID samples (thus less challenging). As pointed out in the above comment, it would be interesting to compare the performance of DICE by varying the OODness of test samples. For example, the ID data is 8 from MNIST, OOD datasets can be 1) 3 from MNIST; 2) 1 from MNIST; 3) FMNIST; and 4) CIFAR-10.
. The comparison between DICE and generative-based model (Table 3) is unfair as DICE is supervised while the benchmarks are unsupervised. It's not surprising that DICE is better. The authors should add comments on that.
. It is claimed in the experimental part that the in-distribution classification accuracy can be maintained under DICE. Only the result on CIFAR-10 is shown. Please provide more results to support the conclusion if possible.
. Instead of using directed sparsification, one possible solution may be just using a simpler network. Of course this would change the original network architecture. But as one part of the ablation study, it would be interesting to know whether a simpler network would be more beneficial for the OOD detection. | 2) From Figure 4, the range of ID and OOD seems not to be changed much by sparsification. Similarly, Lemma 2 requires approximately identical mean as the assumption. These conditions are crucial for DICE, but are not well discussed, e.g., how to ensure that DICE meets these conditions.
NIPS_2022_2741 | NIPS_2022 | The paper is not carefully written. Line 134 contains a missing reference. Line 260 uses an abbreviation TDR (I believe it is a typo of TRE?), which is not defined anywhere.
The authors analyze a different telescoping density-ratio estimator, rather than the one in [2], and acknowledge that the techniques used for the chain defined in the paper do not apply to the estimator of interest in [2]. This means that the analysis is only for the problem defined in this paper. The authors should provide some evidence that the chain defined in this paper has a superior or comparable performance to the estimator of interest in [2].
[1]Kato, M. and Teshima, T. Non-negative bregman divergence minimization for deep direct density ratio estimation. In International Conference on Machine Learning, pp. 5320 – 5333, 2021.
[2] Rhodes, B., Xu, K., and Gutmann, M. U. Telescoping density-ratio estimation. In Advances in Neural Information Processing Systems, 2020. | 5320 – 5333, 2021. [2] Rhodes, B., Xu, K., and Gutmann, M. U. Telescoping density-ratio estimation. In Advances in Neural Information Processing Systems, 2020. |
ICLR_2022_445 | ICLR_2022 | Weakness: Method:
1. Novelty:
Incremental Contribution: The proposed LaMOO is a direct generalization from the LaMCTS method to multi-objective optimization (MOO). The novel part is to use dominance number as criteria for search space partition and hypervolume for promising region selection. These are all straightforward generalizations for MOO. The contribution of this work is somewhat incremental along the line of LaNAS, LaMCTS, and LaP^3.
Missing Closely Related Approaches: This work claims the proposed approach to learn the promising region is fundamentally different from the previous works. However, many classification-based search space partition methods have been proposed in the machine learning community, see [1][2][3] (classification + random sampling). (Tree-based) space partition methods have been widely used for black-box optimization [4][5][6]. In addition, there are also different works on classification-based MOO [7] (SVM + NSGA-II/MO-CMA-ES) [8] (Ordinal SVM + NSGA-II) [9].
2. Theoretical Analysis:
A large part of this work is on the theoretical understanding for space partition and LaMCTS. However, the analysis is mostly for single-objective optimization, and the extension to multi-objective optimization is much less promising.
3. Why LaMOO Works:
Further discussion is needed to clarify the properties of LaMOO.
Dominance-based Approach for Many Objective Optimization: LaMOO uses the dominance number as the split criteria to train the SVM models and partition the search space. However, the dominance-based method is typically not good for many objective optimization due to the lack of dominance pressure (e.g., all solutions are non-dominated with each other, and all have the same dominated number). Why is LaMOO still good for many objective optimization?
Combination with Multi-Objective Bayesian Optimization (MOBO): It is straightforward to see the benefit of using LaMOO with model-free optimization (e.g., NSGA-II and MO-CMA-ES). However, it is not so clear to understand why it also works for MOBO (e.g., qEHVI). The qEHVI approach already builds (global) Gaussian process models to approximate each objective function, and uses hypervolume-based criteria to select the most promising solution(s) (e.g., maximizing the expected hypervolume improvement) for evaluation. Therefore, its selected solution(s) should be already on the approximate Pareto front without the LaMOO approach. Is the good performance due to only use solutions in the promising region to build the models (but I think GP would work well with all data as in the setting considered in this work)? Or because LaMOO restricts the search in the region close to the current best non-dominated solutions (then what is the relation to the trust-region approach [10])?
Exploitation v.s. Exploration: With LaMOO, the solutions can only be selected from the most promising region (e.g., around the current Pareto front), which is good for exploitation. However, will this approach lead to worse overall performance due to the lack of exploration (e.g., cannot find more diverse Pareto solutions far from the current Pareto front)?
4. Time Complexity:
What is the time complexity of the proposed algorithm? In each step of LaMOO, it has to repeatedly calculate the hypervolume of different regions for promising region selection. However, the computation of hypervolume could be time-consuming, especially for problems with many objectives (e.g., >3). Would it make LaMOO impractical for those problems?
5. Inaccurate Description for MOO Methods:
CMA-ES: CMA-ES is a widely-used single objective optimization algorithm [11]. The multi-objective version proposed in (Igel et al., 2007a) is usually called MO-CMA-ES. It is also confusing why most citation for the MO-CMA-ES (in the main paper and Table 1) is for the steady-state updated version (Igel et al., 2007b) but not for the original paper (Igel et al., 2007a).
ParEGO: The seminal algorithm proposed in Knowles (2006) is called ParEGO and the qParEGO is a parallel extension recently proposed in Daulton et al. (2020). It is not suitable to refer the algorithm in Knowles (2006) as qParEGO in Table 1 and the main text.
MOEA/D: In my understanding, MOEA/D is suitable for many objective optimization (objectives > 3), see its performance in the NSGA-III paper (Deb & Jain, 2014), while the main challenge is how to specify the weight vector for a new problem with unknown Pareto front as correctly pointed out in this work.
Hypervolume-based Method: This work indicates the indicator-based method is better for many objective optimization. However, the time complexity and expensive calculation could make the hypervolume-based method impractical for many-objective optimization. Experiment:
6. Missing Experimental Setting:
Many important experiment settings are missing in this work, such as the number of initial solutions for MOBO (and its generation method), the number of batched solutions for MOBO (e.g., q), the reference point for hypervolume (during the optimization, and for the final evaluation), the ground truth Pareto front used for calculating the log hypervolume difference for real-world problems (e.g., Nasbench 201).
7. Comparison to Model-Free Evolutionary Algorithm:
It is reasonable that LaMOO can improve the MO-CMA-ES performance since it builds extra models to allocate computation to the most promising region. However, in my understanding, the model-free evolutionary algorithms are not designed for expensive optimization, and their typical use case is with a large number of cheap evaluations with a fast run time. It is more interesting to directly compare LaMOO with other model-based methods (e.g., MO-CMA-ES with GP models).
8. MOBO Performance:
What are the hyperparameters for qEHVI? It seems its performance on the VehicleSafety problem is worse than that reported in the original paper, Daulton et al. (2020).
9. Wall Clock Run Time:
Please report the wall clock run time for both LaMOO and other model-free/model-based algorithms, as in Daulton et al. (2020).
Minor Issues:
When citing multiple works, please put them in chronological order. Reference:
[1] Hashimoto, Tatsunori, Steve Yadlowsky, and John Duchi. Derivative free optimization via repeated classification. AISTATS 2018.
[2] Kumar, Manoj, George E. Dahl, Vijay Vasudevan, and Mohammad Norouzi. Parallel architecture and hyperparameter search via successive halving and classification. arXiv:1805.10255.
[3] Yu, Yang, Hong Qian, and Yi-Qi Hu. Derivative-free optimization via classification. AAAI 2016.
[4] Munos, Rémi. Optimistic optimization of a deterministic function without the knowledge of its smoothness. NeurIPS 2011.
[5] Ziyu Wang, Babak Shakibi, Lin Jin, and Nando de Freitas. Bayesian multi-scale optimistic optimization. AISTATS 2014.
[6] Kenji Kawaguchi, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Bayesian optimization with exponential convergence. NeurIPS 2015.
[7] Loshchilov, Ilya, Marc Schoenauer, and Michèle Sebag. A mono surrogate for multiobjective optimization. In Proceedings of the 12th annual conference on Genetic and evolutionary computation, 2010.
[8] Seah, Chun-Wei, Yew-Soon Ong, Ivor W. Tsang, and Siwei Jiang. Pareto rank learning in multi-objective evolutionary algorithms. In 2012 IEEE Congress on Evolutionary Computation, 2012.
[9] Pan, Linqiang, Cheng He, Ye Tian, Handing Wang, Xingyi Zhang, and Yaochu Jin. A classification-based surrogate-assisted evolutionary algorithm for expensive many-objective optimization. IEEE Transactions on Evolutionary Computation 2018.
[10] Daulton, Samuel, David Eriksson, Maximilian Balandat, and Eytan Bakshy. Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces. arXiv:2109.10964, 2021.
[11] Hansen, Nikolaus, and Andreas Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation 2001.
[12] Ishibuchi, Hisao, Yu Setoguchi, Hiroyuki Masuda, and Yusuke Nojima. Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes. IEEE Transactions on Evolutionary Computation 21, no. 2 (2016): 169-190. | 4. Time Complexity: What is the time complexity of the proposed algorithm? In each step of LaMOO, it has to repeatedly calculate the hypervolume of different regions for promising region selection. However, the computation of hypervolume could be time-consuming, especially for problems with many objectives (e.g., >3). Would it make LaMOO impractical for those problems? |
ICLR_2021_1014 | ICLR_2021 | - I am not an expert in the area of pruning. I think this motivation is quite good but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on the target device, the memory consumption during inference time and the actual network size. - The performance is only compared with a few methods. And the proposed method is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation.
I am willing to change my rating according to the feedback from the authors and the comments from other reviewers. | - The performance is only compared with a few methods. And the proposed method is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation. I am willing to change my rating according to the feedback from the authors and the comments from other reviewers.
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. In my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing “non-edge contractions” (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, “Graph Homomorphism Convolution”, ICML’20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well.
Additional comments:
I think that something is missing from Proposition 3. In particular, if I understood correctly the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be equally strong to 2-WL (which by the way is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that graphs are of unbounded size?
Moreover, there is a detail in the proof of Proposition 3 that I am not sure is that obvious. I understand why the subgraph counts of $C_{m+1}$ are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts.
Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?)
In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs. Could the authors comment on this choice?
In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines.
The method of Bouritsas et al., arxiv’20 is called “Graph Substructure Networks” (instead of “Structure”). I encourage the authors to correct this.
After rebuttal
The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements on the presentation, I hope will increase the impact of the paper.
Limitations: The limitations are clearly stated in section 1, by mainly referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting.
Negative societal impact: A satisfactory discussion is included in the end of the experimental section. | - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. |
ARR_2022_248_review | ARR_2022 | The word error rates in Fig. 3 are a bit confusing. The description in the paper uses word error rate (WER) all the time. But in Fig. 3, the authors used (100 - WER) as the vertical axis. The motivation for doing this is not clear. Note: the WER can exceed 100% in some cases.
Some grammar issues need to be reviewed. For example, at line 394 in section 4.2: Like the baseline models Shon et al. (2021), we to train on the finer label set (18 entity tags) and evaluate on the combined version (7 entity tags) -> "we to ...." | 3. The description in the paper uses word error rate (WER) all the time. But in Fig. 3, the authors used (100 - WER) as the vertical axis. The motivation for doing this is not clear. Note: the WER can exceed 100% in some cases. Some grammar issues need to be reviewed. For example, at line 394 in section 4.2: Like the baseline models Shon et al. (2021), we to train on the finer label set (18 entity tags) and evaluate on the combined version (7 entity tags) -> "we to ...."
ACL_2017_376_review | ACL_2017 | Many points are not explained well in the paper. - General Discussion: This work tackles an important and interesting event extraction problem -- identifying positive and negative interactions between pairs of countries in the world (or rather, between actors affiliated with countries). The primary contribution is an application of supervised, structured neural network models for sentence-level event/relation extraction. While previous work has examined tasks in the overall area, to my knowledge there has not been any publicly availble sentence-level annotated data for the problem -- the authors here make a contribution as well by annotating some data included with the submission; if it is released, it could be useful for future researchers in this area.
The proposed models -- which seem to be an application of various tree-structured recursive neural network models -- demonstrate a nice performance increase compared to a fairly convincing, broad set of baselines (if we are able to trust them; see below). The paper also presents a manual evaluation of the inferred time series from a news corpus which is nice to see.
I'm torn about this paper. The problem is a terrific one and the application of the recursive models seems like a contribution to this problem.
Unfortunately, many aspects of the models, experimentation, and evaluation are not explained very well. The same work, with a more carefully written paper, could be really great.
Some notes: - Baselines need more explanation. For example, the sentiment lexicon is not explained for the SVM. The LSTM classifier is left highly unspecified (L407-409) -- there are multiple different architectures to use an LSTM for classification. How was it trained? Is there a reference for the approach?
Are the authors using off-the-shelf code (in which case, please refer and cite, which would also make it easier for the reader to understand and replicate if necessary)? It would be impossible to replicate based on the two-line explanation here. - (The supplied code does not seem to include the baselines, just the recursive NN models. It's great the authors supplied code for part of the system so I don't want to penalize them for missing it -- but this is relevant since the paper itself has so few details on the baselines that they could not really be replicated based on the explanation in the paper.)
- How were the recursive NN models trained?
- The visualization section is only a minor contribution; there isn't really any innovation or findings about what works or doesn't work here.
Line by line: L97-99: Unclear. Why is this problem difficult? Compared to what? ( also the sentence is somewhat ungrammatical...) L231 - the trees are binarized, but how?
Footnote 2 -- "the tensor version" - needs citation to explain what's being referred to.
L314: How are non-state verbs defined? Does the definition of "event word"s here come from any particular previous work that motivates it? Please refer to something appropriate or related.
Footnote 4: of course the collapsed form doesn't work, because the authors aren't using dependency labels -- the point of the Stanford collapsed form is to remove prepositions from the dependency path and instead incorporate them into the labels.
L414: How are the CAMEO/TABARI categories mapped to positive and negative entries? Is performance sensitive to this mapping? It seems like a hard task (there are hundreds of those CAMEO categories....) Did the authors consider using the Goldstein scaling, which has been used in political science, as well as the cited work by O'Connor et al.? Or is it bad to use for some reason?
L400-401: what is the sentiment lexicon and why is it appropriate for the task?
L439-440: Not clear. "We failed at finding an alpha meeting the requirements for the FT model." What does that mean? What are the requirements? What did the authors do in their attempt to find it?
L447,L470: "precision and recall values are based on NEG and POS classes".
What does this mean? So there's a 3x3 contingency table of gold and predicted (POS, NEU, NEG) classes, but this sentence leaves ambiguous how precision and recall are calculated from this information.
5.1 aggregations: this seems fine though fairly ad-hoc. Is this temporal smoothing function a standard one? There's not much justification for it, especially given something simpler like a fixed window average could have been used.
5.2 visualizations: this seems pretty ad-hoc without much justification for the choices. The graph visualization shown does not seem to illustrate much.
Should also discuss related work in 2d spatial visualization of country-country relationships by Peter Hoff and Michael Ward.
5.3 L638-639: "unions of countries" isn't a well-defined concept. Maybe the authors mean "international organizations"?
L646-648: how were these 5 strong and 5 weak peaks selected? In particular, how were they chosen if there were more than 5 such peaks?
L680-683: This needs more examples or explanation of what it means to judge the polarity of a peak. What does it look like if the algorithm is wrong? How hard was this to assess? What was agreement rate if that can be judged?
L738-740: The authors claim Gerrish and O'Connor et al. have a different "purpose and outputs" than the authors' work. That's not right. Both those works try to do both (1) extract time series or other statistical information about the polarity of the relationships between countries, and *also* (2) extract topical keywords to explain aspects of the relationships. The paper here is only concerned with #1 and less concerned with #2, but certainly the previous work addresses #1. It's fine to not address #2 but this last sentence seems like a pretty odd statement.
That raises the question -- Gerrish and O'Connor both conduct evaluations with an external database of country relations developed in political science ("MID", military interstate disputes). Why don't the authors of this work do this evaluation as well? There are various weaknesses of the MID data, but the evaluation approach needs to be discussed or justified. | - The visualization section is only a minor contribution; there isn't really any innovation or findings about what works or doesn't work here. Line by line: L97-99: Unclear. Why is this problem difficult? Compared to what? ( also the sentence is somewhat ungrammatical...) L231 - the trees are binarized, but how? Footnote 2 -- "the tensor version" - needs citation to explain what's being referred to. |
ACL_2017_108_review | ACL_2017 | The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader interested. As for the approach itself, some important details are missing in my opinion: What is the decision criterion to include an edge or not? In lines 229--233 several different options for the I^k_t nodes are mentioned but it is never clarified which edges should be present!
As for the empirical evaluation, the achieved results are better than some previous approaches but not really by a large margin. I would not really call the slight improvements as "outperformed" as is done in the paper. What is the effect size? Does it really matter to some user that there is some improvement of two percentage points in F_1? What is the actual effect one can observe? How many "important" entities are discovered, that have not been discovered by previous methods? Furthermore, what performance would some simplistic dictionary-based method achieve that could also be used to find overlapping things? And in a similar direction: what would some commercial system like Google's NLP cloud that should also be able to detect and link entities would have achieved on the datasets. Just to put the results also into contrast of existing "commercial" systems.
As for the result discussion, I would have liked to see some more emphasis on actual crossing entities. How is the performance there? This in my opinion is the more interesting subset of overlapping entities than the nested ones. How many more crossing entities are detected than were possible before? Which ones were missed and maybe why? Is the performance improvement due to better nested detection only or also detecting crossing entities? Some general error discussion comparing errors made by the suggested system and previous ones would also strengthen that part.
General Discussion: I like the problems related to named entity recognition and see a point for recognizing crossing entities. However, why is one interested in nested entities? The paper at hand does not really motivate the scenario and also sheds no light on that point in the evaluation. Discussing errors and maybe advantages with some example cases and an emphasis on the results on crossing entities compared to other approaches would possibly have convinced me more.
So, I am only lukewarm about the paper with maybe a slight tendency to rejection. It just seems yet another try without really emphasizing the in my opinion important question of crossing entities.
Minor remarks: - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one?
- e.g.: why in italics?
- time linear in n: when n is sentence length, does it really matter whether it is linear or cubic?
- spurious structures: in the introduction it is not clear, what is meant - regarded as _a_ chunk - NP chunking: noun phrase chunking?
- Since they set: who?
- pervious -> previous - of Lu and Roth~(2015) - the following five types: in sentences with no large numbers, spell out the small ones, please - types of states: what is a state in a (hyper-)graph? later state seems to be used analogous to node?!
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph?
- in Figure 2 it was not obvious at first glance why this is a hypergraph.
colors are not visible in b/w printing. why are some nodes/edges in gray. it is also not obvious how the highlighted edges were selected and why the others are in gray ... - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one?
- denoting ...: sometimes in brackets, sometimes not ... why?
- please place footnotes not directly in front of a punctuation mark but afterwards - footnote 2: due to the missing edge: how determined that this one should be missing?
- on whether the separator defines ...: how determined?
- in _the_ mention hypergraph - last paragraph before 4.1: to represent the entity separator CS: how is the CS-edge chosen algorithmically here?
- comma after Equation 1?
- to find out: sounds a little odd here - we extract entities_._\footnote - we make two: sounds odd; we conduct or something like that?
- nested vs. crossing remark in footnote 3: why is this good? why not favor crossing? examples to clarify?
- the combination of states alone do_es_ not?
- the simple first order assumption: that is what?
- In _the_ previous section - we see that our model: demonstrated? have shown?
- used in this experiments: these - each of these distinct interpretation_s_ - published _on_ their website - The statistics of each dataset _are_ shown - allows us to use to make use: omit "to use" - tried to follow as close ... : tried to use the features suggested in previous works as close as possible?
- Following (Lu and Roth, 2015): please do not use references as nouns: Following Lu and Roth (2015) - using _the_ BILOU scheme - highlighted in bold: what about the effect size?
- significantly better: in what sense? effect size?
- In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one?
- due to _an_ insufficient - Table 5: all F_1 scores seems rather similar to me ... again, "outperform" seems a bit of a stretch here ... - is more confident: why does this increase recall?
- converge _than_ the mention hypergraph - References: some paper titles are lowercased, others not, why? | - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one? |
ICLR_2023_1833 | ICLR_2023 | . Strengths first:
The paper is one of the first to give an empirical study of quantization of MoE networks. It would be a good manual/starting point for practitioners in the field. Weaknesses:
Thoroughness: Despite having good results and having investigated several quantization options, one would still have "what if?"-style questions. Many additional experiments and empirical evaluations are needed to make it a stronger contribution, and to be certain of the presented recommendations. For instance, here are additional questions: 1) if inference happens in fp16, why stick with uniform or log-uniform quantization schemes? how about non-uniform quantization akin to k-means? 2) why not consider finer grouping for quantization instead of per-tensor and per-channel? 3) why are PTQ calibration techniques not discussed? do all calibrations work the same? 4) what is the tradeoff between # experts vs bit-width of compression? are there certain recommendations? and many other questions of this format
The paper would benefit from another proof-reading pass: there are many places where it is hard to understand what exactly was meant. | 3) why are PTQ calibration techniques not discussed? do all calibrations work the same? |
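To make the first "what if" question above concrete, here is a small hypothetical comparison (NumPy; toy weights, not the reviewed models) of per-tensor uniform quantization against a non-uniform k-means codebook at the same bit-width; since inference happens in fp16, the codebook only changes the values the indices dequantize to.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=4096)                 # toy expert weight tensor
levels = 16                                          # 4-bit quantization

# Per-tensor uniform quantization over [min, max].
step = (w.max() - w.min()) / (levels - 1)
w_uniform = np.round((w - w.min()) / step) * step + w.min()

# Non-uniform codebook via a few Lloyd (k-means) iterations.
centers = np.quantile(w, np.linspace(0, 1, levels))  # quantile initialisation
for _ in range(20):
    assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
    for c in range(levels):
        if np.any(assign == c):
            centers[c] = w[assign == c].mean()
assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
w_kmeans = centers[assign]

print("uniform MSE:", np.mean((w - w_uniform) ** 2))
print("k-means MSE:", np.mean((w - w_kmeans) ** 2))  # usually lower on bell-shaped weights
```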
NIPS_2017_401 | NIPS_2017 | Weakness:
1. There are no collaborative games in experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings.
2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and the centralized RL where agents share the weights. For instance, Foerster et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016.
3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods.
4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players.
Initial Evaluation:
This paper offers an analysis of the effectiveness of the policy learning by existing approaches with little extension in two player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility:
Appears to be reproducible. | 1. There are no collaborative games in experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings. |
NIPS_2022_528 | NIPS_2022 | weakness 1 The Algorithm should be presented and described in detail. 2 The background of Sharpness-Aware Minimization (SAM) shoud be described in detail.
1 The Algorithm should be presented and described in detail, which is helpful for understanding the proposed method. 2 The background of Sharpness-Aware Minimization (SAM) shoud be described in detail. | 1 The Algorithm should be presented and described in detail, which is helpful for understanding the proposed method. |
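For readers who want the SAM background the reviewer asks for in point 2: the standard update of Foret et al. first ascends to an adversarial weight perturbation of norm rho, then descends using the gradient taken at the perturbed point. A dependency-free toy sketch (generic SAM, not the submission's algorithm):

```python
import numpy as np

def loss(w):                      # toy objective standing in for the training loss
    return 0.5 * np.sum(w ** 2) + np.sum(np.sin(5 * w))

def grad(w, eps=1e-5):            # numerical gradient to keep the sketch self-contained
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w, lr, rho = np.array([1.0, -2.0]), 0.05, 0.05
for _ in range(100):
    g = grad(w)
    eps_hat = rho * g / (np.linalg.norm(g) + 1e-12)   # worst-case perturbation (first-order)
    w = w - lr * grad(w + eps_hat)                    # descend with the perturbed gradient
print("final w:", w, "loss:", loss(w))
```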
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting.Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model which is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e. provide a better than state of the art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if indeed modeling accuracy is the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1. The authors mention the GLM model of Pillow et al. which is pretty much state of the art, but a central point in that paper was that coupling filters between neurons are very important for the accuracy of the model. These coupling filters are omitted here which makes the comparison slightly unfair. I would strongly suggest comparing to a GLM with coupling filters. Furthermore, I suggest presenting data (like correlation coefficients) from previous studies to make sure the comparison is fair and in line with previous literature. 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by pillow et al. did not crop the image but used L1 regularization for the filters and a low rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: 1. Please define the dashed lines in fig. 2A-B and 4B. 2. Why is the training correlation increasing with the amount of training data for the cutout LN model (fig. 4A)? 3. I think figure 6C is a bit awkward, it implies negative rates, which is not the case, I would suggest using a second y-axis or another visualization which is more physically accurate. 4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirnakis et al. Nature 1997. | 1. Please define the dashed lines in fig. 2A-B and 4B. |
z69tlSxAwf | EMNLP_2023 | 1. It remains unclear how catastrophic forgetting exerts a strong influence on novel slot detection.
2. The paper does not study large language models, which may be the current SOTA models for novel slot detection and for effective use of dialogue context.
3. The method is a little complex and hard to follow, e.g., how does the method achieve the final effect shown in the figure?
The proposed method may not be easy to implement in real-world scenarios.
4. Some experiments are missing, e.g., contrastive learning and adversarial learning.
5. The compared baselines are few, while the proposed method is claimed to be a SOTA model. | 4. Some experiments are missing, e.g., contrastive learning and adversarial learning. |
NIPS_2018_605 | NIPS_2018 | in the related work and the experiments. If some of these concerns would be addressed in the rebuttal, I would be willing to upgrade my recommended score. Strengths: - The results seem to be correct. - In contrast to Huggins et al. (2016) and Tolochinsky & Feldman (2018), the coreset guarantee applies to the standard loss function of logistic regression and not to variations. - The (theoretical) algorithm (without the sketching algorithm for the QR decomposition) seems simple and practical. If space permits, the authors might consider explicitly specifying the algorithm in pseudo-code (so that practitioners do not have to extract it from the Theorems). - The authors include in the Supplementary Materials an example where uniform sampling fails even if the complexity parameter mu is bounded. - The authors show that the proposed approach obtains a better trade-off between error and absolute running time than uniform sampling and the approach by Huggins et al. (2016). Weaknesses: - Novelty: The sensitivity bound in this paper seems very similar to the one presented in [1] which is not cited in the manuscript. The paper [1] also uses a mix between sampling according to the data point weights and the l2-sampling with regards to the mean of the data to bound the sensitivity and then do importance sampling. Clearly, this paper treats a different problem (logistic regression vs k-means clustering) and has differences. However, this submission would be strengthened if the proposed approach would be compared to the one in [1]. In particular, I wonder if the idea of both additive and multiplicative errors in [1] could be applied in this paper (instead of restricting mu-complexity) to arrive at a coreset construction that does not require any assumptions on the data data set. [1] Scalable k-Means Clustering via Lightweight Coresets Olivier Bachem, Mario Lucic and Andreas Krause To Appear In International Conference on Knowledge Discovery and Data Mining (KDD), 2018. - Practical significance: The paper only contains a limited set of experiments, i.e., few data sets and no comparison to Tolochinsky & Feldman (2018). Furthermore, the paper does not compare against any non-coreset based approaches, e.g., SGD, SDCA, SAGA, and friends. It is not clear whether the proposed approach is useful in practice compared to these approaches. - Figure 1 would be much stronger if there were error bars and/or if there were more random trials that would (potentially) get rid of some of the (most likely) random fluctuations in the results. | - Figure 1 would be much stronger if there were error bars and/or if there were more random trials that would (potentially) get rid of some of the (most likely) random fluctuations in the results. |
ICLR_2021_2802 | ICLR_2021 | of this paper include the following aspects: 1. This paper is not well written and some parts are hard to follow. It lacks necessary logical transition and important figures. For example, it lacks explanations to support the connection between the proposed training objective and the Cross Margin Discrepancy. Also, it should at least contain one figure to explain the overall architecture or training pipeline. 2. The authors claim that there is still no research focusing on the joint error for UDA. But this problem of arbitrarily increased joint error has already been studied in previous works like “Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment”, in ICML2019. The authors should discuss on that work and directly illustrate the relationship between that work and the proposed one, and why the proposed method is better. 3. Although the joint error is indeed included in the proposed upper bound, in practice the authors have to use Source-driven Hypothesis Space and Target-driven Hypothesis Space to obtain approximation of f_{S} and f_{T}. To me, in practice the use of three classifiers h, f_{1}, f_{2} is just like an improvement over MCD. Hence, I doubt whether the proposed method can still simultaneously minimize the domain discrepancy and the joint error. For example, as shown in the Digit experiments, the performance is highly sensitive to the choice of \gamma in SHS, and sometimes the optimal \gamma value is conflicting for different domains in the same dataset, which is strange since according to the paper’s theorem, smaller \gamma only means more relaxed constraint on hypothesis space. Also, as shown in the VisDA experiments, the optimal value of \eta is close to 1, which means classification error from the approximate target domain is basically useless. 4. The benchmark results are inferior to the state-of-the-art methods. For instance, the Contrastive Adaptation Network achieves an average of 87.2 on VisDA2017, which is much higher than 79.7 achieved by the proposed method. And the same goes with Digit, Office31, and Office-Home dataset. | 2. The authors claim that there is still no research focusing on the joint error for UDA. But this problem of arbitrarily increased joint error has already been studied in previous works like “Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment”, in ICML2019. The authors should discuss on that work and directly illustrate the relationship between that work and the proposed one, and why the proposed method is better. |
NIPS_2020_103 | NIPS_2020 | About the authors' contribution in introducing ASVs: - It seems like ASVs are giving a new name to a concept that's been studied in game theory for decades: quasivalues (or equivalently, random-order values). Quasivalues are based on precisely the same idea of relaxing the symmetry axiom, and they use very similar notation (see e.g., [1]) to denote the exact same thing as ASVs. - Bringing these into model explanation is a good idea, but section 3.2 is a bit unclear about the origins. The authors do cite relevant papers [1, 2], but it should probably be more clear that their ASV idea exists under a different name. - For the claim that ASVs/quasivalues uniquely satisfy Axioms 1-3, I would think that a proof should either be provided or cited, because this result is non-trivial. About the global Shapley values: - These global Shapley values seem to have been introduced by another paper, but the authors' explanation that they quantify contributions to the model's accuracy has a small problem. The probability of the correct class f_y(x) does not quite represent the model's accuracy, at least in the (conventional) sense of 0-1 accuracy. I found the authors' explanation a bit confusing, so they might distinguish more clearly between their view of accuracy and 0-1 accuracy. - The section about applications of these global Shapley values to feature selection raises some questions. ASVs provide a very inefficient tool for checking whether a set of features U has strong predictive power. The authors' explanation suggests that users should 1) calculate global ASVs using the ordering given by Eq. 15 (which is specific to a given U/V partitioning), and then 2) take the sum of the ASVs in U (Eq. 16) to estimate U's predictive power. But because ASVs are not cheap to calculate, obtaining the global ASVs (i.e., average ASVs across *every example in the dataset*) would quite possibly be more computationally costly than training a *single* model on U, which gives you exactly the performance with U. Unless I'm missing something, this seems like a very inefficient approach to feature selection. A more justified approach for applying global Shapley values to feature selection, which does not suffer from this problem, is given by [3]. - Performing feature selection on structured data, like EEG time series, is an odd choice of task. It permits a convenient telescoping of ASVs (Eq. 19), but as a feature selection application this does not make much sense. What purpose could there be for excluding the last n% of an EEG sample, given that the seizure could occur at any point in the time window? - If the authors really want to show that ASVs are useful for feature selection, they might have included an experiment with non-structured data, where feature selection makes more sense. They might also consider comparing to more baselines methods. About the other experiments: - The application of ASVs to the admissions problem does something odd: it assumes that department choice precedes gender, as if department choice is a causal ancestor of gender. That seems inconsistent, because in the previous example (income prediction), attributes like gender were assumed to be causal of all other features. So the causal order is in a sense reversed in this second experiment. - The authors provide some explanation for this choice, saying this use of the ASV framework is intended to distinguish between "resolving variables" (R) and "sensitive attributes" (S). 
But it's a bit odd to explain this reversal of the causal order in midst of an experiment, since the point of the paper is that ASVs integrate causal knowledge. - Would defining the causal order in a way that is consistent with the previous experiment change the results? - Figure 2 shows that the majority of credit is given by ASVs to the beginning of EEG samples. But calculating global Shapley values for a structured data type is an odd choice, because evidence for a seizure could occur at any point in the sample. These results (Figure 2b in particular) do not provide strong evidence, in my view, that ASVs are the right choice for time series data. It would have been more useful to examine individual samples and show how SV/ASV attributions relate to the regions where individual seizures occur. [1] Monderer and Samet, "Variations on the Shapley value" [2] Weber, "Probabilistic values for games" [3] Covert, Lundberg and Lee, "Understanding global feature contributions through additive importance measures" | - For the claim that ASVs/quasivalues uniquely satisfy Axioms 1-3, I would think that a proof should either be provided or cited, because this result is non-trivial. About the global Shapley values: |
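A small worked sketch of the ordering-based view the review refers to (toy characteristic function invented for illustration, not the paper's model): the Shapley value averages marginal contributions over all orderings uniformly, while a quasivalue/ASV reweights or restricts the admissible orderings, e.g. to respect an assumed causal order between "gender" and "dept".

```python
from itertools import permutations

players = ["gender", "dept", "score"]
v = {frozenset(): 0, frozenset({"gender"}): 1, frozenset({"dept"}): 4,
     frozenset({"score"}): 3, frozenset({"gender", "dept"}): 6,
     frozenset({"gender", "score"}): 4, frozenset({"dept", "score"}): 7,
     frozenset(players): 9}                      # toy value function v(S)

def order_value(admissible_orders):
    """Average marginal contribution of each player over a set of orderings."""
    phi = {p: 0.0 for p in players}
    for order in admissible_orders:
        seen = set()
        for p in order:
            phi[p] += v[frozenset(seen | {p})] - v[frozenset(seen)]
            seen.add(p)
    return {p: phi[p] / len(admissible_orders) for p in phi}

all_orders = list(permutations(players))
shapley = order_value(all_orders)                # uniform over all 3! orderings
asv = order_value([o for o in all_orders         # only orderings where gender precedes dept
                   if o.index("gender") < o.index("dept")])
print("Shapley:", shapley)
print("ASV-style:", asv)
```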
NIPS_2019_168 | NIPS_2019 | of the submission. * originality: This is a highly specialized contribution building up novel results on two main fronts: The derivation of the lower bound on the competitive ratio of any online algorithm and the introduction of two variants of an existing algorithm so as to meet this lower bound. Most of the proofs and techniques are natural and not surprising. In my view the main contribution is the introduction of the regularized version which brings a different, and arguably more modern interpretation, about the conditions under which these online algorithms perform well in these adversarial settings. * quality: The technical content of the paper is sound and rigorous * clarity: The paper is in general very well-written, and should be easy to follow for expert readers. * significance: As mentioned above this is a very specialized paper likely to interest some experts in the online convex optimization communities. Although narrow in scope, it contains interesting theoretical results advancing the state of the art in dealing with these specific problems. * minor details/comments: - p.1, line 6-7: I would rewrite the sentence to simply express that the lower bound is $\Omega(m^{-1/2})$ - p.3, line 141: cost an algorithm => cost of an algorithm - p.4, Algorithm 1, step 3: mention somewhere that this is the projection operator (not every reader will be familiar with this notation) - p.5, Theorem 2: remind the reader that the $\gamma$ in the statement is the parameter of OBD as defined in Algorithm 1 - p.8, line 314: why surprisingly? | * quality: The technical content of the paper is sound and rigorous * clarity: The paper is in general very well-written, and should be easy to follow for expert readers. |
zkzf0VkiNv | ICLR_2024 | 1. Figure 2 shows that, without employing data augmentation and similarity-based regularization, the performance of CR-OSRS is comparable to RS-GM.
2. Could acceleration be achieved by incorporating entropy regularization into the optimization process?
3. It would be beneficial if the authors could provide an analysis of the computational complexity of this method.
4. The author wants to express too much content in the article, resulting in insufficient details and incomplete content in the main text.
5. The experimental part needs to be reorganized and further improved.
Detailed comments
1) It is recommended to swap the positions of Sections 4.3 and 4.4. According to the diagram, 4.3 is the training section, and 4.4 aims to measure certified space. Both 4.1 and 4.2 belong to the robustness and testing sections. Therefore, putting these parts together feels more reasonable.
2) The author should emphasize "The article is a general and robust method that can be applied to various GM methods, and we only use NGMv2 as an example." at the beginning of the article, rather than just showing in the title of Method Figure 1. This can better highlight the characteristics and contribution of the method.
3) The experimental part needs to be reorganized and further improved. The experimental section has a lot of content, but the experimental content listed in the main text does not highlight the superiority of the method well, so it needs to be reorganized. Based on the characteristics of the article, the experimental suggestions in the main text should include the following: 1. Robustness comparison and accuracy analysis with other empirical robustness algorithms for the same type of perturbations, rather than just focusing on the RS method, to clarify the superiority of the method. (You should supplement this part.) 2. Suggest using ablation experiments as the second part to demonstrate the effectiveness of the method. 3. Parameter analysis, elucidating the method's dependence on parameters. 4. Consider its applications on six basic algorithms as an extension part. Afterwards, based on the importance, select the important ones to place in the main text, and show the rest in the appendix.
4) In P16, the proof of claim 2, it should be P(I \in B) not P(I \in A).
5) In Table 2 of appendix, the Summary of main existing literature in learning GM can list the related types of perturbations.
6) In Formula 8, please clarify the meaning of lower p (lower bound of unilateral confidence), and the reason and meaning of setting as 1/2. | 3) The experimental part needs to be reorganized and further improved. The experimental section has a lot of content, but the experimental content listed in the main text does not highlight the superiority of the method well, so it needs to be reorganized. Based on the characteristics of the article, the experimental suggestions in the main text should include the following: |
NIPS_2018_865 | NIPS_2018 | weakness of this paper are listed: 1) The proposed method is very similar to Squeeze-and-Excitation Networks [1], but there is no comparison to the related work quantitatively. 2) There is only the results on image classification task. However, one of success for deep learning is that it allows people leverage pretrained representation. To show the effectiveness of this approach that learns better representation, more tasks are needed, such as semantic segmentation. Especially, the key idea of this method is on the context propagation, and context information plays an important role in semantic segmentation, and thus it is important to know. 3) GS module is used to propagate the context information over different spatial locations. Is the effective receptive field improved, which can be computed from [2]? It is interesting to know how the effective receptive field changed after applying GS module. 4) The analysis from line 128 to 149 is not convincing enough. From the histogram as shown in Fig 3, the GS-P-50 model has smaller class selectivity score, which means GS-P-50 shares more features and ResNet-50 learns more class specific features. And authors hypothesize that additional context may allow the network to reduce its dependency. What is the reason such an observation can indicate GS-P-50 learns better representation? Reference: [1] J. Hu, L. Shen and G. Sun, Squeeze-and-Excitation Networks, CVPR, 2018. [2] W. Luo et al., Understanding the Effective Receptive Field in Deep Convolutional Neural Networks, NIPS, 2016. | 4) The analysis from line 128 to 149 is not convincing enough. From the histogram as shown in Fig 3, the GS-P-50 model has smaller class selectivity score, which means GS-P-50 shares more features and ResNet-50 learns more class specific features. And authors hypothesize that additional context may allow the network to reduce its dependency. What is the reason such an observation can indicate GS-P-50 learns better representation? Reference: [1] J. Hu, L. Shen and G. Sun, Squeeze-and-Excitation Networks, CVPR, 2018. [2] W. Luo et al., Understanding the Effective Receptive Field in Deep Convolutional Neural Networks, NIPS, 2016. |
ACL_2017_333_review | ACL_2017 | There are a few details on the implementation and on the systems to which the authors compared their work that need to be better explained. - General Discussion: - Major review: - I wonder if the summaries obtained using the proposed methods are indeed abstractive. I understand that the target vocabulary is built out of the words which appear in the summaries in the training data. But given the example shown in Figure 4, I have the impression that the summaries are rather extractive.
The authors should choose a better example for Figure 4 and give some statistics on the number of words in the output sentences which were not present in the input sentences for all test sets.
- page 2, lines 266-272: I understand the mathematical difference between the vector h_i and s, but I still have the feeling that there is a great overlap between them. Both "represent the meaning". Are both indeed necessary? Did you try using only one of them?
- Which neural network library did the authors use for implementing the system?
There are no details on the implementation.
- page 5, section 4.4: Which training data was used for each of the systems that the authors compare to? Did you train any of them yourselves?
- Minor review: - page 1, line 44: Although the difference between abstractive and extractive summarization is described in section 2, this could be moved to the introduction section. At this point, some users might not be familiar with this concept.
- page 1, lines 93-96: please provide a reference for this passage: "This approach achieves huge success in tasks like neural machine translation, where alignment between all parts of the input and output are required."
- page 2, section 1, last paragraph: The contribution of the work is clear but I think the authors should emphasize that such a selective encoding model has never been proposed before (is this true?). Further, the related work section should be moved to before the methods section.
- Figure 1 vs. Table 1: the authors show two examples for abstractive summarization but I think that just one of them is enough. Further, one is called a figure while the other a table.
- Section 3.2, lines 230-234 and 234-235: please provide references for the following two passages: "In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for encoding input sentence information and decoding the sentence representation to generate an output sentence"; "Some previous works apply this framework to summarization generation tasks."
- Figure 2: What is "MLP"? It seems not to be described in the paper.
- page 3, lines 289-290: the sigmoid function and the element-wise multiplication are not defined for the formulas in section 3.1.
- page 4, first column: many elements of the formulas are not defined: b (equation 11), W (equation 12, 15, 17) and U (equation 12, 15), V (equation 15).
- page 4, line 326: the readout state r_t is not depicted in Figure 2 (workflow).
- Table 2: what does "#(ref)" mean?
- Section 4.3, model parameters and training. Explain how you chose the values for the many parameters: word embedding size, GRU hidden states, alpha, beta 1 and 2, epsilon, beam size.
- Page 5, line 450: remove "the" word in this line? " SGD as our optimizing algorithms" instead of "SGD as our the optimizing algorithms."
- Page 5, beam search: please include a reference for beam search.
- Figure 4: Is there a typo in the true sentence? " council of europe again slams french prison conditions" (again or against?)
- typo "supper script" -> "superscript" (4 times) | - Which neural network library did the authors use for implementing the system? There is no details on the implementation. |
NIPS_2021_442 | NIPS_2021 | of the paper:
Strengths: 1) To the best of my knowledge, the problem investigated in the paper is original in the sense that top-m identification has not been studied in the misspecified setting. 2) The paper provides some interesting results:
i) (Section 3.1) Knowing the level of misspecification ε is a key ingredient, as not knowing the same would yield sample complexity bounds which are no better than the bound obtainable from unstructured (ε = ∞) stochastic bandits. ii) A single no-regret learner is used for the sampling strategy instead of assigning a learner for each of the (N choose k) answers, thus exponentially reducing the number of online learners. iii) The proposed decision rules are shown to match the prescribed lower bound asymptotically. 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules.
Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: 1) IMO, a better explanation of investigating top-m identification in this setting is required. Specifically, in this setting, we could readily convert the problem to the general top-m identification by appending the constant 1 to the features (converting them into (d + 1)-dimensional features) and trying to estimate the misspecifications η in the higher dimensional space. Why is that disadvantageous?
Can the authors explain how the lower bound in Theorem 1 explicitly captures the effect of the upper bound on misspecification ε? The relationship could be shown, for instance, by providing an example of a particular bandit environment (say, Gaussian bandits) ala [Kaufmann2016].
Sample complexity: Theorem 2 states the sample complexity in a very abstract way; it provides an equation which needs to be solved in order to get an explicit expression of the sample complexity. In order to make a comparison, the authors then mention that the unstructured confidence interval β_{t,δ}^{uns} is approximately log(1/δ) in the limit of δ → 0, which is then used to argue that the sample complexity of MISLID is asymptotically optimal. However, β_{t,δ}^{uns} also depends on t. In fact, my understanding is that as δ goes to 0, the stopping time t goes to infinity, where it is not clear as to what value the overall expression β_{t,δ}^{uns} converges. Overall, I feel that the authors need to explicate the sample complexity a bit more. My suggestions are: can the authors find a solution to equation (5) (or at least an upper bound on the solution for different regimes of ε)? Using such an upper bound, even if the authors could give an explicit expression of the (asymptotic) sample complexity and show how it compares to the lower bound, it would be a great contribution.
Looking at Figure 1A (the second figure from the left, for the case of ε = 2), it looks like LinGapE outperforms MISLID in terms of average sample complexity. Please correct me if I’m missing something, but if what I understand is correct, then why use MISLID and not LinGapE?
Probable typo: Line 211: Should it be θ instead of θ_t for the self-normalized concentration?
The authors have explained the limitations of the investigation in Section 6. | 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules. Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: |
NIPS_2020_1821 | NIPS_2020 | - To my understanding, the experimental section only compares results generated for this paper. This is good because it keeps apples-to-apples comparisons, however it is suspicious since the task is not novel. Some comparison with results from other works (or a justification of why this is not possible/suitable) would be welcome. For example [2, table 3] seems to have directly comparable results, yet these are nowhere mentioned in this paper. - Albeit the observed effects are strong, it remains unclear “why does the method work?” in particular regarding the L_pixel component. Providing stronger arguments or intuitions of why these particular losses are “bound to help” would be welcome. | - Albeit the observed effects are strong, it remains unclear “why does the method work?” in particular regarding the L_pixel component. Providing stronger arguments or intuitions of why these particular losses are “bound to help” would be welcome. |
NIPS_2018_652 | NIPS_2018 | weakness of the manuscript. Should the manuscript be rejected for this reason (which is not unlikely), I would like to encourage the authors to get the maths correct and resubmit to a leading computational neuroscience journal (e.g., PLoS CompBiol or PNAS). ORIGINALITY: 8 ============== The investigated question is around already for several years. The paper is original in that someone finally worked out the details of the maths of both representations. This is a very important contribution which requires a fine understanding of both theories. A big "Thanks" to the authors for this! Yet, I think that two relevant papers are missing: - Savin, C., Deneve, S. Spatio-temporal representations of uncertainty in spiking neural networks, Advances in Neural Information Processing Systems 27, 2014. This paper relaxes the assumption of a one-to-one correspondence between neurons and latent variables for sampling. - Bill J, Buesing L, Habenschuss S, Nessler B, Maass W, et al. (2015) Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition. PLOS ONE In the discussion section, a similar relation between sampling and PPCs is outlined, for the case of a generative mixture model instead of the linear Gaussian model considered here. SIGNIFICANCE: 10 ================ Sampling vs PPC representations is a hotly debated topic for years, staging the main track talks of the most important computational neuroscience conferences. Reconciling the two views is highly relevant for anyone who studies neural representations of uncertainty. CONCLUSION ========== This is one of the papers, I feel thankful that it has finally been written (and researched, of course). Unfortunately, the paper suffers from an unjustifiable amount of typos and errors in the maths. If accepted to NIPS, I would suggest a talk rather than a poster. Overall, I tend to recommend acceptance, but I could perfectly understand if the quality of presentation does not merit acceptance at NIPS. Due to the very mixed impression, I assigned a separate score to each of the four reviewing criteria. The geometric mean leads to the suggested overall score 6/10. SPECIFIC COMMENTS ================= - L163ff: Strictly speaking the statement is correct: A complete basis A permits to perfectly model the noise in I \sim N(T(s), noise). But I think this is not what the authors mean in the following: The example addresses how well Ax approximates the template T(s) without a noise sample. If so, the discussion should be corrected. Errors in the derivation / equations ------------------------------------ Since I walked through the derivation anyway, I also include some suggestions. - Eq (1): The base measure is missing: ... + f(s); alternatively write p(s|r) \propto exp(...) to match eq (6) of the main result. - L66: Missing def. of PF_i; required e.g. in line 137 - L90: p(s|x) not r - L90 and general: The entire identification of r <--> multiple samples x^[1:t] is missing, but crucial for the paper. - Eq after L92: This should be eq (3). These are the "forward physics" from stimulus generation to spikes in the retina. The name p_exp(I|s) is used later in L109ff, so it should be assigned here. - Eq (3): Copy of the previous eq with wrong subscript. Could be deleted, right? 
- Eq after L106: Missing def of \bar x = 1/t \sum x^i (important: the mean is not weighted by p(x^i) ) - Eq after L107: p_exp(x|s) is not defined and also not needed: delete equation w/o replacement - L108 and L109: The equality assumption is undefined and not needed. Both lines can be deleted. The next eq is valid immediately (and would deserve a number). - L117 and eq 6: At some point r (from eq 1) and x^[1:t] must be identified with another. Here could be a good point. Section 4 could then refer to this in the discussion of binary x^i and longer spike counts r. Minor Points ------------ - Figure 1 is not referred to in the text. - L92: clarify "periphery" (e.g. retinal ganglion cells). This is an essential step, bringing the "pixel image" into the brain, and would deserve an extra sentence. - L113: The Gaussian converges to a dirac-delta. Then the integration leads to the desired statement. Typos ----- - Figure 1, binary samples, y_ticks: x_n (not N) - Caption Fig 1: e.g.,~ - L48: in _a_ pro... - L66: PF_n (not 2) - L71: ~and~ - L77: x^(i) not t [general suggestion: use different running indices for time and neurons; currently both are i] - Almost all the identity matrices render as "[]" (at least in my pdf viewer). Only in the eq after L92, it is displayed correctly. - Footnote 1: section_,_ valid for infinitely _many_ samples_,_ ... - L106: Dr - Caption Fig 2: asymptotic - L121: section 3 (?, not 4) - L126: neuron~s~ - Figure 3C: ylabels don't match notation in 3B (x and x'), but are illustrations of these points - L165: ~for~ - L200: invariant - L220: _is_ - L229: _easy_ or _simple_ - L230: correspond _to_ - L232: ~a~ - L242: ~the~ | - L66: PF_n (not 2) - L71: ~and~ - L77: x^(i) not t [general suggestion: use different running indices for time and neurons; currently both are i] - Almost all the identity matrices render as "[]" (at least in my pdf viewer). Only in the eq after L92, it is displayed correctly. |
ICLR_2023_2298 | ICLR_2023 | Weakness:
-- The work is heavily dependent on FedBN. The main difference is that the author of this work designed an adaptive interpolation parameter estimation method. This jeopardizes the novelty and technical contribution of the whole work. -- I am a little conservative about Eq. 4. If Eq. 4 stands, does that mean the u^l in Eq. 3 tends to be 1? -- The improvement of the designed solutions in Table 5 is not significant on some datasets. For example, on OfficeHome, the CSAC achieves 64.35, and the proposed solution achieves 64.71, which is a marginal improvement. | 4. If Eq. 4 stands, does that mean the u^l in Eq. 3 tends to be 1? -- The improvement of the designed solutions in Table 5 is not significant on some datasets. For example, on OfficeHome, the CSAC achieves 64.35, and the proposed solution achieves 64.71, which is a marginal improvement. |
NIPS_2022_2373 | NIPS_2022 | weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. 4. This work theoretically proves that CATER is resilient to statistical reverse-engineering, which is also verified by their experiments. In addition, they show that CATER can defend against ONION, an effective approach for backdoor removal.
Weakness: 1. The authors assume that all training data are from the API response, but what if the adversary only uses part of the API response? 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5.
The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future. | 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. |
ARR_2022_219_review | ARR_2022 | - The paper hypothesizes that SimCSE suffers from the cue of sentence length and syntax. However, the experiments only target sentence length but not syntax. - The writing of this paper can benefit from some work (see more below). Specifically, I find Section 3 difficult to understand as someone who does not directly work on this task. A good amount of terminology is introduced without explanation. I suggest a serious effort at rewriting this section so that it is easier to understand for general NLP researchers.
- Though a universal issue in related papers and should not be blamed on the authors, why only consider BERT-base? It is known that other models such as BERT-large, RoBERTa, DeBERTa, etc. could produce better embeddings, and that the observations in these works do not necessarily hold in those larger and better models. - The introduction of the SRL-based discrete augmentation approach (line 434 onwards) is unclear and cannot be possibly understood by readers without any experience in SRL. I suggest at least discussing the following: - Intuitively why relying on semantic roles is better than work like CLEAR - What SRL model you use - What the sequence "[ARG0, PRED, ARGM − NEG, ARG1]" mean, and what these PropBank labels mean - What is your reason of using this sequence as opposed to alternatives
- (Line 3-6, and similarly in the Intro) The claim is a finding of the paper, so best prefix the sentence with "We find that". Or, if it has been discussed elsewhere, provide citations. - (7): semantics-aware or semantically-aware - (9-10): explore -> exploit - (42): works on -> works by - Figure 1 caption: embeddings o the : typo - Figure 1 caption: "In a realistic scenario, negative examples have the same length and structure, while positive examples act in the opposite way." I don't think this is true. Positive or negative examples should have similar distribution of length and structure, so that they don't become a cue during inference. - (99): the first mention of "momentum-encoder" in the paper body should immediately come with citation or explanation. - (136): abbreviations like "MoCo" should not appear in the section header, since a reader might not know what it means. - (153): what is a "key"?
- (180): same -> the same - (186-198): I feel that this is a better paragraph describing existing issue and motivation than (63-76). I would suggest moving it to the Intro, and just briefly re-mention the issue here in Related Work. - (245-246): could be -> should be - (248): a -> another - (252-55): Isn't it obvious that "positive examples in SimCSE have the same length", since SimCSE enocdes the same sentence differently as positive examples? How does this need "Through careful observation"?
- (288): "textual similarity" should be "sentence length and structure"? Because the models are predicting textual similarity, after all.
- (300-301): I don't understand "unchangeable syntax" - (310-314): I don't understand "query", "key and value". What do they mean here? Same for "core", "pseudo syntactic". - (376): It might worth mentioning SimCSE is the state-of-the-art method mention in the Abstract. - (392): Remove "In this subsection" | - (136): abbreviations like "MoCo" should not appear in the section header, since a reader might not know what it means. |
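To illustrate the length-cue hypothesis in the first weakness: SimCSE forms a positive pair by encoding the same sentence twice (dropout noise only), so positives always have zero length difference, while in-batch negatives are different sentences. A toy check (illustrative only):

```python
sentences = ["the cat sat on the mat",
             "a quick brown fox jumps over the lazy dog",
             "stocks fell sharply on tuesday",
             "she adopted two kittens last spring"]

pos_pairs = [(s, s) for s in sentences]                       # same sentence, two dropout views
neg_pairs = [(a, b) for a in sentences for b in sentences if a != b]

length_diff = lambda pair: abs(len(pair[0].split()) - len(pair[1].split()))
print("positive-pair length differences:", [length_diff(p) for p in pos_pairs])  # all zero
print("negative-pair length differences:", [length_diff(p) for p in neg_pairs])  # mostly non-zero
```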