title | url | authors | detail_url | tags | Bibtex | Paper | Supplemental | abstract | Paper_Errata | Supplemental_Errata |
---|---|---|---|---|---|---|---|---|---|---|
DivBO: Diversity-aware CASH for Ensemble Learning | https://papers.nips.cc/paper_files/paper/2022/hash/13b2f88be223cd2b4d6be67b56e02fa8-Abstract-Conference.html | Yu Shen, Yupeng Lu, Yang Li, Yaofeng Tu, Wentao Zhang, Bin CUI | https://papers.nips.cc/paper_files/paper/2022/hash/13b2f88be223cd2b4d6be67b56e02fa8-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18216-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/13b2f88be223cd2b4d6be67b56e02fa8-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/13b2f88be223cd2b4d6be67b56e02fa8-Supplemental-Conference.zip | The Combined Algorithm Selection and Hyperparameters optimization (CASH) problem is one of the fundamental problems in Automated Machine Learning (AutoML). Motivated by the success of ensemble learning, recent AutoML systems build post-hoc ensembles to output the final predictions instead of using the best single learner. However, while most CASH methods focus on searching for a single learner with the best performance, they neglect the diversity among base learners (i.e., they may suggest similar configurations to previously evaluated ones), which is also a crucial consideration when building an ensemble. To tackle this issue and further enhance the ensemble performance, we propose DivBO, a diversity-aware framework to inject explicit search of diversity into the CASH problems. In the framework, we propose to use a diversity surrogate to predict the pair-wise diversity of two unseen configurations. Furthermore, we introduce a temporary pool and a weighted acquisition function to guide the search of both performance and diversity based on Bayesian optimization. Empirical results on 15 public datasets show that DivBO achieves the best average ranks (1.82 and 1.73) on both validation and test errors among 10 compared methods, including post-hoc designs in recent AutoML systems and state-of-the-art baselines for ensemble learning on CASH problems. | null | null |
Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum | https://papers.nips.cc/paper_files/paper/2022/hash/13b45b44e26c353c64cba9529bf4724f-Abstract-Conference.html | Nian Liu, Xiao Wang, Deyu Bo, Chuan Shi, Jian Pei | https://papers.nips.cc/paper_files/paper/2022/hash/13b45b44e26c353c64cba9529bf4724f-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19434-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/13b45b44e26c353c64cba9529bf4724f-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/13b45b44e26c353c64cba9529bf4724f-Supplemental-Conference.pdf | Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention. Despite the proliferation of various graph augmentation strategies, some fundamental questions remain unclear: what information is essentially learned by GCL? Are there general augmentation rules behind different augmentations? If so, what are they and what insights can they bring? In this paper, we answer these questions by establishing the connection between GCL and the graph spectrum. Through an experimental investigation in the spectral domain, we first find the General grAph augMEntation (GAME) rule for GCL, i.e., the difference of the high-frequency parts between two augmented graphs should be larger than that of the low-frequency parts. This rule reveals a fundamental principle for revisiting current graph augmentations and designing new, effective ones. We then theoretically prove, via a contrastive invariance theorem, that GCL is able to learn invariance information; together with our GAME rule, this uncovers for the first time that the representations learned by GCL essentially encode low-frequency information, which explains why GCL works. Guided by this rule, we propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in. We combine it with different existing GCL models, and extensive experiments demonstrate that it can further improve the performance of a wide variety of GCL methods. | null | null |
Functional Indirection Neural Estimator for Better Out-of-distribution Generalization | https://papers.nips.cc/paper_files/paper/2022/hash/13b8d8fb8d05369480c2c344f2ce3f25-Abstract-Conference.html | Kha Pham, Thai Hung Le, Man Ngo, Truyen Tran | https://papers.nips.cc/paper_files/paper/2022/hash/13b8d8fb8d05369480c2c344f2ce3f25-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17335-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/13b8d8fb8d05369480c2c344f2ce3f25-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/13b8d8fb8d05369480c2c344f2ce3f25-Supplemental-Conference.zip | The capacity to achieve out-of-distribution (OOD) generalization is a hallmark of human intelligence and yet remains out of reach for machines. This remarkable capability has been attributed to our abilities to make conceptual abstraction and analogy, and to a mechanism known as indirection, which binds two representations and uses one representation to refer to the other. Inspired by these mechanisms, we hypothesize that OOD generalization may be achieved by performing analogy-making and indirection in the functional space instead of the data space as in current methods. To realize this, we design FINE (Functional Indirection Neural Estimator), a neural framework that learns to compose functions that map data input to output on-the-fly. FINE consists of a backbone network and a trainable semantic memory of basis weight matrices. Upon seeing a new input-output data pair, FINE dynamically constructs the backbone weights by mixing the basis weights. The mixing coefficients are indirectly computed through querying a separate corresponding semantic memory using the data pair. We demonstrate empirically that FINE can strongly improve out-of-distribution generalization on IQ tasks that involve geometric transformations. In particular, we train FINE and competing models on IQ tasks using images from the MNIST, Omniglot and CIFAR100 datasets and test on tasks with unseen image classes from one or different datasets and unseen transformation rules. FINE not only achieves the best performance on all tasks but also is able to adapt to small-scale data scenarios. | null | null |
Combinatorial Bandits with Linear Constraints: Beyond Knapsacks and Fairness | https://papers.nips.cc/paper_files/paper/2022/hash/13f17f74ec061f1e3e231aca9a43ff23-Abstract-Conference.html | Qingsong Liu, Weihang Xu, Siwei Wang, Zhixuan Fang | https://papers.nips.cc/paper_files/paper/2022/hash/13f17f74ec061f1e3e231aca9a43ff23-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18091-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/13f17f74ec061f1e3e231aca9a43ff23-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/13f17f74ec061f1e3e231aca9a43ff23-Supplemental-Conference.pdf | This paper proposes and studies for the first time the problem of combinatorial multi-armed bandits with linear long-term constraints. Our model generalizes and unifies several prominent lines of work, including bandits with fairness constraints, bandits with knapsacks (BwK), etc. We propose an upper-confidence bound LP-style algorithm for this problem, called UCB-LP, and prove that it achieves a logarithmic problem-dependent regret bound and zero constraint violations in expectation. In the special case of fairness constraints, we further provide a sharper constant regret bound for UCB-LP. Our regret bounds outperform the existing literature on BwK and bandits with fairness constraints simultaneously. We also develop another low-complexity version of UCB-LP and show that it yields $\tilde{O}(\sqrt{T})$ problem-independent regret and zero constraint violations with high-probability. Finally, we conduct numerical experiments to validate our theoretical results. | null | null |
Will Bilevel Optimizers Benefit from Loops | https://papers.nips.cc/paper_files/paper/2022/hash/1413947ef79a733e4b839d339e3dffa7-Abstract-Conference.html | Kaiyi Ji, Mingrui Liu, Yingbin Liang, Lei Ying | https://papers.nips.cc/paper_files/paper/2022/hash/1413947ef79a733e4b839d339e3dffa7-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18840-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1413947ef79a733e4b839d339e3dffa7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1413947ef79a733e4b839d339e3dffa7-Supplemental-Conference.pdf | Bilevel optimization has arisen as a powerful tool for solving a variety of machine learning problems. Two current popular bilevel optimizers AID-BiO and ITD-BiO naturally involve solving one or two sub-problems, and consequently, whether we solve these problems with loops (that take many iterations) or without loops (that take only a few iterations) can significantly affect the overall computational efficiency. Existing studies in the literature cover only some of those implementation choices, and the complexity bounds available are not refined enough to enable rigorous comparison among different implementations. In this paper, we first establish unified convergence analysis for both AID-BiO and ITD-BiO that are applicable to all implementation choices of loops. We then specialize our results to characterize the computational complexity for all implementations, which enable an explicit comparison among them. Our result indicates that for AID-BiO, the loop for estimating the optimal point of the inner function is beneficial for overall efficiency, although it causes higher complexity for each update step, and the loop for approximating the outer-level Hessian-inverse-vector product reduces the gradient complexity. For ITD-BiO, the two loops always coexist, and our convergence upper and lower bounds show that such loops are necessary to guarantee a vanishing convergence error, whereas the no-loop scheme suffers from an unavoidable non-vanishing convergence error. Our numerical experiments further corroborate our theoretical results. | null | null |
Combining Explicit and Implicit Regularization for Efficient Learning in Deep Networks | https://papers.nips.cc/paper_files/paper/2022/hash/1419d8554191a65ea4f2d8e1057973e4-Abstract-Conference.html | Dan Zhao | https://papers.nips.cc/paper_files/paper/2022/hash/1419d8554191a65ea4f2d8e1057973e4-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18086-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1419d8554191a65ea4f2d8e1057973e4-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1419d8554191a65ea4f2d8e1057973e4-Supplemental-Conference.zip | Works on implicit regularization have studied gradient trajectories during the optimization process to explain why deep networks favor certain kinds of solutions over others. In deep linear networks, it has been shown that gradient descent implicitly regularizes toward low-rank solutions on matrix completion/factorization tasks. Adding depth not only improves performance on these tasks but also acts as an accelerative pre-conditioning that further enhances this bias towards low-rankedness. Inspired by this, we propose an explicit penalty to mirror this implicit bias which only takes effect with certain adaptive gradient optimizers (e.g. Adam). This combination can enable a degenerate single-layer network to achieve low-rank approximations with generalization error comparable to deep linear networks, making depth no longer necessary for learning. The single-layer network also performs competitively or out-performs various approaches for matrix completion over a range of parameter and data regimes despite its simplicity. Together with an optimizer’s inductive bias, our findings suggest that explicit regularization can play a role in designing different, desirable forms of regularization and that a more nuanced understanding of this interplay may be necessary. | null | null |
On A Mallows-type Model For (Ranked) Choices | https://papers.nips.cc/paper_files/paper/2022/hash/145c28cd4b1df9b426990fd68045f4f7-Abstract-Conference.html | Yifan Feng, Yuxuan Tang | https://papers.nips.cc/paper_files/paper/2022/hash/145c28cd4b1df9b426990fd68045f4f7-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17358-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/145c28cd4b1df9b426990fd68045f4f7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/145c28cd4b1df9b426990fd68045f4f7-Supplemental-Conference.pdf | We consider a preference learning setting where every participant chooses an ordered list of $k$ most preferred items among a displayed set of candidates. (The set can be different for every participant.) We identify a distance-based ranking model for the population's preferences and their (ranked) choice behavior. The ranking model resembles the Mallows model but uses a new distance function called Reverse Major Index (RMJ). We find that despite the need to sum over all permutations, the RMJ-based ranking distribution aggregates into (ranked) choice probabilities with simple closed-form expression. We develop effective methods to estimate the model parameters and showcase their generalization power using real data, especially when there is a limited variety of display sets. | null | null |
(De-)Randomized Smoothing for Decision Stump Ensembles | https://papers.nips.cc/paper_files/paper/2022/hash/146b4bab3f8536a07905f25d367b4924-Abstract-Conference.html | Miklós Horváth, Mark Müller, Marc Fischer, Martin Vechev | https://papers.nips.cc/paper_files/paper/2022/hash/146b4bab3f8536a07905f25d367b4924-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18820-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/146b4bab3f8536a07905f25d367b4924-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/146b4bab3f8536a07905f25d367b4924-Supplemental-Conference.pdf | Tree-based models are used in many high-stakes application domains such as finance and medicine, where robustness and interpretability are of utmost importance. Yet, methods for improving and certifying their robustness are severely under-explored, in contrast to those focusing on neural networks. Targeting this important challenge, we propose deterministic smoothing for decision stump ensembles. Whereas most prior work on randomized smoothing focuses on evaluating arbitrary base models approximately under input randomization, the key insight of our work is that decision stump ensembles enable exact yet efficient evaluation via dynamic programming. Importantly, we obtain deterministic robustness certificates, even jointly over numerical and categorical features, a setting ubiquitous in the real world. Further, we derive an MLE-optimal training method for smoothed decision stumps under randomization and propose two boosting approaches to improve their provable robustness. An extensive experimental evaluation on computer vision and tabular data tasks shows that our approach yields significantly higher certified accuracies than the state-of-the-art for tree-based models. We release all code and trained models at https://github.com/eth-sri/drs. | null | null |
Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation | https://papers.nips.cc/paper_files/paper/2022/hash/148c0aeea1c5da82f4fa86a09d4190da-Abstract-Conference.html | Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, Jian Li | https://papers.nips.cc/paper_files/paper/2022/hash/148c0aeea1c5da82f4fa86a09d4190da-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18967-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/148c0aeea1c5da82f4fa86a09d4190da-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/148c0aeea1c5da82f4fa86a09d4190da-Supplemental-Conference.pdf | While large-scale neural language models, such as GPT2 and BART, have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (\textit{e.g.}, greedy search). This phenomenon is counter-intuitive since there are few consecutive sentence-level repetitions in the human corpus (e.g., 0.02\% in Wikitext-103). To investigate the underlying reasons for generating consecutive sentence-level repetitions, we study the relationship between the probability of repetitive tokens and their previous repetitions in context. Through our quantitative experiments, we find that 1) Models have a preference to repeat the previous sentence; 2) The sentence-level repetitions have a \textit{self-reinforcement effect}: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence; 3) The sentences with higher initial probabilities usually have a stronger self-reinforcement effect. Motivated by our findings, we propose a simple and effective training method \textbf{DITTO} (Pseu\underline{D}o-Repet\underline{IT}ion Penaliza\underline{T}i\underline{O}n), where the model learns to penalize probabilities of sentence-level repetitions from synthetic repetitive data. Although our method is motivated by mitigating repetitions, our experiments show that DITTO not only mitigates the repetition issue without sacrificing perplexity, but also achieves better generation quality. Extensive experiments on open-ended text generation (Wikitext-103) and text summarization (CNN/DailyMail) demonstrate the generality and effectiveness of our method. | null | null |
Debiased Machine Learning without Sample-Splitting for Stable Estimators | https://papers.nips.cc/paper_files/paper/2022/hash/1498a03a04f9bcd3a7d44058fc5dc639-Abstract-Conference.html | Qizhao Chen, Vasilis Syrgkanis, Morgane Austern | https://papers.nips.cc/paper_files/paper/2022/hash/1498a03a04f9bcd3a7d44058fc5dc639-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17407-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1498a03a04f9bcd3a7d44058fc5dc639-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1498a03a04f9bcd3a7d44058fc5dc639-Supplemental-Conference.zip | Estimation and inference on causal parameters is typically reduced to a generalized method of moments problem, which involves auxiliary functions that correspond to solutions to a regression or classification problem. A recent line of work on debiased machine learning shows how one can use generic machine learning estimators for these auxiliary problems while maintaining asymptotic normality and root-$n$ consistency of the target parameter of interest, requiring only mean-squared-error guarantees from the auxiliary estimation algorithms. The literature typically requires that these auxiliary problems are fitted on a separate sample or in a cross-fitting manner. We show that when these auxiliary estimation algorithms satisfy natural leave-one-out stability properties, then sample splitting is not required. This allows for sample re-use, which can be beneficial in moderately sized sample regimes. For instance, we show that the stability properties that we propose are satisfied for ensemble bagged estimators, built via sub-sampling without replacement, a popular technique in machine learning practice. | null | null |
Near-Optimal Sample Complexity Bounds for Constrained MDPs | https://papers.nips.cc/paper_files/paper/2022/hash/14a5ebc9cd2e507cd811df78c15bf5d7-Abstract-Conference.html | Sharan Vaswani, Lin Yang, Csaba Szepesvari | https://papers.nips.cc/paper_files/paper/2022/hash/14a5ebc9cd2e507cd811df78c15bf5d7-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17645-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/14a5ebc9cd2e507cd811df78c15bf5d7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/14a5ebc9cd2e507cd811df78c15bf5d7-Supplemental-Conference.pdf | In contrast to the advances in characterizing the sample complexity for solving Markov decision processes (MDPs), the optimal statistical complexity for solving constrained MDPs (CMDPs) remains unknown. We resolve this question by providing minimax upper and lower bounds on the sample complexity for learning near-optimal policies in a discounted CMDP with access to a generative model (simulator). In particular, we design a model-based algorithm that addresses two settings: (i) relaxed feasibility, where small constraint violations are allowed, and (ii) strict feasibility, where the output policy is required to satisfy the constraint. For (i), we prove that our algorithm returns an $\epsilon$-optimal policy with probability $1 - \delta$, by making $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1 - \gamma)^3 \epsilon^2}\right)$ queries to the generative model, thus matching the sample-complexity for unconstrained MDPs. For (ii), we show that the algorithm's sample complexity is upper-bounded by $\tilde{O} \left(\frac{S A \, \log(1/\delta)}{(1 - \gamma)^5 \, \epsilon^2 \zeta^2} \right)$ where $\zeta$ is the problem-dependent Slater constant that characterizes the size of the feasible region. Finally, we prove a matching lower-bound for the strict feasibility setting, thus obtaining the first near minimax optimal bounds for discounted CMDPs. Our results show that learning CMDPs is as easy as MDPs when small constraint violations are allowed, but inherently more difficult when we demand zero constraint violation. | null | null |
Integral Probability Metrics PAC-Bayes Bounds | https://papers.nips.cc/paper_files/paper/2022/hash/14da7aea05debb963b3d8d46449d51a0-Abstract-Conference.html | Ron Amit, Baruch Epstein, Shay Moran, Ron Meir | https://papers.nips.cc/paper_files/paper/2022/hash/14da7aea05debb963b3d8d46449d51a0-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18131-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/14da7aea05debb963b3d8d46449d51a0-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/14da7aea05debb963b3d8d46449d51a0-Supplemental-Conference.pdf | We present a PAC-Bayes-style generalization bound which enables the replacement of the KL-divergence with a variety of Integral Probability Metrics (IPM). We provide instances of this bound with the IPM being the total variation metric and the Wasserstein distance. A notable feature of the obtained bounds is that they naturally interpolate between classical uniform convergence bounds in the worst case (when the prior and posterior are far away from each other), and improved bounds in favorable cases (when the posterior and prior are close). This illustrates the possibility of reinforcing classical generalization bounds with algorithm- and data-dependent components, thus making them more suitable to analyze algorithms that use a large hypothesis space. | null | null |
Bellman Residual Orthogonalization for Offline Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2022/hash/14ecbfb2216bab76195b60bfac7efb1f-Abstract-Conference.html | Andrea Zanette, Martin J Wainwright | https://papers.nips.cc/paper_files/paper/2022/hash/14ecbfb2216bab76195b60bfac7efb1f-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19058-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/14ecbfb2216bab76195b60bfac7efb1f-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/14ecbfb2216bab76195b60bfac7efb1f-Supplemental-Conference.pdf | We propose and analyze a reinforcement learning principle that approximates the Bellman equations by enforcing their validity only along a user-defined space of test functions. Focusing on applications to model-free offline RL with function approximation, we exploit this principle to derive confidence intervals for off-policy evaluation, as well as to optimize over policies within a prescribed policy class. We prove an oracle inequality on our policy optimization procedure in terms of a trade-off between the value and uncertainty of an arbitrary comparator policy. Different choices of test function spaces allow us to tackle different problems within a common framework. We characterize the loss of efficiency in moving from on-policy to off-policy data using our procedures, and establish connections to concentrability coefficients studied in past work. We examine in depth the implementation of our methods with linear function approximation, and provide theoretical guarantees with polynomial-time implementations even when Bellman closure does not hold. | null | null |
Quantum Speedups of Optimizing Approximately Convex Functions with Applications to Logarithmic Regret Stochastic Convex Bandits | https://papers.nips.cc/paper_files/paper/2022/hash/14f75513f0f1ca01de1e826b52e6b840-Abstract-Conference.html | Tongyang Li, Ruizhe Zhang | https://papers.nips.cc/paper_files/paper/2022/hash/14f75513f0f1ca01de1e826b52e6b840-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18095-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/14f75513f0f1ca01de1e826b52e6b840-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/14f75513f0f1ca01de1e826b52e6b840-Supplemental-Conference.pdf | We initiate the study of quantum algorithms for optimizing approximately convex functions. Given a convex set $\mathcal{K}\subseteq\mathbb{R}^{n}$ and a function $F\colon\mathbb{R}^{n}\to\mathbb{R}$ such that there exists a convex function $f\colon\mathcal{K}\to\mathbb{R}$ satisfying $\sup_{x\in\mathcal{K}}|F(x)-f(x)|\leq \epsilon/n$, our quantum algorithm finds an $x^{*}\in\mathcal{K}$ such that $F(x^{*})-\min_{x\in\mathcal{K}} F(x)\leq\epsilon$ using $\tilde{O}(n^{3})$ quantum evaluation queries to $F$. This achieves a polynomial quantum speedup compared to the best-known classical algorithms. As an application, we give a quantum algorithm for zeroth-order stochastic convex bandits with $\tilde{O}(n^{5}\log^{2} T)$ regret, an exponential speedup in $T$ compared to the classical $\Omega(\sqrt{T})$ lower bound. Technically, we achieve quantum speedup in $n$ by exploiting a quantum framework of simulated annealing and adopting a quantum version of the hit-and-run walk. Our speedup in $T$ for zeroth-order stochastic convex bandits is due to a quadratic quantum speedup in multiplicative error of mean estimation. | null | null |
Learning Neural Acoustic Fields | https://papers.nips.cc/paper_files/paper/2022/hash/151f4dfc71f025ae387e2d7a4ea1639b-Abstract-Conference.html | Andrew Luo, Yilun Du, Michael Tarr, Josh Tenenbaum, Antonio Torralba, Chuang Gan | https://papers.nips.cc/paper_files/paper/2022/hash/151f4dfc71f025ae387e2d7a4ea1639b-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17515-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/151f4dfc71f025ae387e2d7a4ea1639b-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/151f4dfc71f025ae387e2d7a4ea1639b-Supplemental-Conference.zip | Our environment is filled with rich and dynamic acoustic information. When we walk into a cathedral, the reverberations as much as appearance inform us of the sanctuary's wide open space. Similarly, as an object moves around us, we expect the sound emitted to also exhibit this movement. While recent advances in learned implicit functions have led to increasingly higher quality representations of the visual world, there have not been commensurate advances in learning spatial auditory representations. To address this gap, we introduce Neural Acoustic Fields (NAFs), an implicit representation that captures how sounds propagate in a physical scene. By modeling acoustic propagation in a scene as a linear time-invariant system, NAFs learn to continuously map all emitter and listener location pairs to a neural impulse response function that can then be applied to arbitrary sounds. We demonstrate NAFs on both synthetic and real data, and show that the continuous nature of NAFs enables us to render spatial acoustics for a listener at arbitrary locations. We further show that the representation learned by NAFs can help improve visual learning with sparse views. Finally we show that a representation informative of scene structure emerges during the learning of NAFs. | null | null |
A Universal Error Measure for Input Predictions Applied to Online Graph Problems | https://papers.nips.cc/paper_files/paper/2022/hash/15212bd2265c4a3ab0dbc1b1982c1b69-Abstract-Conference.html | Giulia Bernardini, Alexander Lindermayr, Alberto Marchetti-Spaccamela, Nicole Megow, Leen Stougie, Michelle Sweering | https://papers.nips.cc/paper_files/paper/2022/hash/15212bd2265c4a3ab0dbc1b1982c1b69-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18001-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15212bd2265c4a3ab0dbc1b1982c1b69-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/15212bd2265c4a3ab0dbc1b1982c1b69-Supplemental-Conference.zip | We introduce a novel measure for quantifying the error in input predictions. The error is based on a minimum-cost hyperedge cover in a suitably defined hypergraph and provides a general template which we apply to online graph problems. The measure captures errors due to absent predicted requests as well as unpredicted actual requests; hence, predicted and actual inputs can be of arbitrary size. We achieve refined performance guarantees for previously studied network design problems in the online-list model, such as Steiner tree and facility location. Further, we initiate the study of learning-augmented algorithms for online routing problems, such as the online traveling salesperson problem and the online dial-a-ride problem, where (transportation) requests arrive over time (online-time model). We provide a general algorithmic framework and we give error-dependent performance bounds that improve upon known worst-case barriers, when given accurate predictions, at the cost of slightly increased worst-case bounds when given predictions of arbitrary quality. | null | null |
Online Reinforcement Learning for Mixed Policy Scopes | https://papers.nips.cc/paper_files/paper/2022/hash/15349e1c554406b7719d047a498e7117-Abstract-Conference.html | Junzhe Zhang, Elias Bareinboim | https://papers.nips.cc/paper_files/paper/2022/hash/15349e1c554406b7719d047a498e7117-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17642-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15349e1c554406b7719d047a498e7117-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/15349e1c554406b7719d047a498e7117-Supplemental-Conference.pdf | Combination therapy refers to the use of multiple treatments -- such as surgery, medication, and behavioral therapy - to cure a single disease, and has become a cornerstone for treating various conditions including cancer, HIV, and depression. All possible combinations of treatments lead to a collection of treatment regimens (i.e., policies) with mixed scopes, or what physicians could observe and which actions they should take depending on the context. In this paper, we investigate the online reinforcement learning setting for optimizing the policy space with mixed scopes. In particular, we develop novel online algorithms that achieve sublinear regret compared to an optimal agent deployed in the environment. The regret bound has a dependency on the maximal cardinality of the induced state-action space associated with mixed scopes. We further introduce a canonical representation for an arbitrary subset of interventional distributions given a causal diagram, which leads to a non-trivial, minimal representation of the model parameters. | null | null |
Self-explaining deep models with logic rule reasoning | https://papers.nips.cc/paper_files/paper/2022/hash/1548d98b62d3a4382a31ba77d89186cd-Abstract-Conference.html | Seungeon Lee, Xiting Wang, Sungwon Han, Xiaoyuan Yi, Xing Xie, Meeyoung Cha | https://papers.nips.cc/paper_files/paper/2022/hash/1548d98b62d3a4382a31ba77d89186cd-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17824-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1548d98b62d3a4382a31ba77d89186cd-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1548d98b62d3a4382a31ba77d89186cd-Supplemental-Conference.zip | We present SELOR, a framework for integrating self-explaining capabilities into a given deep model to achieve both high prediction performance and human precision. By “human precision”, we refer to the degree to which humans agree with the reasons models provide for their predictions. Human precision affects user trust and allows users to collaborate closely with the model. We demonstrate that logic rule explanations naturally satisfy them with the expressive power required for good predictive performance. We then illustrate how to enable a deep model to predict and explain with logic rules. Our method does not require predefined logic rule sets or human annotations and can be learned efficiently and easily with widely-used deep learning modules in a differentiable way. Extensive experiments show that our method gives explanations closer to human decision logic than other methods while maintaining the performance of the deep learning model. | null | null |
XTC: Extreme Compression for Pre-trained Transformers Made Simple and Efficient | https://papers.nips.cc/paper_files/paper/2022/hash/1579d5d8edacd85ac1a86aea28bdf32d-Abstract-Conference.html | Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He | https://papers.nips.cc/paper_files/paper/2022/hash/1579d5d8edacd85ac1a86aea28bdf32d-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16855-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1579d5d8edacd85ac1a86aea28bdf32d-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1579d5d8edacd85ac1a86aea28bdf32d-Supplemental-Conference.pdf | Extreme compression, particularly ultra-low bit precision (binary/ternary) quantization, has been proposed to fit large NLP models on resource-constrained devices. However, to preserve the accuracy for such aggressive compression schemes, cutting-edge methods usually introduce complicated compression pipelines, e.g., multi-stage expensive knowledge distillation with extensive hyperparameter tuning. Also, they often focus less on smaller transformer models that have already been heavily compressed via knowledge distillation, and they lack a systematic study to show the effectiveness of their methods. In this paper, we perform a comprehensive systematic study to measure the impact of many key hyperparameters and training strategies from previous works. As a result, we find that previous baselines for ultra-low bit precision quantization are significantly under-trained. Based on our study, we propose a simple yet effective compression pipeline for extreme compression. Our simplified pipeline demonstrates that (1) we can skip the pre-training knowledge distillation to obtain a 5-layer BERT while achieving better performance than previous state-of-the-art methods, like TinyBERT; (2) extreme quantization plus layer reduction reduces the model size by 50x, resulting in new state-of-the-art results on GLUE tasks. | null | null |
S3GC: Scalable Self-Supervised Graph Clustering | https://papers.nips.cc/paper_files/paper/2022/hash/15972a9575e0f03bf82f00aebeb40774-Abstract-Conference.html | Fnu Devvrit, Aditya Sinha, Inderjit Dhillon, Prateek Jain | https://papers.nips.cc/paper_files/paper/2022/hash/15972a9575e0f03bf82f00aebeb40774-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16657-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15972a9575e0f03bf82f00aebeb40774-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/15972a9575e0f03bf82f00aebeb40774-Supplemental-Conference.pdf | We study the problem of clustering graphs with additional side-information of node features. The problem is extensively studied, and several existing methods exploit Graph Neural Networks to learn node representations. However, most of the existing methods focus on generic representations instead of their cluster-ability or do not scale to large scale graph datasets. In this work, we propose S3GC which uses contrastive learning along with Graph Neural Networks and node features to learn clusterable features. We empirically demonstrate that S3GC is able to learn the correct cluster structure even when graph information or node features are individually not informative enough to learn correct clusters. Finally, using extensive evaluation on a variety of benchmarks, we demonstrate that S3GC is able to significantly outperform state-of-the-art methods in terms of clustering accuracy -- with as much as 5% gain in NMI -- while being scalable to graphs of size 100M. | null | null |
Contrastive Neural Ratio Estimation | https://papers.nips.cc/paper_files/paper/2022/hash/159f7fe5b51ecd663b85337e8e28ce65-Abstract-Conference.html | Benjamin K Miller, Christoph Weniger, Patrick Forré | https://papers.nips.cc/paper_files/paper/2022/hash/159f7fe5b51ecd663b85337e8e28ce65-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18266-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/159f7fe5b51ecd663b85337e8e28ce65-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/159f7fe5b51ecd663b85337e8e28ce65-Supplemental-Conference.pdf | Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in the position to run diagnostics that practitioners depend on. It also recovers NRE-A in one corner case and NRE-B in the limiting case. For fair comparison, we benchmark the behavior of all algorithms in both familiar and novel training regimes: when jointly drawn data is unlimited, when data is fixed but prior draws are unlimited, and in the commonplace fixed data and parameters setting. Our investigations reveal that the highest performing models are distant from the competitors (NRE-A, NRE-B) in hyperparameter space. We make a recommendation for hyperparameters distinct from the previous models. We suggest a bound on the mutual information as a performance metric for simulation-based inference methods, without the need for posterior samples, and provide experimental results. | null | null |
An Information-Theoretic Framework for Deep Learning | https://papers.nips.cc/paper_files/paper/2022/hash/15cc8e4a46565dab0c1a1220884bd503-Abstract-Conference.html | Hong Jun Jeon, Benjamin Van Roy | https://papers.nips.cc/paper_files/paper/2022/hash/15cc8e4a46565dab0c1a1220884bd503-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16905-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15cc8e4a46565dab0c1a1220884bd503-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/15cc8e4a46565dab0c1a1220884bd503-Supplemental-Conference.pdf | Each year, deep learning demonstrates new and improved empirical results with deeper and wider neural networks. Meanwhile, with existing theoretical frameworks, it is difficult to analyze networks deeper than two layers without resorting to counting parameters or encountering sample complexity bounds that are exponential in depth. It may therefore be fruitful to analyze modern machine learning through a different lens. In this paper, we propose a novel information-theoretic framework with its own notions of regret and sample complexity for analyzing the data requirements of machine learning. We use this framework to study the sample complexity of learning from data generated by deep ReLU neural networks and deep networks that are infinitely wide but have a bounded sum of weights. We establish that the sample complexity of learning under these data generating processes is at most linear and quadratic, respectively, in network depth. | null | null |
Uncoupled Learning Dynamics with $O(\log T)$ Swap Regret in Multiplayer Games | https://papers.nips.cc/paper_files/paper/2022/hash/15d45097f9806983f0629a77e93ee60f-Abstract-Conference.html | Ioannis Anagnostides, Gabriele Farina, Christian Kroer, Chung-Wei Lee, Haipeng Luo, Tuomas Sandholm | https://papers.nips.cc/paper_files/paper/2022/hash/15d45097f9806983f0629a77e93ee60f-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17140-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15d45097f9806983f0629a77e93ee60f-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/15d45097f9806983f0629a77e93ee60f-Supplemental-Conference.pdf | In this paper we establish efficient and \emph{uncoupled} learning dynamics so that, when employed by all players in a general-sum multiplayer game, the \emph{swap regret} of each player after $T$ repetitions of the game is bounded by $O(\log T)$, improving over the prior best bounds of $O(\log^4 (T))$. At the same time, we guarantee optimal $O(\sqrt{T})$ swap regret in the adversarial regime as well. To obtain these results, our primary contribution is to show that when all players follow our dynamics with a \emph{time-invariant} learning rate, the \emph{second-order path lengths} of the dynamics up to time $T$ are bounded by $O(\log T)$, a fundamental property which could have further implications beyond near-optimally bounding the (swap) regret. Our proposed learning dynamics combine in a novel way \emph{optimistic} regularized learning with the use of \emph{self-concordant barriers}. Further, our analysis is remarkably simple, bypassing the cumbersome framework of higher-order smoothness recently developed by Daskalakis, Fishelson, and Golowich (NeurIPS'21). | null | null |
Robust Semi-Supervised Learning when Not All Classes have Labels | https://papers.nips.cc/paper_files/paper/2022/hash/15dce910311b9bd82ca24f634148519a-Abstract-Conference.html | Lan-Zhe Guo, Yi-Ge Zhang, Zhi-Fan Wu, Jie-Jing Shao, Yu-Feng Li | https://papers.nips.cc/paper_files/paper/2022/hash/15dce910311b9bd82ca24f634148519a-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19049-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15dce910311b9bd82ca24f634148519a-Paper-Conference.pdf | null | Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data. Existing SSL methods typically require that all classes have labels. However, in many real-world applications, some classes may be difficult to label, or newly emerging classes cannot be labeled in time, so that unseen classes appear in the unlabeled data. Unseen classes will be misclassified as seen classes, causing poor classification performance. The performance on seen classes is also harmed by the existence of unseen classes. This limits the practical applicability of SSL. To address this problem, this paper proposes a new SSL approach that can classify not only seen classes but also unseen classes. Our approach consists of two modules: unseen class classification and learning pace synchronization. Specifically, we first enable the SSL methods to classify unseen classes by exploiting pairwise similarity between examples and then synchronize the learning pace between seen and unseen classes by proposing an adaptive threshold with distribution alignment. Extensive empirical results show our approach achieves significant performance improvement in both seen and unseen classes compared with previous studies. | null | null |
Private Multiparty Perception for Navigation | https://papers.nips.cc/paper_files/paper/2022/hash/15ddb1773510075ef44981cdb204330b-Abstract-Conference.html | Hui Lu, Mia Chiquier, Carl Vondrick | https://papers.nips.cc/paper_files/paper/2022/hash/15ddb1773510075ef44981cdb204330b-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17367-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/15ddb1773510075ef44981cdb204330b-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/15ddb1773510075ef44981cdb204330b-Supplemental-Conference.pdf | We introduce a framework for navigating through cluttered environments by connecting multiple cameras together while simultaneously preserving privacy. Occlusions and obstacles in large environments are often challenging situations for navigation agents because the environment is not fully observable from a single camera view. Given multiple camera views of an environment, our approach learns to produce a multiview scene representation that can only be used for navigation, provably preventing one party from inferring anything beyond the output task. On a new navigation dataset that we will publicly release, experiments show that private multiparty representations allow navigation through complex scenes and around obstacles while jointly preserving privacy. Our approach scales to an arbitrary number of camera viewpoints. We believe developing visual representations that preserve privacy is increasingly important for many applications such as navigation. | null | null |
Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization | https://papers.nips.cc/paper_files/paper/2022/hash/16063a1c0f0cddd4894585cf44cebb2c-Abstract-Conference.html | Long-Kai Huang, Ying Wei | https://papers.nips.cc/paper_files/paper/2022/hash/16063a1c0f0cddd4894585cf44cebb2c-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18123-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16063a1c0f0cddd4894585cf44cebb2c-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16063a1c0f0cddd4894585cf44cebb2c-Supplemental-Conference.pdf | Recent years have witnessed the rapid development of meta-learning in improving the meta generalization over tasks in few-shot learning. However, the task-specific level generalization is overlooked in most algorithms. For a novel few-shot learning task where the empirical distribution likely deviates from the true distribution, the model obtained via minimizing the empirical loss can hardly generalize to unseen data. A viable solution to improving the generalization comes as a more accurate approximation of the true distribution; that is, admitting a Gaussian-like vicinal distribution for each of the limited training samples. Thereupon we derive the resulting vicinal loss function over vicinities of all training samples and minimize it instead of the conventional empirical loss over training samples only, favorably free from the exhaustive sampling of all vicinal samples. It remains challenging to obtain the statistical parameters of the vicinal distribution for each sample. To tackle this challenge, we further propose to estimate the statistical parameters as the weighted mean and variance of a set of unlabeled data it passed by a random walk starting from training samples. To verify the performance of the proposed method, we conduct experiments on four standard few-shot learning benchmarks and consolidate the superiority of the proposed method over state-of-the-art few-shot learning baselines. | null | null |
C-Mixup: Improving Generalization in Regression | https://papers.nips.cc/paper_files/paper/2022/hash/1626be0ab7f3d7b3c639fbfd5951bc40-Abstract-Conference.html | Huaxiu Yao, Yiping Wang, Linjun Zhang, James Y. Zou, Chelsea Finn | https://papers.nips.cc/paper_files/paper/2022/hash/1626be0ab7f3d7b3c639fbfd5951bc40-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19339-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1626be0ab7f3d7b3c639fbfd5951bc40-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1626be0ab7f3d7b3c639fbfd5951bc40-Supplemental-Conference.pdf | Improving the generalization of deep networks is an important open challenge, particularly in domains without plentiful data. The mixup algorithm improves generalization by linearly interpolating a pair of examples and their corresponding labels. These interpolated examples augment the original training set. Mixup has shown promising results in various classification tasks, but systematic analysis of mixup in regression remains underexplored. Using mixup directly on regression labels can result in arbitrarily incorrect labels. In this paper, we propose a simple yet powerful algorithm, C-Mixup, to improve generalization on regression tasks. In contrast with vanilla mixup, which picks training examples for mixing with uniform probability, C-Mixup adjusts the sampling probability based on the similarity of the labels. Our theoretical analysis confirms that C-Mixup with label similarity obtains a smaller mean square error in supervised regression and meta-regression than vanilla mixup and using feature similarity. Another benefit of C-Mixup is that it can improve out-of-distribution robustness, where the test distribution is different from the training distribution. By selectively interpolating examples with similar labels, it mitigates the effects of domain-associated information and yields domain-invariant representations. We evaluate C-Mixup on eleven datasets, ranging from tabular to video data. Compared to the best prior approach, C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively. Code is released at https://github.com/huaxiuyao/C-Mixup. | null | null |
Generalised Mutual Information for Discriminative Clustering | https://papers.nips.cc/paper_files/paper/2022/hash/16294049ed8de15830ac0b569b97f74a-Abstract-Conference.html | Louis Ohl, Pierre-Alexandre Mattei, Charles Bouveyron, Warith HARCHAOUI, Mickaël Leclercq, Arnaud Droit, Frederic Precioso | https://papers.nips.cc/paper_files/paper/2022/hash/16294049ed8de15830ac0b569b97f74a-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19069-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16294049ed8de15830ac0b569b97f74a-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16294049ed8de15830ac0b569b97f74a-Supplemental-Conference.pdf | In the last decade, successes in deep clustering have mainly involved the mutual information (MI) as an unsupervised objective for training neural networks with increasing regularisations. While the quality of these regularisations has been widely discussed for improvements, little attention has been dedicated to the relevance of MI as a clustering objective. In this paper, we first highlight how the maximisation of MI does not lead to satisfactory clusters. We identify the Kullback-Leibler divergence as the main reason for this behaviour. Hence, we generalise the mutual information by changing its core distance, introducing the generalised mutual information (GEMINI): a set of metrics for unsupervised neural network training. Unlike MI, some GEMINIs do not require regularisations when training. Some of these metrics are geometry-aware thanks to distances or kernels in the data space. Finally, we highlight that GEMINIs can automatically select a relevant number of clusters, a property that has been little studied in the deep clustering context, where the number of clusters is a priori unknown. | null | null |
Consistent Interpolating Ensembles via the Manifold-Hilbert Kernel | https://papers.nips.cc/paper_files/paper/2022/hash/16371a9d5fed65d6d78ca3a7fa6e598c-Abstract-Conference.html | Yutong Wang, Clay Scott | https://papers.nips.cc/paper_files/paper/2022/hash/16371a9d5fed65d6d78ca3a7fa6e598c-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17765-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16371a9d5fed65d6d78ca3a7fa6e598c-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16371a9d5fed65d6d78ca3a7fa6e598c-Supplemental-Conference.pdf | Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime. Such results have been established for a few common classes of methods, but so far not for ensemble methods. We devise an ensemble classification method that simultaneously interpolates the training data, and is consistent for a broad class of data distributions. To this end, we define the manifold-Hilbert kernel for data distributed on a Riemannian manifold. We prove that kernel smoothing regression using the manifold-Hilbert kernel is weakly consistent in the setting of Devroye et al. 1998. For the sphere, we show that the manifold-Hilbert kernel can be realized as a weighted random partition kernel, which arises as an infinite ensemble of partition-based classifiers. | null | null |
Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction | https://papers.nips.cc/paper_files/paper/2022/hash/16415eed5a0a121bfce79924db05d3fe-Abstract-Conference.html | Qiancheng Fu, Qingshan Xu, Yew Soon Ong, Wenbing Tao | https://papers.nips.cc/paper_files/paper/2022/hash/16415eed5a0a121bfce79924db05d3fe-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19163-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16415eed5a0a121bfce79924db05d3fe-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16415eed5a0a121bfce79924db05d3fe-Supplemental-Conference.zip | Recently, neural implicit surfaces learning by volume rendering has become popular for multi-view reconstruction. However, one key challenge remains: existing approaches lack explicit multi-view geometry constraints, hence usually fail to generate geometry-consistent surface reconstruction. To address this challenge, we propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction. We theoretically analyze that there exists a gap between the volume rendering integral and point-based signed distance function (SDF) modeling. To bridge this gap, we directly locate the zero-level set of SDF networks and explicitly perform multi-view geometry optimization by leveraging the sparse geometry from structure from motion (SFM) and photometric consistency in multi-view stereo. This makes our SDF optimization unbiased and allows the multi-view geometry constraints to focus on the true surface optimization. Extensive experiments show that our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions, thus outperforming the state of the art by a large margin. | null | null |
Sublinear Algorithms for Hierarchical Clustering | https://papers.nips.cc/paper_files/paper/2022/hash/16466b6c95c5924784486ac5a3feeb65-Abstract-Conference.html | Arpit Agarwal, Sanjeev Khanna, Huan Li, Prathamesh Patil | https://papers.nips.cc/paper_files/paper/2022/hash/16466b6c95c5924784486ac5a3feeb65-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18611-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16466b6c95c5924784486ac5a3feeb65-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16466b6c95c5924784486ac5a3feeb65-Supplemental-Conference.pdf | Hierarchical clustering over graphs is a fundamental task in data mining and machine learning with applications in many domains including phylogenetics, social network analysis, and information retrieval. Specifically, we consider the recently popularized objective function for hierarchical clustering due to Dasgupta~\cite{Dasgupta16}, namely, minimum cost hierarchical partitioning. Previous algorithms for (approximately) minimizing this objective function require linear time/space complexity. In many applications the underlying graph can be massive in size making it computationally challenging to process the graph even using a linear time/space algorithm. As a result, there is a strong interest in designing algorithms that can perform global computation using only sublinear resources (space, time, and communication). The focus of this work is to study hierarchical clustering for massive graphs under three well-studied models of sublinear computation which focus on space, time, and communication, respectively, as the primary resources to optimize: (1) (dynamic) streaming model where edges are presented as a stream, (2) query model where the graph is queried using neighbor and degree queries, (3) massively parallel computation (MPC) model where the edges of the graph are partitioned over several machines connected via a communication channel. We design sublinear algorithms for hierarchical clustering in all three models above. At the heart of our algorithmic results is a view of the objective in terms of cuts in the graph, which allows us to use a relaxed notion of cut sparsifiers to do hierarchical clustering while introducing only a small distortion in the objective function. Our main algorithmic contributions are then to show how cut sparsifiers of the desired form can be efficiently constructed in the query model and the MPC model. We complement our algorithmic results by establishing nearly matching lower bounds that rule out the possibility of designing algorithms with better performance guarantees in each of these models. | null | null |
Is Sortition Both Representative and Fair? | https://papers.nips.cc/paper_files/paper/2022/hash/165bbd0a0a1b9470ec34d5afec582d2e-Abstract-Conference.html | Soroush Ebadian, Gregory Kehne, Evi Micha, Ariel D. Procaccia, Nisarg Shah | https://papers.nips.cc/paper_files/paper/2022/hash/165bbd0a0a1b9470ec34d5afec582d2e-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16949-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/165bbd0a0a1b9470ec34d5afec582d2e-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/165bbd0a0a1b9470ec34d5afec582d2e-Supplemental-Conference.zip | Sortition is a form of democracy built on random selection of representatives. Two of the key arguments in favor of sortition are that it provides representation (a random panel reflects the composition of the population) and fairness (everyone has a chance to participate). Uniformly random selection is perfectly fair, but is it representative? Towards answering this question, we introduce the notion of a representation metric on the space of individuals, and assume that the cost of an individual for a panel is determined by the $q$-th closest representative; the representation of a (random) panel is measured by the ratio between the (expected) sum of costs of the optimal panel for the individuals and that of the given panel. For $k/2 < q \le k-\Omega(k)$, where $k$ is the panel size, we show that uniform random selection is indeed representative by establishing a constant lower bound on this ratio. By contrast, for $q \leq k/2$, no random selection algorithm that is almost fair can give such a guarantee. We therefore consider relaxed fairness guarantees and develop a new random selection algorithm that sheds light on the tradeoff between representation and fairness. | null | null |
Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis | https://papers.nips.cc/paper_files/paper/2022/hash/1663fba7b56da1e96bed6e30546a07b0-Abstract-Conference.html | Shayegan Omidshafiei, Andrei Kapishnikov, Yannick Assogba, Lucas Dixon, Been Kim | https://papers.nips.cc/paper_files/paper/2022/hash/1663fba7b56da1e96bed6e30546a07b0-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18937-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1663fba7b56da1e96bed6e30546a07b0-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1663fba7b56da1e96bed6e30546a07b0-Supplemental-Conference.pdf | Each year, expert-level performance is attained in increasingly-complex multiagent domains, where notable examples include Go, Poker, and StarCraft II. This rapid progression is accompanied by a commensurate need to better understand how such agents attain this performance, to enable their safe deployment, identify limitations, and reveal potential means of improving them. In this paper we take a step back from performance-focused multiagent learning, and instead turn our attention towards agent behavior analysis. We introduce a model-agnostic method for discovery of behavior clusters in multiagent domains, using variational inference to learn a hierarchy of behaviors at the joint and local agent levels. Our framework makes no assumption about agents' underlying learning algorithms, does not require access to their latent states or policies, and is trained using only offline observational data. We illustrate the effectiveness of our method for enabling the coupled understanding of behaviors at the joint and local agent level, detection of behavior changepoints throughout training, discovery of core behavioral concepts, demonstrate the approach's scalability to a high-dimensional multiagent MuJoCo control domain, and also illustrate that the approach can disentangle previously-trained policies in OpenAI's hide-and-seek domain. | null | null |
Dynamic pricing and assortment under a contextual MNL demand | https://papers.nips.cc/paper_files/paper/2022/hash/1673a54332b2afc905722048c26f5a4c-Abstract-Conference.html | Noemie Perivier, Vineet Goyal | https://papers.nips.cc/paper_files/paper/2022/hash/1673a54332b2afc905722048c26f5a4c-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17247-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1673a54332b2afc905722048c26f5a4c-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1673a54332b2afc905722048c26f5a4c-Supplemental-Conference.pdf | We consider dynamic multi-product pricing and assortment problems under an unknown demand over T periods, where in each period, the seller decides on the price for each product or the assortment of products to offer to a customer who chooses according to an unknown Multinomial Logit Model (MNL). Such problems arise in many applications, including online retail and advertising. We propose a randomized dynamic pricing policy based on a variant of the Online Newton Step algorithm (ONS) that achieves a $O(d\sqrt{T}\log(T))$ regret guarantee under an adversarial arrival model. We also present a new optimistic algorithm for the adversarial MNL contextual bandits problem, which achieves a better dependency than the state-of-the-art algorithms in a problem-dependent constant $\kappa$ (potentially exponentially small). Our regret upper bound scales as $\tilde{O}(d\sqrt{\kappa T}+ \log(T)/\kappa)$, which gives a stronger bound than the existing $\tilde{O}(d\sqrt{T}/\kappa)$ guarantees. | null | null |
DGD^2: A Linearly Convergent Distributed Algorithm For High-dimensional Statistical Recovery | https://papers.nips.cc/paper_files/paper/2022/hash/1687466683649e8bdcdec0e3f5c8de64-Abstract-Conference.html | Marie Maros, Gesualdo Scutari | https://papers.nips.cc/paper_files/paper/2022/hash/1687466683649e8bdcdec0e3f5c8de64-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17436-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1687466683649e8bdcdec0e3f5c8de64-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1687466683649e8bdcdec0e3f5c8de64-Supplemental-Conference.pdf | We study linear regression from data distributed over a network of agents (with no master node) under high-dimensional scaling, which allows the ambient dimension to grow faster than the sample size. We propose a novel decentralization of the projected gradient algorithm whereby agents iteratively update their local estimates by a “double-mixing” mechanism, which suitably combines averages of iterates and gradients of neighbouring nodes. Under standard assumptions on the statistical model and network connectivity, the proposed method enjoys global linear convergence up to the statistical precision of the model. This improves on guarantees of (plain) DGD algorithms, whose iteration complexity grows undesirably with the ambient dimension. Our technical contribution is a novel convergence analysis that resembles (albeit different) algorithmic stability arguments extended to high-dimensions and distributed setting, which is of independent interest. | null | null |
Pseudo-Riemannian Graph Convolutional Networks | https://papers.nips.cc/paper_files/paper/2022/hash/16c628ab12dc4caca8e7712affa6c767-Abstract-Conference.html | Bo Xiong, Shichao Zhu, Nico Potyka, Shirui Pan, Chuan Zhou, Steffen Staab | https://papers.nips.cc/paper_files/paper/2022/hash/16c628ab12dc4caca8e7712affa6c767-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17698-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16c628ab12dc4caca8e7712affa6c767-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16c628ab12dc4caca8e7712affa6c767-Supplemental-Conference.pdf | Graph Convolutional Networks (GCNs) are powerful frameworks for learning embeddings of graph-structured data. GCNs are traditionally studied through the lens of Euclidean geometry. Recent works find that non-Euclidean Riemannian manifolds provide specific inductive biases for embedding hierarchical or spherical data. However, they cannot align well with data of mixed graph topologies. We consider a larger class of pseudo-Riemannian manifolds that generalize hyperboloid and sphere. We develop new geodesic tools that allow for extending neural network operations into geodesically disconnected pseudo-Riemannian manifolds. As a consequence, we derive a pseudo-Riemannian GCN that models data in pseudo-Riemannian manifolds of constant nonzero curvature in the context of graph neural networks. Our method provides a geometric inductive bias that is sufficiently flexible to model mixed heterogeneous topologies like hierarchical graphs with cycles. We demonstrate the representational capabilities of this method by applying it to the tasks of graph reconstruction, node classification, and link prediction on a series of standard graphs with mixed topologies. Empirical results demonstrate that our method outperforms Riemannian counterparts when embedding graphs of complex topologies. | null | null |
CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion | https://papers.nips.cc/paper_files/paper/2022/hash/16e71d1a24b98a02c17b1be1f634f979-Abstract-Conference.html | Philippe Weinzaepfel, Vincent Leroy, Thomas Lucas, Romain BRÉGIER, Yohann Cabon, Vaibhav ARORA, Leonid Antsfeld, Boris Chidlovskii, Gabriela Csurka, Jerome Revaud | https://papers.nips.cc/paper_files/paper/2022/hash/16e71d1a24b98a02c17b1be1f634f979-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17679-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/16e71d1a24b98a02c17b1be1f634f979-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/16e71d1a24b98a02c17b1be1f634f979-Supplemental-Conference.zip | Masked Image Modeling (MIM) has recently been established as a potent pre-training paradigm. A pretext task is constructed by masking patches in an input image, and this masked content is then predicted by a neural network using visible patches as sole input. This pre-training leads to state-of-the-art performance when finetuned for high-level semantic tasks, e.g. image classification and object detection. In this paper we instead seek to learn representations that transfer well to a wide variety of 3D vision and lower-level geometric downstream tasks, such as depth prediction or optical flow estimation. Inspired by MIM, we propose an unsupervised representation learning task trained from pairs of images showing the same scene from different viewpoints. More precisely, we propose the pretext task of cross-view completion where the first input image is partially masked, and this masked content has to be reconstructed from the visible content and the second image. In single-view MIM, the masked content often cannot be inferred precisely from the visible portion only, so the model learns to act as a prior influenced by high-level semantics. In contrast, this ambiguity can be resolved with cross-view completion from the second unmasked image, on the condition that the model is able to understand the spatial relationship between the two images. Our experiments show that our pretext task leads to significantly improved performance for monocular 3D vision downstream tasks such as depth estimation. In addition, our model can be directly applied to binocular downstream tasks like optical flow or relative camera pose estimation, for which we obtain competitive results without bells and whistles, i.e., using a generic architecture without any task-specific design. | null | null |
Sound and Complete Verification of Polynomial Networks | https://papers.nips.cc/paper_files/paper/2022/hash/1700ad4e6252e8f2955909f96367b34d-Abstract-Conference.html | Elias Abad Rocamora, Mehmet Fatih Sahin, Fanghui Liu, Grigorios Chrysos, Volkan Cevher | https://papers.nips.cc/paper_files/paper/2022/hash/1700ad4e6252e8f2955909f96367b34d-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19349-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1700ad4e6252e8f2955909f96367b34d-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1700ad4e6252e8f2955909f96367b34d-Supplemental-Conference.zip | Polynomial Networks (PNs) have demonstrated promising performance on face and image recognition recently. However, robustness of PNs is unclear and thus obtaining certificates becomes imperative for enabling their adoption in real-world applications. Existing verification algorithms on ReLU neural networks (NNs) based on classical branch and bound (BaB) techniques cannot be trivially applied to PN verification. In this work, we devise a new bounding method, equipped with BaB for global convergence guarantees, called Verification of Polynomial Networks or VPN for short. One key insight is that we obtain much tighter bounds than the interval bound propagation (IBP) and DeepT-Fast [Bonaert et al., 2021] baselines. This enables sound and complete PN verification with empirical validation on MNIST, CIFAR10 and STL10 datasets. We believe our method has its own interest to NN verification. The source code is publicly available at https://github.com/megaelius/PNVerification. | null | null |
Better SGD using Second-order Momentum | https://papers.nips.cc/paper_files/paper/2022/hash/1704fe7aaff33a54802b83a016050ab8-Abstract-Conference.html | Hoang Tran, Ashok Cutkosky | https://papers.nips.cc/paper_files/paper/2022/hash/1704fe7aaff33a54802b83a016050ab8-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18976-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1704fe7aaff33a54802b83a016050ab8-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1704fe7aaff33a54802b83a016050ab8-Supplemental-Conference.pdf | We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point in the optimal $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector product computations. Our algorithm uses Hessian-vector products to "correct" a bias term in the momentum of SGD with momentum. This leads to better gradient estimates in a manner analogous to variance reduction methods. In contrast to prior work, we do not require excessively large batch sizes and are able to provide an adaptive algorithm whose convergence rate automatically improves with decreasing variance in the gradient estimates. We validate our results on a variety of large-scale deep learning architectures and benchmark tasks. | null | null
Learning Predictions for Algorithms with Predictions | https://papers.nips.cc/paper_files/paper/2022/hash/17061a94c3c7fda5fa24bbdd1832fa99-Abstract-Conference.html | Misha Khodak, Maria-Florina F. Balcan, Ameet Talwalkar, Sergei Vassilvitskii | https://papers.nips.cc/paper_files/paper/2022/hash/17061a94c3c7fda5fa24bbdd1832fa99-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16950-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/17061a94c3c7fda5fa24bbdd1832fa99-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/17061a94c3c7fda5fa24bbdd1832fa99-Supplemental-Conference.pdf | A burgeoning paradigm in algorithm design is the field of algorithms with predictions, in which algorithms can take advantage of a possibly-imperfect prediction of some aspect of the problem. While much work has focused on using predictions to improve competitive ratios, running times, or other performance measures, less effort has been devoted to the question of how to obtain the predictions themselves, especially in the critical online setting. We introduce a general design approach for algorithms that learn predictors: (1) identify a functional dependence of the performance measure on the prediction quality and (2) apply techniques from online learning to learn predictors, tune robustness-consistency trade-offs, and bound the sample complexity. We demonstrate the effectiveness of our approach by applying it to bipartite matching, ski-rental, page migration, and job scheduling. In several settings we improve upon multiple existing results while utilizing a much simpler analysis, while in the others we provide the first learning-theoretic guarantees. | null | null |
Unsupervised Point Cloud Completion and Segmentation by Generative Adversarial Autoencoding Network | https://papers.nips.cc/paper_files/paper/2022/hash/171846d7af5ea91e63db508154eaffe8-Abstract-Conference.html | Changfeng Ma, Yang Yang, Jie Guo, Fei Pan, Chongjun Wang, Yanwen Guo | https://papers.nips.cc/paper_files/paper/2022/hash/171846d7af5ea91e63db508154eaffe8-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18686-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/171846d7af5ea91e63db508154eaffe8-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/171846d7af5ea91e63db508154eaffe8-Supplemental-Conference.pdf | Most existing point cloud completion methods assume the input partial point cloud is clean, which is not the case in practice, and are generally based on supervised learning. In this paper, we present an unsupervised generative adversarial autoencoding network, named UGAAN, which completes the partial point cloud contaminated by surroundings from real scenes and cuts out the object simultaneously, only using artificial CAD models as assistance. The generator of UGAAN learns to predict the complete point clouds on real data from both the discriminator and the autoencoding process of artificial data. The latent codes from the generator are also fed to the discriminator, which makes the encoder extract only object features rather than noise. We also devise a refiner for generating a better complete cloud, with a segmentation module to separate the object from the background. We train our UGAAN with one real scene dataset and evaluate it with the other two. Extensive experiments and visualization demonstrate our superiority, generalization and robustness. Comparisons against the previous method show that our method achieves state-of-the-art performance on unsupervised point cloud completion and segmentation on real data. | null | null
CalFAT: Calibrated Federated Adversarial Training with Label Skewness | https://papers.nips.cc/paper_files/paper/2022/hash/171c3678c36e39fc0074f3e7332a9a66-Abstract-Conference.html | Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu | https://papers.nips.cc/paper_files/paper/2022/hash/171c3678c36e39fc0074f3e7332a9a66-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17543-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/171c3678c36e39fc0074f3e7332a9a66-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/171c3678c36e39fc0074f3e7332a9a66-Supplemental-Conference.pdf | Recent studies have shown that, like traditional machine learning, federated learning (FL) is also vulnerable to adversarial attacks. To improve the adversarial robustness of FL, federated adversarial training (FAT) methods have been proposed to apply adversarial training locally before global aggregation. Although these methods demonstrate promising results on independent identically distributed (IID) data, they suffer from training instability on non-IID data with label skewness, resulting in degraded natural accuracy. This tends to hinder the application of FAT in real-world applications where the label distribution across the clients is often skewed. In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models. We then propose a Calibrated FAT (CalFAT) approach to tackle the instability issue by calibrating the logits adaptively to balance the classes. We show both theoretically and empirically that the optimization of CalFAT leads to homogeneous local models across the clients and better convergence points. | null | null
Rethinking Generalization in Few-Shot Classification | https://papers.nips.cc/paper_files/paper/2022/hash/1734365bbf243480dbc491a327497cf1-Abstract-Conference.html | Markus Hiller, Rongkai Ma, Mehrtash Harandi, Tom Drummond | https://papers.nips.cc/paper_files/paper/2022/hash/1734365bbf243480dbc491a327497cf1-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17571-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1734365bbf243480dbc491a327497cf1-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1734365bbf243480dbc491a327497cf1-Supplemental-Conference.pdf | Single image-level annotations only correctly describe an often small subset of an image’s content, particularly when complex real-world scenes are depicted. While this might be acceptable in many classification scenarios, it poses a significant challenge for applications where the set of classes differs significantly between training and test time. In this paper, we take a closer look at the implications in the context of few-shot learning. Splitting the input samples into patches and encoding these via the help of Vision Transformers allows us to establish semantic correspondences between local regions across images and independent of their respective class. The most informative patch embeddings for the task at hand are then determined as a function of the support set via online optimization at inference time, additionally providing visual interpretability of ‘what matters most’ in the image. We build on recent advances in unsupervised training of networks via masked image modelling to overcome the lack of fine-grained labels and learn the more general statistical structure of the data while avoiding negative image-level annotation influence, aka supervision collapse. Experimental results show the competitiveness of our approach, achieving new state-of-the-art results on four popular few-shot classification benchmarks for 5-shot and 1-shot scenarios. | null | null |
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing | https://papers.nips.cc/paper_files/paper/2022/hash/1757af1fe1429801bdf3abf5600f8bba-Abstract-Conference.html | Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang | https://papers.nips.cc/paper_files/paper/2022/hash/1757af1fe1429801bdf3abf5600f8bba-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19190-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1757af1fe1429801bdf3abf5600f8bba-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1757af1fe1429801bdf3abf5600f8bba-Supplemental-Conference.pdf | Residual networks have shown great success and become indispensable in today’s deep models. In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training strategy to strengthen the performance of residual networks. As residual networks can be viewed as ensembles of relatively shallow networks (i.e., unraveled view) in prior works, we also start from such a view and consider that the final performance of a residual network is co-determined by a group of sub-networks. Inspired by the social loafing problem of social psychology, we find that residual networks invariably suffer from a similar problem, where sub-networks in a residual network are prone to exert less effort when working as part of the group compared to working alone. We define this previously overlooked problem as network loafing. As social loafing will ultimately cause low individual productivity and reduced overall performance, network loafing will also hinder the performance of a given residual network and its sub-networks. Referring to the solutions of social psychology, we propose stimulative training, which randomly samples a residual sub-network and calculates the KL-divergence loss between the sampled sub-network and the given residual network, to act as extra supervision for sub-networks and make the overall goal consistent. Comprehensive empirical results and theoretical analyses verify that stimulative training can handle the loafing problem well, and improve the performance of a residual network by improving the performance of its sub-networks. The code is available at https://github.com/Sunshine-Ye/NIPS22-ST. | null | null
EGSDE: Unpaired Image-to-Image Translation via Energy-Guided Stochastic Differential Equations | https://papers.nips.cc/paper_files/paper/2022/hash/177d68f4adef163b7b123b5c5adb3c60-Abstract-Conference.html | Min Zhao, Fan Bao, Chongxuan LI, Jun Zhu | https://papers.nips.cc/paper_files/paper/2022/hash/177d68f4adef163b7b123b5c5adb3c60-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17041-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/177d68f4adef163b7b123b5c5adb3c60-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/177d68f4adef163b7b123b5c5adb3c60-Supplemental-Conference.pdf | Score-based diffusion models (SBDMs) have achieved the SOTA FID results in unpaired image-to-image translation (I2I). However, we notice that existing methods totally ignore the training data in the source domain, leading to sub-optimal solutions for unpaired I2I. To this end, we propose energy-guided stochastic differential equations (EGSDE) that employs an energy function pretrained on both the source and target domains to guide the inference process of a pretrained SDE for realistic and faithful unpaired I2I. Building upon two feature extractors, we carefully design the energy function such that it encourages the transferred image to preserve the domain-independent features and discard domain-specific ones. Further, we provide an alternative explanation of the EGSDE as a product of experts, where each of the three experts (corresponding to the SDE and two feature extractors) solely contributes to faithfulness or realism. Empirically, we compare EGSDE to a large family of baselines on three widely-adopted unpaired I2I tasks under four metrics. EGSDE not only consistently outperforms existing SBDMs-based methods in almost all settings but also achieves the SOTA realism results without harming the faithful performance. Furthermore, EGSDE allows for flexible trade-offs between realism and faithfulness and we improve the realism results further (e.g., FID of 51.04 in Cat $\to$ Dog and FID of 50.43 in Wild $\to$ Dog on AFHQ) by tuning hyper-parameters. The code is available at https://github.com/ML-GSAI/EGSDE. | null | null |
Cryptographic Hardness of Learning Halfspaces with Massart Noise | https://papers.nips.cc/paper_files/paper/2022/hash/17826a22eb8b58494dfdfca61e772c39-Abstract-Conference.html | Ilias Diakonikolas, Daniel Kane, Pasin Manurangsi, Lisheng Ren | https://papers.nips.cc/paper_files/paper/2022/hash/17826a22eb8b58494dfdfca61e772c39-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18677-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/17826a22eb8b58494dfdfca61e772c39-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/17826a22eb8b58494dfdfca61e772c39-Supplemental-Conference.pdf | We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples $(\mathbf{x}, y) \in \mathbb{R}^N \times \{ \pm 1\}$, where the distribution of $\mathbf{x}$ is arbitrary and the label $y$ is a Massart corruption of $f(\mathbf{x})$, for an unknown halfspace $f: \mathbb{R}^N \to \{ \pm 1\}$, with flipping probability $\eta(\mathbf{x}) \leq \eta < 1/2$. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small, namely $\mathrm{OPT} = 2^{-\log^{c} (N)}$ for any universal constant $c \in (0, 1)$. Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible. | null | null |
Frank-Wolfe-based Algorithms for Approximating Tyler's M-estimator | https://papers.nips.cc/paper_files/paper/2022/hash/1787533e171dcc8549cc2eb5a4840eec-Abstract-Conference.html | Lior Danon, Dan Garber | https://papers.nips.cc/paper_files/paper/2022/hash/1787533e171dcc8549cc2eb5a4840eec-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19389-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1787533e171dcc8549cc2eb5a4840eec-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1787533e171dcc8549cc2eb5a4840eec-Supplemental-Conference.pdf | Tyler's M-estimator is a well-known procedure for robust and heavy-tailed covariance estimation. Tyler himself suggested an iterative fixed-point algorithm for computing his estimator; however, it requires super-linear (in the size of the data) runtime per iteration, which may be prohibitive at large scale. In this work we propose, to the best of our knowledge, the first Frank-Wolfe-based algorithms for computing Tyler's estimator. One variant uses standard Frank-Wolfe steps, the second also considers \textit{away-steps} (AFW), and the third is a \textit{geodesic} version of AFW (GAFW). AFW provably requires, up to a log factor, only linear time per iteration, while GAFW runs in linear time (up to a log factor) in a large $n$ (number of data-points) regime. All three variants are shown to provably converge to the optimal solution with sublinear rate, under standard assumptions, despite the fact that the underlying optimization problem is neither convex nor smooth. Under an additional fairly mild assumption, that holds with probability 1 when the (normalized) data-points are i.i.d. samples from a continuous distribution supported on the entire unit sphere, AFW and GAFW are proved to converge with linear rates. Importantly, all three variants are parameter-free and use adaptive step-sizes. | null | null
Reinforcement Learning with Non-Exponential Discounting | https://papers.nips.cc/paper_files/paper/2022/hash/178b306c7ee66a66db2171646e17da36-Abstract-Conference.html | Matthias Schultheis, Constantin A. Rothkopf, Heinz Koeppl | https://papers.nips.cc/paper_files/paper/2022/hash/178b306c7ee66a66db2171646e17da36-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17907-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/178b306c7ee66a66db2171646e17da36-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/178b306c7ee66a66db2171646e17da36-Supplemental-Conference.zip | Commonly in reinforcement learning (RL), rewards are discounted over time using an exponential function to model time preference, thereby bounding the expected long-term reward. In contrast, in economics and psychology, it has been shown that humans often adopt a hyperbolic discounting scheme, which is optimal when a specific task termination time distribution is assumed. In this work, we propose a theory for continuous-time model-based reinforcement learning generalized to arbitrary discount functions. This formulation covers the case in which there is a non-exponential random termination time. We derive a Hamilton–Jacobi–Bellman (HJB) equation characterizing the optimal policy and describe how it can be solved using a collocation method, which uses deep learning for function approximation. Further, we show how the inverse RL problem can be approached, in which one tries to recover properties of the discount function given decision data. We validate the applicability of our proposed approach on two simulated problems. Our approach opens the way for the analysis of human discounting in sequential decision-making tasks. | null | null |
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? | https://papers.nips.cc/paper_files/paper/2022/hash/17a234c91f746d9625a75cf8a8731ee2-Abstract-Conference.html | Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy S. Liang | https://papers.nips.cc/paper_files/paper/2022/hash/17a234c91f746d9625a75cf8a8731ee2-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17123-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/17a234c91f746d9625a75cf8a8731ee2-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/17a234c91f746d9625a75cf8a8731ee2-Supplemental-Conference.zip | As the scope of machine learning broadens, we observe a recurring theme of algorithmic monoculture: the same systems, or systems that share components (e.g. datasets, models), are deployed by multiple decision-makers. While sharing offers advantages like amortizing effort, it also has risks. We introduce and formalize one such risk, outcome homogenization: the extent to which particular individuals or groups experience the same outcomes across different deployments. If the same individuals or groups exclusively experience undesirable outcomes, this may institutionalize systemic exclusion and reinscribe social hierarchy. We relate algorithmic monoculture and outcome homogenization by proposing the component sharing hypothesis: if algorithmic systems are increasingly built on the same data or models, then they will increasingly homogenize outcomes. We test this hypothesis on algorithmic fairness benchmarks, demonstrating that increased data-sharing reliably exacerbates homogenization and individual-level effects generally exceed group-level effects. Further, given the current regime in AI of foundation models, i.e. pretrained models that can be adapted to myriad downstream tasks, we test whether model-sharing homogenizes outcomes across tasks. We observe mixed results: we find that for both vision and language settings, the specific methods for adapting a foundation model significantly influence the degree of outcome homogenization. We also identify societal challenges that inhibit the measurement, diagnosis, and rectification of outcome homogenization in deployed machine learning systems. | null | null |
Causal Identification under Markov equivalence: Calculus, Algorithm, and Completeness | https://papers.nips.cc/paper_files/paper/2022/hash/17a9ab4190289f0e1504bbb98d1d111a-Abstract-Conference.html | Amin Jaber, Adele Ribeiro, Jiji Zhang, Elias Bareinboim | https://papers.nips.cc/paper_files/paper/2022/hash/17a9ab4190289f0e1504bbb98d1d111a-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18251-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/17a9ab4190289f0e1504bbb98d1d111a-Paper-Conference.pdf | null | One common task in many data sciences applications is to answer questions about the effect of new interventions, like: `what would happen to $Y$ if we make $X$ equal to $x$ while observing covariates $Z=z$?'. Formally, this is known as conditional effect identification, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. A plethora of methods was developed for solving this problem, including the celebrated do-calculus [Pearl, 1995]. In practice, these results are not always applicable since they require a fully specified causal diagram as input, which is usually not available. In this paper, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams, learnable from observational data. We make the following contributions under this relaxed setting. First, we introduce a new causal calculus, which subsumes the current state-of-the-art, PAG-calculus. Second, we develop an algorithm for conditional effect identification given a PAG and prove it to be both sound and complete. In words, failure of the algorithm to identify a certain effect implies that this effect is not identifiable by any method. Third, we prove the proposed calculus to be complete for the same task. | null | null |
Dynamic Fair Division with Partial Information | https://papers.nips.cc/paper_files/paper/2022/hash/17bb0edcc02bd1f74e771e23b2aa1501-Abstract-Conference.html | Gerdus Benade, Daniel Halpern, Alexandros Psomas | https://papers.nips.cc/paper_files/paper/2022/hash/17bb0edcc02bd1f74e771e23b2aa1501-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17662-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/17bb0edcc02bd1f74e771e23b2aa1501-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/17bb0edcc02bd1f74e771e23b2aa1501-Supplemental-Conference.pdf | We consider the fundamental problem of fairly and efficiently allocating $T$ indivisible items among $n$ agents with additive preferences. The items become available over a sequence of rounds, and every item must be allocated immediately and irrevocably before the next one arrives. Previous work shows that when the agents' valuations for the items are drawn from known distributions, it is possible (under mild technical assumptions) to find allocations that are envy-free with high probability and Pareto efficient ex-post. We study a \emph{partial-information} setting, where it is possible to elicit ordinal but not cardinal information. When a new item arrives, the algorithm can query each agent for the relative rank of this item with respect to a subset of the past items. When values are drawn from i.i.d.\ distributions, we give an algorithm that is envy-free and $(1-\epsilon)$-welfare-maximizing with high probability. We provide similar guarantees (envy-freeness and a constant approximation to welfare with high probability) even with minimally expressive queries that ask for a comparison to a single previous item. For independent but non-identical agents, we obtain envy-freeness and a constant approximation to Pareto efficiency with high probability. We prove that all our results are asymptotically tight. | null | null |
Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning | https://papers.nips.cc/paper_files/paper/2022/hash/18210aa6209b9adfc97b8c17c3741d95-Abstract-Conference.html | Veit David Wild, Robert Hu, Dino Sejdinovic | https://papers.nips.cc/paper_files/paper/2022/hash/18210aa6209b9adfc97b8c17c3741d95-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18870-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18210aa6209b9adfc97b8c17c3741d95-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18210aa6209b9adfc97b8c17c3741d95-Supplemental-Conference.pdf | We develop a framework for generalized variational inference in infinite-dimensional function spaces and use it to construct a method termed Gaussian Wasserstein inference (GWI). GWI leverages the Wasserstein distance between Gaussian measures on the Hilbert space of square-integrable functions in order to determine a variational posterior using a tractable optimization criterion. It avoids pathologies arising in standard variational function space inference. An exciting application of GWI is the ability to use deep neural networks in the variational parametrization of GWI, combining their superior predictive performance with the principled uncertainty quantification analogous to that of Gaussian processes. The proposed method obtains state-of-the-art performance on several benchmark datasets. | null | null |
A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases | https://papers.nips.cc/paper_files/paper/2022/hash/184c1e18d00d7752805324da48ad25be-Abstract-Conference.html | James Harrison, Luke Metz, Jascha Sohl-Dickstein | https://papers.nips.cc/paper_files/paper/2022/hash/184c1e18d00d7752805324da48ad25be-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17411-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/184c1e18d00d7752805324da48ad25be-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/184c1e18d00d7752805324da48ad25be-Supplemental-Conference.pdf | Learned optimizers---neural networks that are trained to act as optimizers---have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize conditions in which optimization is stable, in terms of eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability, and improve the optimizer's inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state of the art learned optimizer---at matched optimizer computational overhead---with regard to optimization performance and meta-training speed, and is capable of generalization to tasks far different from those it was meta-trained on. | null | null
"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach | https://papers.nips.cc/paper_files/paper/2022/hash/185087ea328b4f03ea8fd0c8aa96f747-Abstract-Conference.html | lingyu gu, Yongqi Du, yuan zhang, Di Xie, Shiliang Pu, Robert Qiu, Zhenyu Liao | https://papers.nips.cc/paper_files/paper/2022/hash/185087ea328b4f03ea8fd0c8aa96f747-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16866-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/185087ea328b4f03ea8fd0c8aa96f747-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/185087ea328b4f03ea8fd0c8aa96f747-Supplemental-Conference.pdf | Modern deep neural networks (DNNs) are extremely powerful; however, this comes at the price of increased depth and having more parameters per layer, making their training and inference more computationally challenging. In an attempt to address this key limitation, efforts have been devoted to the compression (e.g., sparsification and/or quantization) of these large-scale machine learning models, so that they can be deployed on low-power IoT devices. In this paper, building upon recent research advances in the neural tangent kernel (NTK) and random matrix theory, we provide a novel compression approach to wide and fully-connected \emph{deep} neural nets. Specifically, we demonstrate that in the high-dimensional regime where the number of data points $n$ and their dimension $p$ are both large, and under a Gaussian mixture model for the data, there exists \emph{asymptotic spectral equivalence} between the NTK matrices for a large family of DNN models. This theoretical result enables "lossless" compression of a given DNN to be performed, in the sense that the compressed network yields asymptotically the same NTK as the original (dense and unquantized) network, with its weights and activations taking values \emph{only} in $\{ 0, \pm 1 \}$ up to scaling. Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme, with code available at https://github.com/Model-Compression/Lossless_Compression. | null | null
Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss | https://papers.nips.cc/paper_files/paper/2022/hash/18561617ca0b4ffa293166b3186e04b0-Abstract-Conference.html | Jason Altschuler, Kunal Talwar | https://papers.nips.cc/paper_files/paper/2022/hash/18561617ca0b4ffa293166b3186e04b0-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17309-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18561617ca0b4ffa293166b3186e04b0-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18561617ca0b4ffa293166b3186e04b0-Supplemental-Conference.pdf | A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm's privacy loss remain open---even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant. This result reveals that all previous analyses for this setting have the wrong qualitative behavior. Specifically, while previous privacy analyses increase ad infinitum in the number of iterations, we show that after a small burn-in period, running SGD longer leaks no further privacy. Our analysis departs from previous approaches based on fast mixing, instead using techniques based on optimal transport (namely, Privacy Amplification by Iteration) and the Sampled Gaussian Mechanism (namely, Privacy Amplification by Sampling). Our techniques readily extend to other settings. | null | null |
Theseus: A Library for Differentiable Nonlinear Optimization | https://papers.nips.cc/paper_files/paper/2022/hash/185969291540b3cd86e70c51e8af5d08-Abstract-Conference.html | Luis Pineda, Taosha Fan, Maurizio Monge, Shobha Venkataraman, Paloma Sodhi, Ricky T. Q. Chen, Joseph Ortiz, Daniel DeTone, Austin Wang, Stuart Anderson, Jing Dong, Brandon Amos, Mustafa Mukadam | https://papers.nips.cc/paper_files/paper/2022/hash/185969291540b3cd86e70c51e8af5d08-Abstract-Conference.html | NIPS 2022 | null | null | null | null | null | null |
Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again | https://papers.nips.cc/paper_files/paper/2022/hash/187d94b3c93343f0e925b5cf729eadd5-Abstract-Conference.html | Xin-Chun Li, Wen-shu Fan, Shaoming Song, Yinchuan Li, bingshuai Li, Shao Yunfeng, De-Chuan Zhan | https://papers.nips.cc/paper_files/paper/2022/hash/187d94b3c93343f0e925b5cf729eadd5-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18637-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/187d94b3c93343f0e925b5cf729eadd5-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/187d94b3c93343f0e925b5cf729eadd5-Supplemental-Conference.pdf | Knowledge Distillation (KD) aims at transferring the knowledge of a well-performed neural network (the {\it teacher}) to a weaker one (the {\it student}). A peculiar phenomenon is that a more accurate model doesn't necessarily teach better, nor can temperature adjustment alleviate the capacity mismatch. To explain this, we decompose the efficacy of KD into three parts: {\it correct guidance}, {\it smooth regularization}, and {\it class discriminability}. The last term describes the distinctness of {\it wrong class probabilities} that the teacher provides in KD. Complex teachers tend to be over-confident and traditional temperature scaling limits the efficacy of {\it class discriminability}, resulting in less discriminative wrong class probabilities. Therefore, we propose {\it Asymmetric Temperature Scaling (ATS)}, which separately applies a higher/lower temperature to the correct/wrong class. ATS enlarges the variance of wrong class probabilities in the teacher's label and makes the students grasp the absolute affinities of wrong classes to the target class as discriminative as possible. Both theoretical analysis and extensive experimental results demonstrate the effectiveness of ATS. The demo developed in Mindspore is available at \url{https://gitee.com/lxcnju/ats-mindspore} and will be available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/ats}. | null | null
Solving Quantitative Reasoning Problems with Language Models | https://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html | Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra | https://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18305-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18abbeef8cfe9203fdf9053c9c4fe191-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18abbeef8cfe9203fdf9053c9c4fe191-Supplemental-Conference.zip | Language models have achieved remarkable performance on a wide range of tasks that require natural language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that require quantitative reasoning, such as solving mathematics, science, and engineering questions at the college level. To help close this gap, we introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content. The model achieves strong performance in a variety of evaluations, including state-of-the-art performance on the MATH dataset. We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences that require quantitative reasoning, and find that the model can correctly answer nearly a quarter of them. | null | null |
Structural Knowledge Distillation for Object Detection | https://papers.nips.cc/paper_files/paper/2022/hash/18c0102cb7f1a02c14f0929089b2e576-Abstract-Conference.html | Philip de Rijk, Lukas Schneider, Marius Cordts, Dariu Gavrila | https://papers.nips.cc/paper_files/paper/2022/hash/18c0102cb7f1a02c14f0929089b2e576-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18398-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18c0102cb7f1a02c14f0929089b2e576-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18c0102cb7f1a02c14f0929089b2e576-Supplemental-Conference.zip | Knowledge Distillation (KD) is a well-known training paradigm in deep neural networks where knowledge acquired by a large teacher model is transferred to a small student. KD has proven to be an effective technique to significantly improve the student's performance for various tasks including object detection. As such, KD techniques mostly rely on guidance at the intermediate feature level, which is typically implemented by minimizing an $\ell_{p}$-norm distance between teacher and student activations during training. In this paper, we propose a replacement for the pixel-wise independent $\ell_{p}$-norm based on the structural similarity (SSIM). By taking into account additional contrast and structural cues, more information within intermediate feature maps can be preserved. Extensive experiments on MSCOCO demonstrate the effectiveness of our method across different training schemes and architectures. Our method adds little computational overhead, is straightforward to implement and at the same time it significantly outperforms the standard $\ell_p$-norms. Moreover, more complex state-of-the-art KD methods using attention-based sampling mechanisms are outperformed, including a +3.5 AP gain using a Faster R-CNN R-50 compared to a vanilla model. | null | null
Thompson Sampling Efficiently Learns to Control Diffusion Processes | https://papers.nips.cc/paper_files/paper/2022/hash/18c54ed6e0cc390d750f64927dbc4e93-Abstract-Conference.html | Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh, Mohsen Bayati | https://papers.nips.cc/paper_files/paper/2022/hash/18c54ed6e0cc390d750f64927dbc4e93-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18200-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18c54ed6e0cc390d750f64927dbc4e93-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18c54ed6e0cc390d750f64927dbc4e93-Supplemental-Conference.pdf | Diffusion processes that evolve according to linear stochastic differential equations are an important family of continuous-time dynamic decision-making models. Optimal policies are well-studied for them, under full certainty about the drift matrices. However, little is known about data-driven control of diffusion processes with uncertain drift matrices as conventional discrete-time analysis techniques are not applicable. In addition, while the task can be viewed as a reinforcement learning problem involving exploration and exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions fast, incurring only a square-root of time regret, and also stabilizes the system in a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem. We validate our theoretical results through empirical simulations with real matrices. Moreover, we observe that Thompson sampling significantly improves (worst-case) regret, compared to the state-of-the-art algorithms, suggesting Thompson sampling explores in a more guarded fashion. Our theoretical analysis involves characterization of a certain \emph{optimality manifold} that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest. | null | null |
Discrete Compositional Representations as an Abstraction for Goal Conditioned Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2022/hash/18ddfb199d71a8a24f83abc1ced077b7-Abstract-Conference.html | Riashat Islam, Hongyu Zang, Anirudh Goyal, Alex M. Lamb, Kenji Kawaguchi, Xin Li, Romain Laroche, Yoshua Bengio, Remi Tachet des Combes | https://papers.nips.cc/paper_files/paper/2022/hash/18ddfb199d71a8a24f83abc1ced077b7-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17570-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18ddfb199d71a8a24f83abc1ced077b7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18ddfb199d71a8a24f83abc1ced077b7-Supplemental-Conference.pdf | Goal-conditioned reinforcement learning (RL) is a promising direction for training agents that are capable of solving multiple tasks and reach a diverse set of objectives. How to \textit{specify} and \textit{ground} these goals in such a way that we can both reliably reach goals during training as well as generalize to new goals during evaluation remains an open area of research. Defining goals in the space of noisy, high-dimensional sensory inputs is one possibility, yet this poses a challenge for training goal-conditioned agents, or even for generalization to novel goals. We propose to address this by learning compositional representations of goals and processing the resulting representation via a discretization bottleneck, for coarser specification of goals, through an approach we call DGRL. We show that discretizing outputs from goal encoders through a bottleneck can work well in goal-conditioned RL setups, by experimentally evaluating this method on tasks ranging from maze environments to complex robotic navigation and manipulation tasks. Additionally, we show a theoretical result which bounds the expected return for goals not observed during training, while still allowing for specifying goals with expressive combinatorial structure. | null | null |
Graph Convolution Network based Recommender Systems: Learning Guarantee and Item Mixture Powered Strategy | https://papers.nips.cc/paper_files/paper/2022/hash/18fd48d9cbbf9a20e434c9d3db6973c5-Abstract-Conference.html | Leyan Deng, Defu Lian, Chenwang Wu, Enhong Chen | https://papers.nips.cc/paper_files/paper/2022/hash/18fd48d9cbbf9a20e434c9d3db6973c5-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18445-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18fd48d9cbbf9a20e434c9d3db6973c5-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18fd48d9cbbf9a20e434c9d3db6973c5-Supplemental-Conference.zip | Inspired by their powerful representation ability on graph-structured data, Graph Convolution Networks (GCNs) have been widely applied to recommender systems, and have shown superior performance. Despite their empirical success, there is a lack of theoretical explorations such as generalization properties. In this paper, we take a first step towards establishing a generalization guarantee for GCN-based recommendation models under inductive and transductive learning. We mainly investigate the roles of graph normalization and non-linear activation, providing some theoretical understanding, and construct extensive experiments to further verify these findings empirically. Furthermore, based on the proven generalization bound and the challenge of existing models in discrete data learning, we propose Item Mixture (IMix) to enhance recommendation. It models discrete spaces in a continuous manner by mixing the embeddings of positive-negative item pairs, and its effectiveness can be strictly guaranteed from empirical and theoretical aspects. | null | null |
Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions | https://papers.nips.cc/paper_files/paper/2022/hash/18fee39e2666f43cf44425138bae9def-Abstract-Conference.html | Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, Kee-Eung Kim | https://papers.nips.cc/paper_files/paper/2022/hash/18fee39e2666f43cf44425138bae9def-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19107-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/18fee39e2666f43cf44425138bae9def-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/18fee39e2666f43cf44425138bae9def-Supplemental-Conference.pdf | We consider local kernel metric learning for off-policy evaluation (OPE) of deterministic policies in contextual bandits with continuous action spaces. Our work is motivated by practical scenarios where the target policy needs to be deterministic due to domain requirements, such as prescription of treatment dosage and duration in medicine. Although importance sampling (IS) provides a basic principle for OPE, it is ill-posed for a deterministic target policy with continuous actions. Our main idea is to relax the target policy and pose the problem as kernel-based estimation, where we learn the kernel metric in order to minimize the overall mean squared error (MSE). We present an analytic solution for the optimal metric, based on the analysis of bias and variance. Whereas prior work has been limited to scalar action spaces or kernel bandwidth selection, our work takes a step further by handling vector action spaces and optimizing the full metric. We show that our estimator is consistent, and significantly reduces the MSE compared to baseline OPE methods through experiments on various domains. | null | null
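A sketch of a kernel-relaxed importance-sampling estimator for a deterministic target policy with continuous actions, the kind of estimator the abstract builds on; here the kernel metric is supplied by hand rather than learned to minimize MSE as in the paper, and the Gaussian kernel form is an assumption.

```python
import numpy as np

def kernel_ope(contexts, actions, rewards, behavior_density, target_policy, metric):
    """Kernel-relaxed importance-sampling value estimate for a deterministic policy.

    behavior_density(a, x): density of the logged action under the behavior policy.
    target_policy(x):       deterministic action chosen by the target policy.
    metric:                 (d, d) positive-definite matrix defining the kernel.
    """
    d = actions.shape[1]
    inv = np.linalg.inv(metric)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(metric))
    total = 0.0
    for x, a, r in zip(contexts, actions, rewards):
        diff = a - target_policy(x)
        k = norm * np.exp(-0.5 * diff @ inv @ diff)   # Gaussian kernel weight
        total += k * r / behavior_density(a, x)       # kernel-relaxed IS term
    return total / len(rewards)
```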
Quasi-Newton Methods for Saddle Point Problems | https://papers.nips.cc/paper_files/paper/2022/hash/191ebdfc96f43928e278fcf5902be405-Abstract-Conference.html | Chengchang Liu, Luo Luo | https://papers.nips.cc/paper_files/paper/2022/hash/191ebdfc96f43928e278fcf5902be405-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16872-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/191ebdfc96f43928e278fcf5902be405-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/191ebdfc96f43928e278fcf5902be405-Supplemental-Conference.pdf | This paper studies quasi-Newton methods for strongly-convex-strongly-concave saddle point problems. We propose random Broyden family updates, which have an explicit local superlinear convergence rate of ${\mathcal O}\big(\big(1-1/(d\varkappa^2)\big)^{k(k-1)/2}\big)$, where $d$ is the dimension of the problem, $\varkappa$ is the condition number and $k$ is the number of iterations. The design and analysis of the proposed algorithm are based on estimating the square of the indefinite Hessian matrix, which is different from classical quasi-Newton methods in convex optimization. We also present two specific Broyden family algorithms with BFGS-type and SR1-type updates, which enjoy the faster local convergence rate of $\mathcal O\big(\big(1-1/d\big)^{k(k-1)/2}\big)$. Our numerical experiments show that the proposed algorithms outperform classical first-order methods. | null | null
Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency | https://papers.nips.cc/paper_files/paper/2022/hash/194b8dac525581c346e30a2cebe9a369-Abstract-Conference.html | Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, Marinka Zitnik | https://papers.nips.cc/paper_files/paper/2022/hash/194b8dac525581c346e30a2cebe9a369-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19189-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/194b8dac525581c346e30a2cebe9a369-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/194b8dac525581c346e30a2cebe9a369-Supplemental-Conference.pdf | Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most methods need examples directly from the target domain, making them suboptimal for pre-training. To address this challenge, methods need to accommodate target domains with different temporal dynamics and be capable of doing so without seeing any target examples during pre-training. Relative to other modalities, in time series, we expect that time-based and frequency-based representations of the same example are located close together in the time-frequency space. To this end, we posit that time-frequency consistency (TF-C) --- embedding a time-based neighborhood of an example close to its frequency-based neighborhood --- is desirable for pre-training. Motivated by TF-C, we define a decomposable pre-training model, where the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation. We evaluate the new method on eight datasets, including electrodiagnostic testing, human activity recognition, mechanical fault detection, and physical status monitoring. Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) on average in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by 8.4% (precision) in challenging one-to-many settings (e.g., fine-tuning an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction), reflecting the breadth of scenarios that arise in real-world applications. The source code and datasets are available at https://github.com/mims-harvard/TFC-pretraining. | null | null |
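A toy sketch of the time-frequency consistency idea stated above: embeddings of the time view and the frequency view of the same series should lie close together. The random linear "encoders" and the plain squared distance stand in for TF-C's trained encoders and contrastive terms, and are assumptions for illustration only.

```python
import numpy as np

def tf_consistency_loss(x, time_encoder, freq_encoder):
    """Distance between time-based and frequency-based embeddings of one series."""
    z_t = time_encoder(x)                          # embed the raw series
    z_f = freq_encoder(np.abs(np.fft.rfft(x)))     # embed its magnitude spectrum
    z_t = z_t / np.linalg.norm(z_t)
    z_f = z_f / np.linalg.norm(z_f)
    return np.sum((z_t - z_f) ** 2)                # small when the two views agree

# Toy "encoders": random linear maps standing in for trained networks.
rng = np.random.default_rng(0)
W_t = rng.normal(size=(128, 16))
W_f = rng.normal(size=(65, 16))                    # rfft of a length-128 input has 65 bins
series = np.sin(np.linspace(0, 8 * np.pi, 128))
loss = tf_consistency_loss(series, lambda x: x @ W_t, lambda s: s @ W_f)
```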
Uncalibrated Models Can Improve Human-AI Collaboration | https://papers.nips.cc/paper_files/paper/2022/hash/1968ea7d985aa377e3a610b05fc79be0-Abstract-Conference.html | Kailas Vodrahalli, Tobias Gerstenberg, James Y. Zou | https://papers.nips.cc/paper_files/paper/2022/hash/1968ea7d985aa377e3a610b05fc79be0-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18173-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1968ea7d985aa377e3a610b05fc79be0-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1968ea7d985aa377e3a610b05fc79be0-Supplemental-Conference.zip | In many practical applications of AI, an AI model is used as a decision aid for human users. The AI provides advice that a human (sometimes) incorporates into their decision-making process. The AI advice is often presented with some measure of "confidence" that the human can use to calibrate how much they depend on or trust the advice. In this paper, we present an initial exploration that suggests showing AI models as more confident than they actually are, even when the original AI is well-calibrated, can improve human-AI performance (measured as the accuracy and confidence of the human's final prediction after seeing the AI advice). We first train a model to predict human incorporation of AI advice using data from thousands of human-AI interactions. This enables us to explicitly estimate how to transform the AI's prediction confidence, making the AI uncalibrated, in order to improve the final human prediction. We empirically validate our results across four different tasks---dealing with images, text and tabular data---involving hundreds of human participants. We further support our findings with simulation analysis. Our findings suggest the importance of jointly optimizing the human-AI system as opposed to the standard paradigm of optimizing the AI model alone. | null | null |
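A minimal sketch of presenting an AI's advice as more confident than its calibrated probability, here via temperature sharpening of the logit; the paper instead learns the transformation from human-AI interaction data, so the fixed temperature below is an assumption.

```python
import numpy as np

def sharpen_confidence(p, temperature=0.5):
    """Push a calibrated probability p for the predicted class away from 0.5.

    temperature < 1 exaggerates confidence; temperature = 1 leaves it unchanged.
    """
    logit = np.log(p) - np.log1p(-p)
    return 1.0 / (1.0 + np.exp(-logit / temperature))

for p in (0.55, 0.7, 0.9):
    print(p, "->", round(sharpen_confidence(p), 2))
# 0.55 -> 0.6, 0.7 -> 0.84, 0.9 -> 0.99  (shown to the human in place of p)
```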
Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces | https://papers.nips.cc/paper_files/paper/2022/hash/196c4e02b7464c554f0f5646af5d502e-Abstract-Conference.html | Vladimir Kostic, Pietro Novelli, Andreas Maurer, Carlo Ciliberto, Lorenzo Rosasco, Massimiliano Pontil | https://papers.nips.cc/paper_files/paper/2022/hash/196c4e02b7464c554f0f5646af5d502e-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17354-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/196c4e02b7464c554f0f5646af5d502e-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/196c4e02b7464c554f0f5646af5d502e-Supplemental-Conference.pdf | We study a class of dynamical systems modelled as stationary Markov chains that admit an invariant distribution via the corresponding transfer or Koopman operator. While data-driven algorithms to reconstruct such operators are well known, their relationship with statistical learning is largely unexplored. We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system. We consider the restriction of this operator to a reproducing kernel Hilbert space and introduce a notion of risk, from which different estimators naturally arise. We link the risk with the estimation of the spectral decomposition of the Koopman operator. These observations motivate a reduced-rank operator regression (RRR) estimator. We derive learning bounds for the proposed estimator, holding both in i.i.d and non i.i.d. settings, the latter in terms of mixing coefficients. Our results suggest RRR might be beneficial over other widely used estimators as confirmed in numerical experiments both for forecasting and mode decomposition. | null | null |
Self-supervised surround-view depth estimation with volumetric feature fusion | https://papers.nips.cc/paper_files/paper/2022/hash/19a0a55fcb8fc0c31db093941fccd707-Abstract-Conference.html | Jung-Hee Kim, Junhwa Hur, Tien Phuoc Nguyen, Seong-Gyun Jeong | https://papers.nips.cc/paper_files/paper/2022/hash/19a0a55fcb8fc0c31db093941fccd707-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17761-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/19a0a55fcb8fc0c31db093941fccd707-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/19a0a55fcb8fc0c31db093941fccd707-Supplemental-Conference.pdf | We present a self-supervised depth estimation approach using a unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from surround-view images and fusing the feature maps into a shared, unified 3D voxel space. The volumetric feature map can then be used for estimating a depth map at each surround view by projecting it into an image coordinate. A volumetric feature contains 3D information at its local voxel coordinate; thus our method can also synthesize a depth map at arbitrarily rotated viewpoints by projecting the volumetric feature map into the target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatio-temporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms prior art on the DDAD and nuScenes datasets, especially estimating more accurate metric-scale depth and consistent depth between neighboring views. | null | null
On Enforcing Better Conditioned Meta-Learning for Rapid Few-Shot Adaptation | https://papers.nips.cc/paper_files/paper/2022/hash/1a000ee0f122d0bbd3edb9bf55170ea3-Abstract-Conference.html | Markus Hiller, Mehrtash Harandi, Tom Drummond | https://papers.nips.cc/paper_files/paper/2022/hash/1a000ee0f122d0bbd3edb9bf55170ea3-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16684-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a000ee0f122d0bbd3edb9bf55170ea3-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a000ee0f122d0bbd3edb9bf55170ea3-Supplemental-Conference.pdf | Inspired by the concept of preconditioning, we propose a novel method to increase adaptation speed for gradient-based meta-learning methods without incurring extra parameters. We demonstrate that recasting the optimisation problem to a non-linear least-squares formulation provides a principled way to actively enforce a well-conditioned parameter space for meta-learning models based on the concepts of the condition number and local curvature. Our comprehensive evaluations show that the proposed method significantly outperforms its unconstrained counterpart especially during initial adaptation steps, while achieving comparable or better overall results on several few-shot classification tasks – creating the possibility of dynamically choosing the number of adaptation steps at inference time. | null | null |
Oracle-Efficient Online Learning for Smoothed Adversaries | https://papers.nips.cc/paper_files/paper/2022/hash/1a04df6a405210aab4986994b873db9b-Abstract-Conference.html | Nika Haghtalab, Yanjun Han, Abhishek Shetty, Kunhe Yang | https://papers.nips.cc/paper_files/paper/2022/hash/1a04df6a405210aab4986994b873db9b-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19178-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a04df6a405210aab4986994b873db9b-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a04df6a405210aab4986994b873db9b-Supplemental-Conference.pdf | We study the design of computationally efficient online learning algorithms under smoothed analysis. In this setting, at every step, an adversary generates a sample from an adaptively chosen distribution whose density is upper bounded by $1/\sigma$ times the uniform density. Given access to an offline optimization (ERM) oracle, we give the first computationally efficient online algorithms whose sublinear regret depends only on the pseudo/VC dimension $d$ of the class and the smoothness parameter $\sigma$. In particular, we achieve oracle-efficient regret bounds of $ O ( \sqrt{T d\sigma^{-1}} ) $ for learning real-valued functions and $ O ( \sqrt{T d\sigma^{-\frac{1}{2}} } )$ for learning binary-valued functions. Our results establish that online learning is computationally as easy as offline learning, under the smoothed analysis framework. This contrasts with the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for some settings with binary-valued functions and worst-case adversaries. These include an oracle-efficient algorithm with $O ( \sqrt{T(d |\mathcal{X}|)^{1/2} })$ regret that refines the earlier $O ( \sqrt{T|\mathcal{X}|})$ bound of [DS16] for finite domains, and an oracle-efficient algorithm with $O(T^{3/4} d^{1/2})$ regret for the transductive setting. | null | null
A Policy-Guided Imitation Approach for Offline Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2022/hash/1a0755b249b772ed5529796b0a7cc9bd-Abstract-Conference.html | Haoran Xu, Li Jiang, Li Jianxiong, Xianyuan Zhan | https://papers.nips.cc/paper_files/paper/2022/hash/1a0755b249b772ed5529796b0a7cc9bd-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17683-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a0755b249b772ed5529796b0a7cc9bd-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a0755b249b772ed5529796b0a7cc9bd-Supplemental-Conference.pdf | Offline reinforcement learning (RL) methods can generally be categorized into two types: RL-based and Imitation-based. RL-based methods could in principle enjoy out-of-distribution generalization but suffer from erroneous off-policy evaluation. Imitation-based methods avoid off-policy evaluation but are too conservative to surpass the dataset. In this study, we propose an alternative approach, inheriting the training stability of imitation-style methods while still allowing logical out-of-distribution generalization. We decompose the conventional reward-maximizing policy in offline RL into a guide-policy and an execute-policy. During training, the guide-policy and execute-policy are learned using only data from the dataset, in a supervised and decoupled manner. During evaluation, the guide-policy guides the execute-policy by telling it where it should go so that the reward can be maximized, serving as the Prophet. By doing so, our algorithm allows state-compositionality from the dataset, rather than the action-compositionality conducted in prior imitation-style methods. We dub this new approach Policy-guided Offline RL (POR). POR demonstrates state-of-the-art performance on D4RL, a standard benchmark for offline RL. We also highlight the benefits of POR in terms of improving with supplementary suboptimal data and easily adapting to new tasks by only changing the guide-policy. | null | null
Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games | https://papers.nips.cc/paper_files/paper/2022/hash/1a17a06de88cf77f25cda0da91615a54-Abstract-Conference.html | Ziang Song, Song Mei, Yu Bai | https://papers.nips.cc/paper_files/paper/2022/hash/1a17a06de88cf77f25cda0da91615a54-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19166-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a17a06de88cf77f25cda0da91615a54-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a17a06de88cf77f25cda0da91615a54-Supplemental-Conference.pdf | Imperfect-Information Extensive-Form Games (IIEFGs) is a prevalent model for real-world games involving imperfect information and sequential plays. The Extensive-Form Correlated Equilibrium (EFCE) has been proposed as a natural solution concept for multi-player general-sum IIEFGs. However, existing algorithms for finding an EFCE require full feedback from the game, and it remains open how to efficiently learn the EFCE in the more challenging bandit feedback setting where the game can only be learned by observations from repeated playing. This paper presents the first sample-efficient algorithm for learning the EFCE from bandit feedback. We begin by proposing $K$-EFCE---a generalized definition that allows players to observe and deviate from the recommended actions for $K$ times. The $K$-EFCE includes the EFCE as a special case at $K=1$, and is an increasingly stricter notion of equilibrium as $K$ increases. We then design an uncoupled no-regret algorithm that finds an $\varepsilon$-approximate $K$-EFCE within $\widetilde{\mathcal{O}}(\max_{i}X_iA_i^{K}/\varepsilon^2)$ iterations in the full feedback setting, where $X_i$ and $A_i$ are the number of information sets and actions for the $i$-th player. Our algorithm works by minimizing a wide-range regret at each information set that takes into account all possible recommendation histories. Finally, we design a sample-based variant of our algorithm that learns an $\varepsilon$-approximate $K$-EFCE within $\widetilde{\mathcal{O}}(\max_{i}X_iA_i^{K+1}/\varepsilon^2)$ episodes of play in the bandit feedback setting. When specialized to $K=1$, this gives the first sample-efficient algorithm for learning EFCE from bandit feedback. | null | null |
VectorAdam for Rotation Equivariant Geometry Optimization | https://papers.nips.cc/paper_files/paper/2022/hash/1a774f3555593986d7d95e4780d9e4f4-Abstract-Conference.html | Selena Zihan Ling, Nicholas Sharp, Alec Jacobson | https://papers.nips.cc/paper_files/paper/2022/hash/1a774f3555593986d7d95e4780d9e4f4-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18164-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a774f3555593986d7d95e4780d9e4f4-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a774f3555593986d7d95e4780d9e4f4-Supplemental-Conference.zip | The Adam optimization algorithm has proven remarkably effective for optimization problems across machine learning and even traditional tasks in geometry processing. At the same time, the development of equivariant methods, which preserve their output under the action of rotation or some other transformation, has proven to be important for geometry problems across these domains. In this work, we observe that Adam — when treated as a function that maps initial conditions to optimized results — is not rotation equivariant for vector-valued parameters due to per-coordinate moment updates. This leads to significant artifacts and biases in practice. We propose to resolve this deficiency with VectorAdam, a simple modification which makes Adam rotation-equivariant by accounting for the vector structure of optimization variables. We demonstrate this approach on problems in machine learning and traditional geometric optimization, showing that equivariant VectorAdam resolves the artifacts and biases of traditional Adam when applied to vector-valued data, with equivalent or even improved rates of convergence. | null | null |
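A sketch of the per-vector second-moment update the abstract describes: the second moment is accumulated as the squared norm of each vector-valued gradient rather than per coordinate, so the step direction rotates with the data; the hyperparameters follow standard Adam defaults and the toy objective is an assumption.

```python
import numpy as np

def vector_adam_step(x, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One update for vector-valued parameters x of shape (n_vectors, dim).

    Unlike standard Adam, the second moment v is a scalar per *vector*
    (the squared norm of its gradient), so the update is a pure rescaling
    of the first moment and commutes with rotations of the data.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * np.sum(grad ** 2, axis=1, keepdims=True)
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))                 # e.g. 100 mesh vertices in 3D
m, v = np.zeros_like(x), np.zeros((100, 1))
for t in range(1, 51):
    grad = 2 * (x - 1.0)                      # gradient of sum ||x_i - 1||^2
    x, m, v = vector_adam_step(x, grad, m, v, t)
```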
Matrix Multiplicative Weights Updates in Quantum Zero-Sum Games: Conservation Laws & Recurrence | https://papers.nips.cc/paper_files/paper/2022/hash/1a78459dbbcdc90783d183999e72176c-Abstract-Conference.html | Rahul Jain, Georgios Piliouras, Ryann Sim | https://papers.nips.cc/paper_files/paper/2022/hash/1a78459dbbcdc90783d183999e72176c-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16670-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a78459dbbcdc90783d183999e72176c-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a78459dbbcdc90783d183999e72176c-Supplemental-Conference.zip | Recent advances in quantum computing and in particular, the introduction of quantum GANs, have led to increased interest in quantum zero-sum game theory, extending the scope of learning algorithms for classical games into the quantum realm. In this paper, we focus on learning in quantum zero-sum games under Matrix Multiplicative Weights Update (a generalization of the multiplicative weights update method) and its continuous analogue, Quantum Replicator Dynamics. When each player selects their state according to quantum replicator dynamics, we show that the system exhibits conservation laws in a quantum-information theoretic sense. Moreover, we show that the system exhibits Poincare recurrence, meaning that almost all orbits return arbitrarily close to their initial conditions infinitely often. Our analysis generalizes previous results in the case of classical games. | null | null |
On the Convergence Theory for Hessian-Free Bilevel Algorithms | https://papers.nips.cc/paper_files/paper/2022/hash/1a82986c9f321217f2ed407a14dcfa0b-Abstract-Conference.html | Daouda Sow, Kaiyi Ji, Yingbin Liang | https://papers.nips.cc/paper_files/paper/2022/hash/1a82986c9f321217f2ed407a14dcfa0b-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16655-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1a82986c9f321217f2ed407a14dcfa0b-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1a82986c9f321217f2ed407a14dcfa0b-Supplemental-Conference.pdf | Bilevel optimization has arisen as a powerful tool in modern machine learning. However, due to the nested structure of bilevel optimization, even gradient-based methods require second-order derivative approximations via Jacobian- or/and Hessian-vector computations, which can be costly and unscalable in practice. Recently, Hessian-free bilevel schemes have been proposed to resolve this issue, where the general idea is to use zeroth- or first-order methods to approximate the full hypergradient of the bilevel problem. However, we empirically observe that such approximation can lead to large variance and unstable training, but estimating only the response Jacobian matrix as a partial component of the hypergradient turns out to be extremely effective. To this end, we propose a new Hessian-free method, which adopts the zeroth-order-like method to approximate the response Jacobian matrix via taking difference between two optimization paths. Theoretically, we provide the convergence rate analysis for the proposed algorithms, where our key challenge is to characterize the approximation and smoothness properties of the trajectory-dependent estimator, which can be of independent interest. This is the first known convergence rate result for this type of Hessian-free bilevel algorithms. Experimentally, we demonstrate that the proposed algorithms outperform baseline bilevel optimizers on various bilevel problems. Particularly, in our experiment on few-shot meta-learning with ResNet-12 network over the miniImageNet dataset, we show that our algorithm outperforms baseline meta-learning algorithms, while other baseline bilevel optimizers do not solve such meta-learning problems within a comparable time frame. | null | null |
Equivariant Networks for Crystal Structures | https://papers.nips.cc/paper_files/paper/2022/hash/1abed6ee581b9ceb4e2ddf37822c7fcb-Abstract-Conference.html | Oumar Kaba, Siamak Ravanbakhsh | https://papers.nips.cc/paper_files/paper/2022/hash/1abed6ee581b9ceb4e2ddf37822c7fcb-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16628-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1abed6ee581b9ceb4e2ddf37822c7fcb-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1abed6ee581b9ceb4e2ddf37822c7fcb-Supplemental-Conference.pdf | Supervised learning with deep models has tremendous potential for applications in materials science. Recently, graph neural networks have been used in this context, drawing direct inspiration from models for molecules. However, materials are typically much more structured than molecules, which is a feature that these models do not leverage. In this work, we introduce a class of models that are equivariant with respect to crystalline symmetry groups. We do this by defining a generalization of the message passing operations that can be used with more general permutation groups, or that can alternatively be seen as defining an expressive convolution operation on the crystal graph. Empirically, these models achieve competitive results with state-of-the-art on the Materials Project dataset. | null | null |
A General Framework for Auditing Differentially Private Machine Learning | https://papers.nips.cc/paper_files/paper/2022/hash/1add3bbdbc20c403a383482a665eb5a4-Abstract-Conference.html | Fred Lu, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott Zaresky-Williams, Edward Raff, Francis Ferraro, Brian Testa | https://papers.nips.cc/paper_files/paper/2022/hash/1add3bbdbc20c403a383482a665eb5a4-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17922-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1add3bbdbc20c403a383482a665eb5a4-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1add3bbdbc20c403a383482a665eb5a4-Supplemental-Conference.zip | We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification. | null | null |
Generalization Analysis on Learning with a Concurrent Verifier | https://papers.nips.cc/paper_files/paper/2022/hash/1af83ab66b4f07a3f55788e67dab5782-Abstract-Conference.html | Masaaki Nishino, Kengo Nakamura, Norihito Yasuda | https://papers.nips.cc/paper_files/paper/2022/hash/1af83ab66b4f07a3f55788e67dab5782-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19439-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1af83ab66b4f07a3f55788e67dab5782-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1af83ab66b4f07a3f55788e67dab5782-Supplemental-Conference.zip | Machine learning technologies have been used in a wide range of practical systems. In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy some requirements. However, it is difficult to obtain a model that satisfies requirements by just learning from examples. A simple solution is to add a module that checks whether the input-output pairs meet the requirements and then modifies the model's outputs. Such a module, which we call a concurrent verifier (CV), can give a certification, although how the generalizability of the machine learning model changes when using a CV is unclear. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and show a condition under which we can obtain a guaranteed hypothesis using a verifier only at inference time. We also show that typical error bounds based on Rademacher complexity will be no larger than those of the original model when using a CV in multi-class classification and structured prediction settings. | null | null
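A minimal sketch of a concurrent verifier as described above: a wrapper that checks the model's output against a requirement at inference time and substitutes a valid candidate when the check fails; the fallback rule and function names are illustrative assumptions.

```python
def with_concurrent_verifier(model_predict, is_valid, candidates):
    """Wrap a predictor so every returned output satisfies a requirement.

    model_predict(x) -> predicted output
    is_valid(x, y)   -> bool, the requirement check on an input-output pair
    candidates(x)    -> alternative outputs in decreasing score order
    """
    def verified_predict(x):
        y = model_predict(x)
        if is_valid(x, y):
            return y
        # Fall back to the best-scoring candidate that passes the check.
        for y_alt in candidates(x):
            if is_valid(x, y_alt):
                return y_alt
        raise ValueError("no output satisfies the requirement")
    return verified_predict
```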
Spartan: Differentiable Sparsity via Regularized Transportation | https://papers.nips.cc/paper_files/paper/2022/hash/1afb9ca4adf1d9cb3c87ff3e22a29049-Abstract-Conference.html | Kai Sheng Tai, Taipeng Tian, Ser Nam Lim | https://papers.nips.cc/paper_files/paper/2022/hash/1afb9ca4adf1d9cb3c87ff3e22a29049-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18052-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1afb9ca4adf1d9cb3c87ff3e22a29049-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1afb9ca4adf1d9cb3c87ff3e22a29049-Supplemental-Conference.pdf | We present Spartan, a method for training sparse neural network models with a predetermined level of sparsity. Spartan is based on a combination of two techniques: (1) soft top-k masking of low-magnitude parameters via a regularized optimal transportation problem and (2) dual averaging-based parameter updates with hard sparsification in the forward pass. This scheme realizes an exploration-exploitation tradeoff: early in training, the learner is able to explore various sparsity patterns, and as the soft top-k approximation is gradually sharpened over the course of training, the balance shifts towards parameter optimization with respect to a fixed sparsity mask. Spartan is sufficiently flexible to accommodate a variety of sparsity allocation policies, including both unstructured and block-structured sparsity, global and per-layer sparsity budgets, as well as general cost-sensitive sparsity allocation mediated by linear models of per-parameter costs. On ImageNet-1K classification, we demonstrate that training with Spartan yields 95% sparse ResNet-50 models and 90% block sparse ViT-B/16 models while incurring absolute top-1 accuracy losses of less than 1% compared to fully dense training. | null | null |
Focal Modulation Networks | https://papers.nips.cc/paper_files/paper/2022/hash/1b08f585b0171b74d1401a5195e986f1-Abstract-Conference.html | Jianwei Yang, Chunyuan Li, Xiyang Dai, Jianfeng Gao | https://papers.nips.cc/paper_files/paper/2022/hash/1b08f585b0171b74d1401a5195e986f1-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17838-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b08f585b0171b74d1401a5195e986f1-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b08f585b0171b74d1401a5195e986f1-Supplemental-Conference.zip | We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation module for modeling token interactions in vision. Focal modulation comprises three components: $(i)$ hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, $(ii)$ gated aggregation to selectively gather contexts for each query token based on its content, and $(iii)$ element-wise modulation or affine transformation to fuse the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational cost on the tasks of image classification, object detection, and semantic segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224$^2$ and 384$^2$, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1$\times$ outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3$\times$ schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. These results render focal modulation a favorable alternative to SA for effective and efficient visual modeling. Code is available at: https://github.com/microsoft/FocalNet. | null | null |
HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces | https://papers.nips.cc/paper_files/paper/2022/hash/1b115b1feab2198dd0881c57b869ddb7-Abstract-Conference.html | Qing Li, Yu-Shen Liu, Jin-San Cheng, Cheng Wang, Yi Fang, Zhizhong Han | https://papers.nips.cc/paper_files/paper/2022/hash/1b115b1feab2198dd0881c57b869ddb7-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17775-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b115b1feab2198dd0881c57b869ddb7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b115b1feab2198dd0881c57b869ddb7-Supplemental-Conference.pdf | We propose a novel normal estimation method called HSurf-Net, which can accurately predict normals from point clouds with noise and density variations. Previous methods focus on learning point weights to fit neighborhoods into a geometric surface approximated by a polynomial function with a predefined order, based on which normals are estimated. However, fitting surfaces explicitly from raw point clouds suffers from overfitting or underfitting issues caused by inappropriate polynomial orders and outliers, which significantly limits the performance of existing methods. To address these issues, we introduce hyper surface fitting to implicitly learn hyper surfaces, which are represented by multi-layer perceptron (MLP) layers that take point features as input and output surface patterns in a high dimensional feature space. We introduce a novel space transformation module, which consists of a sequence of local aggregation layers and global shift layers, to learn an optimal feature space, and a relative position encoding module to effectively convert point clouds into the learned feature space. Our model learns hyper surfaces from the noise-less features and directly predicts normal vectors. We jointly optimize the MLP weights and module parameters in a data-driven manner to make the model adaptively find the most suitable surface pattern for various points. Experimental results show that our HSurf-Net achieves the state-of-the-art performance on the synthetic shape dataset, the real-world indoor and outdoor scene datasets. The code, data and pretrained models are publicly available. | null | null |
Robust Streaming PCA | https://papers.nips.cc/paper_files/paper/2022/hash/1b11d918b08f781a6c194c6c522edfd6-Abstract-Conference.html | Daniel Bienstock, Minchan Jeong, Apurv Shukla, Se-Young Yun | https://papers.nips.cc/paper_files/paper/2022/hash/1b11d918b08f781a6c194c6c522edfd6-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16973-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b11d918b08f781a6c194c6c522edfd6-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b11d918b08f781a6c194c6c522edfd6-Supplemental-Conference.zip | We consider streaming principal component analysis when the stochastic data-generating model is subject to perturbations. While existing models assume a fixed covariance, we adopt a robust perspective where the covariance matrix belongs to a temporal uncertainty set. Under this setting, we provide fundamental limits on any algorithm recovering principal components. We analyze the convergence of the noisy power method and Oja’s algorithm, both studied for the stationary data generating model, and argue that the noisy power method is rate-optimal in our setting. Finally, we demonstrate the validity of our analysis through numerical experiments. | null | null |
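A sketch of Oja's algorithm, one of the two streaming-PCA updates the abstract analyzes, tracking the leading eigenvector from a stream of samples; the decaying step-size schedule and the toy covariance are assumptions.

```python
import numpy as np

def oja_top_component(stream, dim, lr0=1.0):
    """Track the leading eigenvector of the (possibly drifting) covariance."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    for t, x in enumerate(stream, start=1):
        eta = lr0 / t                      # decaying step size (an assumption)
        w += eta * x * (x @ w)             # Oja update: w <- w + eta * x x^T w
        w /= np.linalg.norm(w)             # project back to the unit sphere
    return w

rng = np.random.default_rng(1)
cov = np.diag([5.0, 1.0, 0.5])
data = rng.multivariate_normal(np.zeros(3), cov, size=2000)
w = oja_top_component(data, dim=3)
print(np.abs(w))                           # approaches [1, 0, 0], the top eigenvector
```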
NeMF: Neural Motion Fields for Kinematic Animation | https://papers.nips.cc/paper_files/paper/2022/hash/1b3750390ca8b931fb9ca988647940cb-Abstract-Conference.html | Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou | https://papers.nips.cc/paper_files/paper/2022/hash/1b3750390ca8b931fb9ca988647940cb-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18876-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b3750390ca8b931fb9ca988647940cb-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b3750390ca8b931fb9ca988647940cb-Supplemental-Conference.zip | We present an implicit neural representation to learn the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for miscellaneous sets of motions, which is designed to be a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ for controlling the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model with a diverse human motion dataset and quadruped dataset to prove its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems and show its superiority in different motion generation and editing applications, such as motion interpolation, in-betweening, and re-navigating. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/nemf/. | null | null |
Global Normalization for Streaming Speech Recognition in a Modular Framework | https://papers.nips.cc/paper_files/paper/2022/hash/1b4839ff1f843b6be059bd0e8437e975-Abstract-Conference.html | Ehsan Variani, Ke Wu, Michael D Riley, David Rybach, Matt Shannon, Cyril Allauzen | https://papers.nips.cc/paper_files/paper/2022/hash/1b4839ff1f843b6be059bd0e8437e975-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17135-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b4839ff1f843b6be059bd0e8437e975-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b4839ff1f843b6be059bd0e8437e975-Supplemental-Conference.pdf | We introduce the Globally Normalized Autoregressive Transducer (GNAT) for addressing the label bias problem in streaming speech recognition. Our solution admits a tractable exact computation of the denominator for the sequence-level normalization. Through theoretical and empirical results, we demonstrate that by switching to a globally normalized model, the word error rate gap between streaming and non-streaming speech-recognition models can be greatly reduced (by more than 50% on the Librispeech dataset). This model is developed in a modular framework which encompasses all the common neural speech recognition models. The modularity of this framework enables controlled comparison of modelling choices and creation of new models. A JAX implementation of our models has been open sourced. | null | null |
Resource-Adaptive Federated Learning with All-In-One Neural Composition | https://papers.nips.cc/paper_files/paper/2022/hash/1b61ad02f2da8450e08bb015638a9007-Abstract-Conference.html | Yiqun Mei, Pengfei Guo, Mo Zhou, Vishal Patel | https://papers.nips.cc/paper_files/paper/2022/hash/1b61ad02f2da8450e08bb015638a9007-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19136-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b61ad02f2da8450e08bb015638a9007-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b61ad02f2da8450e08bb015638a9007-Supplemental-Conference.pdf | Conventional Federated Learning (FL) systems inherently assume a uniform processing capacity among clients for deployed models. However, diverse client hardware often leads to varying computation resources in practice. Such system heterogeneity results in an inevitable trade-off between model complexity and data accessibility as a bottleneck. To avoid such a dilemma and achieve resource-adaptive federated learning, we introduce a simple yet effective mechanism, termed All-In-One Neural Composition, to systematically support training complexity-adjustable models with flexible resource adaption. It is able to efficiently construct models at various complexities using one unified neural basis shared among clients, instead of pruning the global model into local ones. The proposed mechanism endows the system with unhindered access to the full range of knowledge scattered across clients and generalizes existing pruning-based solutions by allowing soft and learnable extraction of low footprint models. Extensive experiment results on popular FL benchmarks demonstrate the effectiveness of our approach. The resulting FL system empowered by our All-In-One Neural Composition, called FLANC, manifests consistent performance gains across diverse system/data heterogeneous setups while keeping high efficiency in computation and communication. | null | null |
SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression | https://papers.nips.cc/paper_files/paper/2022/hash/1b645a77cf48821afc3ee7e5b5d42617-Abstract-Conference.html | Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi | https://papers.nips.cc/paper_files/paper/2022/hash/1b645a77cf48821afc3ee7e5b5d42617-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17742-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1b645a77cf48821afc3ee7e5b5d42617-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1b645a77cf48821afc3ee7e5b5d42617-Supplemental-Conference.pdf | To enable large-scale machine learning in bandwidth-hungry environments such as wireless networks, significant progress has been made recently in designing communication-efficient federated learning algorithms with the aid of communication compression. On the other end, privacy preserving, especially at the client level, is another important desideratum that has not been addressed simultaneously in the presence of advanced communication compression techniques yet. In this paper, we propose a unified framework that enhances the communication efficiency of private federated learning with communication compression. Exploiting both general compression operators and local differential privacy, we first examine a simple algorithm that applies compression directly to differentially-private stochastic gradient descent, and identify its limitations. We then propose a unified framework SoteriaFL for private federated learning, which accommodates a general family of local gradient estimators including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme. We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity, where SoteriaFL is shown to achieve better communication complexity without sacrificing privacy nor utility than other private federated learning algorithms without communication compression. | null | null |
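A sketch of one client update combining gradient clipping, Gaussian noise for local differential privacy, and top-k sparsification before communication, illustrating the compression-plus-privacy pipeline the abstract studies; the noise scale, clipping bound, and plain top-k operator are assumptions rather than SoteriaFL's shifted-compression scheme.

```python
import numpy as np

def private_compressed_update(grad, clip=1.0, noise_mult=1.0, k=100, rng=None):
    """Clip, privatize, and top-k compress a client gradient before sending it."""
    rng = rng or np.random.default_rng()
    # 1) Clip to bound each client's sensitivity.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))
    # 2) Add Gaussian noise calibrated to the clipping bound (local DP).
    noisy = clipped + rng.normal(scale=noise_mult * clip, size=grad.shape)
    # 3) Keep only the k largest-magnitude coordinates (biased compression).
    sparse = np.zeros_like(noisy)
    idx = np.argpartition(np.abs(noisy), -k)[-k:]
    sparse[idx] = noisy[idx]
    return sparse

rng = np.random.default_rng(0)
g = rng.normal(size=10_000)
msg = private_compressed_update(g, k=100, rng=rng)
print(np.count_nonzero(msg))               # only 100 coordinates are transmitted
```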
Your Transformer May Not be as Powerful as You Expect | https://papers.nips.cc/paper_files/paper/2022/hash/1ba5f64159d67775a251cf9ce386a2b9-Abstract-Conference.html | Shengjie Luo, Shanda Li, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He | https://papers.nips.cc/paper_files/paper/2022/hash/1ba5f64159d67775a251cf9ce386a2b9-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17885-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1ba5f64159d67775a251cf9ce386a2b9-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1ba5f64159d67775a251cf9ce386a2b9-Supplemental-Conference.zip | Relative Positional Encoding (RPE), which encodes the relative distance between any pair of tokens, is one of the most successful modifications to the original Transformer. As far as we know, theoretical understanding of the RPE-based Transformers is largely unexplored. In this work, we mathematically analyze the power of RPE-based Transformers regarding whether the model is capable of approximating any continuous sequence-to-sequence functions. One may naturally assume the answer is in the affirmative---RPE-based Transformers are universal function approximators. However, we present a negative result by showing there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is. One key reason lies in that most RPEs are placed in the softmax attention that always generates a right stochastic matrix. This restricts the network from capturing positional information in the RPEs and limits its capacity. To overcome the problem and make the model more powerful, we first present sufficient conditions for RPE-based Transformers to achieve universal function approximation. With the theoretical guidance, we develop a novel attention module, called Universal RPE-based (URPE) Attention, which satisfies the conditions. Therefore, the corresponding URPE-based Transformers become universal function approximators. Extensive experiments covering typical architectures and tasks demonstrate that our model is parameter-efficient and can achieve superior performance to strong baselines in a wide range of applications. The code will be made publicly available at https://github.com/lsj2408/URPE. | null | null |
Redundancy-Free Message Passing for Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2022/hash/1bd6f17639876b4856026744932ec76f-Abstract-Conference.html | Rongqin Chen, Shenghui Zhang, Leong Hou U, Ye Li | https://papers.nips.cc/paper_files/paper/2022/hash/1bd6f17639876b4856026744932ec76f-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19416-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1bd6f17639876b4856026744932ec76f-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1bd6f17639876b4856026744932ec76f-Supplemental-Conference.zip | Graph Neural Networks (GNNs) resemble the Weisfeiler-Lehman (1-WL) test, iteratively updating the representation of each node by aggregating information from its WL-tree. However, despite the computational superiority of the iterative aggregation scheme, it introduces redundant message flows to encode nodes. We find that the redundancy in message passing prevents conventional GNNs from propagating the information of long paths and from learning graph similarities. In order to address this issue, we propose the Redundancy-Free Graph Neural Network (RFGNN), in which the information of each path (of limited length) in the original graph is propagated along a single message flow. Our rigorous theoretical analysis demonstrates the following advantages of RFGNN: (1) RFGNN is strictly more powerful than 1-WL; (2) RFGNN efficiently propagates structural information in the original graph, avoiding the over-squashing issue; and (3) RFGNN captures subgraphs at multiple levels of granularity and is more likely to encode graphs with closer graph edit distances into more similar representations. Experimental evaluation on graph-level prediction benchmarks confirms our theoretical assertions, and RFGNN achieves the best results on most datasets. | null | null
Diffusion-LM Improves Controllable Text Generation | https://papers.nips.cc/paper_files/paper/2022/hash/1be5bc25d50895ee656b8c2d9eb89d6a-Abstract-Conference.html | Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S. Liang, Tatsunori B. Hashimoto | https://papers.nips.cc/paper_files/paper/2022/hash/1be5bc25d50895ee656b8c2d9eb89d6a-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18733-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1be5bc25d50895ee656b8c2d9eb89d6a-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1be5bc25d50895ee656b8c2d9eb89d6a-Supplemental-Conference.pdf | Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation. While recent works have demonstrated successes on controlling simple sentence attributes (e.g., sentiment), there has been little progress on complex, fine-grained controls (e.g., syntactic structure). To address this challenge, we develop a new non-autoregressive language model based on continuous diffusions that we call Diffusion-LM. Building upon the recent successes of diffusion models in continuous domains, Diffusion-LM iteratively denoises a sequence of Gaussian vectors into word vectors, yielding a sequence of intermediate latent variables. The continuous, hierarchical nature of these intermediate variables enables a simple gradient-based algorithm to perform complex, controllable generation tasks. We demonstrate successful control of Diffusion-LM for six challenging fine-grained control tasks, significantly outperforming prior work. | null | null |
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure | https://papers.nips.cc/paper_files/paper/2022/hash/1bed04feb85e5f02a7407fa3b191630b-Abstract-Conference.html | Paul Novello, Thomas FEL, David Vigouroux | https://papers.nips.cc/paper_files/paper/2022/hash/1bed04feb85e5f02a7407fa3b191630b-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/19440-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1bed04feb85e5f02a7407fa3b191630b-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1bed04feb85e5f02a7407fa3b191630b-Supplemental-Conference.pdf | This paper presents a new efficient black-box attribution method built on Hilbert-Schmidt Independence Criterion (HSIC). Based on Reproducing Kernel Hilbert Spaces (RKHS), HSIC measures the dependence between regions of an input image and the output of a model using the kernel embedding of their distributions. It thus provides explanations enriched by RKHS representation capabilities. HSIC can be estimated very efficiently, significantly reducing the computational cost compared to other black-box attribution methods.Our experiments show that HSIC is up to 8 times faster than the previous best black-box attribution methods while being as faithful.Indeed, we improve or match the state-of-the-art of both black-box and white-box attribution methods for several fidelity metrics on Imagenet with various recent model architectures.Importantly, we show that these advances can be transposed to efficiently and faithfully explain object detection models such as YOLOv4. Finally, we extend the traditional attribution methods by proposing a new kernel enabling an ANOVA-like orthogonal decomposition of importance scores based on HSIC, allowing us to evaluate not only the importance of each image patch but also the importance of their pairwise interactions. Our implementation is available at \url{https://github.com/paulnovello/HSIC-Attribution-Method}. | null | null |
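A sketch of the biased empirical HSIC estimator, trace(KHLH)/(n-1)^2 with Gaussian kernels, which is the dependence measure the attribution method above is built on; the fixed bandwidths and the toy mask/output data are assumptions for illustration.

```python
import numpy as np

def gaussian_gram(z, bandwidth):
    """Gaussian-kernel Gram matrix for rows of z, shape (n, d)."""
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def hsic(x, y, bw_x=1.0, bw_y=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
    n = x.shape[0]
    K = gaussian_gram(x, bw_x)
    L = gaussian_gram(y, bw_y)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(200, 1)).astype(float)   # e.g. one image patch on/off
outputs = masks + 0.1 * rng.normal(size=(200, 1))          # model output depends on it
noise = rng.normal(size=(200, 1))                          # an irrelevant input
print(hsic(masks, outputs), hsic(noise, outputs))          # the first value is much larger
```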
Energy-Based Contrastive Learning of Visual Representations | https://papers.nips.cc/paper_files/paper/2022/hash/1bf03a03ca8fc5918fdcacb22e14c374-Abstract-Conference.html | Beomsu Kim, Jong Chul Ye | https://papers.nips.cc/paper_files/paper/2022/hash/1bf03a03ca8fc5918fdcacb22e14c374-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18566-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1bf03a03ca8fc5918fdcacb22e14c374-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1bf03a03ca8fc5918fdcacb22e14c374-Supplemental-Conference.zip | Contrastive learning is a method of learning visual representations by training Deep Neural Networks (DNNs) to increase the similarity between representations of positive pairs (transformations of the same image) and reduce the similarity between representations of negative pairs (transformations of different images). Here we explore Energy-Based Contrastive Learning (EBCLR) that leverages the power of generative learning by combining contrastive learning with Energy-Based Models (EBMs). EBCLR can be theoretically interpreted as learning the joint distribution of positive pairs, and it shows promising results on small and medium-scale datasets such as MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. Specifically, we find EBCLR demonstrates from $\times 4$ up to $\times 20$ acceleration compared to SimCLR and MoCo v2 in terms of training epochs. Furthermore, in contrast to SimCLR, we observe EBCLR achieves nearly the same performance with $254$ negative pairs (batch size $128$) and $30$ negative pairs (batch size $16$) per positive pair, demonstrating the robustness of EBCLR to small numbers of negative pairs. Hence, EBCLR provides a novel avenue for improving contrastive learning methods that usually require large datasets with a significant number of negative pairs per iteration to achieve reasonable performance on downstream tasks. Code: https://github.com/1202kbs/EBCLR | null | null |
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power | https://papers.nips.cc/paper_files/paper/2022/hash/1c0d1b0734b0b94eff0acf0bbedfc671-Abstract-Conference.html | Binghui Li, Jikai Jin, Han Zhong, John Hopcroft, Liwei Wang | https://papers.nips.cc/paper_files/paper/2022/hash/1c0d1b0734b0b94eff0acf0bbedfc671-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/18016-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1c0d1b0734b0b94eff0acf0bbedfc671-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1c0d1b0734b0b94eff0acf0bbedfc671-Supplemental-Conference.zip | It is well-known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be near zero via some methods, all existing algorithms lead to a high robust generalization error. In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of expressive power for deep neural networks. Specifically, for binary classification problems with well-separated data, we show that, for ReLU networks, while mild over-parameterization is sufficient for high robust training accuracy, there exists a constant robust generalization gap unless the size of the neural network is exponential in the data dimension $d$. This result holds even if the data is linear separable (which means achieving standard generalization is easy), and more generally for any parameterized function classes as long as their VC dimension is at most polynomial in the number of parameters. Moreover, we establish an improved upper bound of $\exp({\mathcal{O}}(k))$ for the network size to achieve low robust generalization error when the data lies on a manifold with intrinsic dimension $k$ ($k \ll d$). Nonetheless, we also have a lower bound that grows exponentially with respect to $k$ --- the curse of dimensionality is inevitable. By demonstrating an exponential separation between the network size for achieving low robust training and generalization error, our results reveal that the hardness of robust generalization may stem from the expressive power of practical models. | null | null |
Asynchronous Actor-Critic for Multi-Agent Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2022/hash/1c153788756d35559c22d105d1182c30-Abstract-Conference.html | Yuchen Xiao, Weihao Tan, Christopher Amato | https://papers.nips.cc/paper_files/paper/2022/hash/1c153788756d35559c22d105d1182c30-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16907-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1c153788756d35559c22d105d1182c30-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1c153788756d35559c22d105d1182c30-Supplemental-Conference.zip | Synchronizing decisions across multiple agents in realistic settings is problematic since it requires agents to wait for other agents to terminate and communicate about termination reliably. Ideally, agents should learn and execute asynchronously instead. Such asynchronous methods also allow temporally extended actions that can take different amounts of time based on the situation and action executed. Unfortunately, current policy gradient methods are not applicable in asynchronous settings, as they assume that agents synchronously reason about action selection at every time step. To allow asynchronous learning and decision-making, we formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. Empirical results (in simulation and hardware) in a variety of realistic domains demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms for learning high-quality and asynchronous solutions. | null | null |
Polynomial Neural Fields for Subband Decomposition and Manipulation | https://papers.nips.cc/paper_files/paper/2022/hash/1c364d98a5cdc426fd8c76fbb2c10e34-Abstract-Conference.html | Guandao Yang, Sagie Benaim, Varun Jampani, Kyle Genova, Jonathan Barron, Thomas Funkhouser, Bharath Hariharan, Serge Belongie | https://papers.nips.cc/paper_files/paper/2022/hash/1c364d98a5cdc426fd8c76fbb2c10e34-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17355-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1c364d98a5cdc426fd8c76fbb2c10e34-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1c364d98a5cdc426fd8c76fbb2c10e34-Supplemental-Conference.pdf | Neural fields have emerged as a new paradigm for representing signals, thanks to their ability to do it compactly while being easy to optimize. In most applications, however, neural fields are treated like a black box, which precludes many signal manipulation tasks. In this paper, we propose a new class of neural fields called basis-encoded polynomial neural fields (PNFs). The key advantage of a PNF is that it can represent a signal as a composition of a number of manipulable and interpretable components without losing the merits of neural fields representation. We develop a general theoretical framework to analyze and design PNFs. We use this framework to design Fourier PNFs, which match state-of-the-art performance in signal representation tasks that use neural fields. In addition, we empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space interpolation. Code is available at https://github.com/stevenygd/PNF. | null | null |
On the Generalizability and Predictability of Recommender Systems | https://papers.nips.cc/paper_files/paper/2022/hash/1c446a652e50b1ea5618b66c07bfc0c5-Abstract-Conference.html | Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, John Dickerson, Colin White | https://papers.nips.cc/paper_files/paper/2022/hash/1c446a652e50b1ea5618b66c07bfc0c5-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17617-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1c446a652e50b1ea5618b66c07bfc0c5-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1c446a652e50b1ea5618b66c07bfc0c5-Supplemental-Conference.pdf | While other areas of machine learning have seen more and more automation, designing a high-performing recommender system still requires a high level of human effort. Furthermore, recent work has shown that modern recommender system algorithms do not always improve over well-tuned baselines. A natural follow-up question is, "how do we choose the right algorithm for a new dataset and performance metric?" In this work, we start by giving the first large-scale study of recommender system approaches by comparing 24 algorithms and 100 sets of hyperparameters across 85 datasets and 315 metrics. We find that the best algorithms and hyperparameters are highly dependent on the dataset and performance metric. However, there is also a strong correlation between the performance of each algorithm and various meta-features of the datasets. Motivated by these findings, we create RecZilla, a meta-learning approach to recommender systems that uses a model to predict the best algorithm and hyperparameters for new, unseen datasets. By using far more meta-training data than prior work, RecZilla is able to substantially reduce the level of human involvement when faced with a new recommender system application. We not only release our code and pretrained RecZilla models, but also all of our raw experimental results, so that practitioners can train a RecZilla model for their desired performance metric: https://github.com/naszilla/reczilla. | null | null |
Optimal Rates for Regularized Conditional Mean Embedding Learning | https://papers.nips.cc/paper_files/paper/2022/hash/1c71cd4032da425409d8ada8727bad42-Abstract-Conference.html | Zhu Li, Dimitri Meunier, Mattes Mollenhauer, Arthur Gretton | https://papers.nips.cc/paper_files/paper/2022/hash/1c71cd4032da425409d8ada8727bad42-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17083-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1c71cd4032da425409d8ada8727bad42-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1c71cd4032da425409d8ada8727bad42-Supplemental-Conference.pdf | We address the consistency of a kernel ridge regression estimate of the conditional mean embedding (CME), which is an embedding of the conditional distribution of $Y$ given $X$ into a target reproducing kernel Hilbert space $\mathcal{H}_Y$. The CME allows us to take conditional expectations of target RKHS functions, and has been employed in nonparametric causal and Bayesian inference. We address the misspecified setting, where the target CME is in the space of Hilbert-Schmidt operators acting from an input interpolation space between $\mathcal{H}_X$ and $L_2$, to $\mathcal{H}_Y$. This space of operators is shown to be isomorphic to a newly defined vector-valued interpolation space. Using this isomorphism, we derive a novel and adaptive statistical learning rate for the empirical CME estimator under the misspecified setting. Our analysis reveals that our rates match the optimal $O(\log n / n)$ rates without assuming $\mathcal{H}_Y$ to be finite dimensional. We further establish a lower bound on the learning rate, which shows that the obtained upper bound is optimal. | null | null |
Divert More Attention to Vision-Language Tracking | https://papers.nips.cc/paper_files/paper/2022/hash/1c8c87c36dc1e49e63555f95fa56b153-Abstract-Conference.html | Mingzhe Guo, Zhipeng Zhang, Heng Fan, Liping Jing | https://papers.nips.cc/paper_files/paper/2022/hash/1c8c87c36dc1e49e63555f95fa56b153-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17103-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1c8c87c36dc1e49e63555f95fa56b153-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1c8c87c36dc1e49e63555f95fa56b153-Supplemental-Conference.zip | Relying on Transformers for complex visual feature learning, object tracking has witnessed a new standard for state-of-the-art (SOTA) performance. However, this advancement is accompanied by larger training data and longer training periods, making tracking increasingly expensive. In this paper, we demonstrate that the Transformer reliance is not necessary and that pure ConvNets are still competitive, and even better, while being more economical and friendly in achieving SOTA tracking. Our solution is to unleash the power of multimodal vision-language (VL) tracking, simply using ConvNets. The essence lies in learning novel unified-adaptive VL representations with our modality mixer (ModaMixer) and asymmetrical ConvNet search. We show that our unified-adaptive VL representation, learned purely with ConvNets, is a simple yet strong alternative to Transformer visual features, improving a CNN-based Siamese tracker by a remarkable 14.5% in SUC on the challenging LaSOT benchmark (50.7%$\rightarrow$65.2%) and even outperforming several Transformer-based SOTA trackers. Besides empirical results, we theoretically analyze our approach to evidence its effectiveness. By revealing the potential of VL representations, we expect the community to divert more attention to VL tracking and hope to open more possibilities for future tracking beyond Transformers. Code and models are released at https://github.com/JudasDie/SOTS. | null | null |
Rethinking Image Restoration for Object Detection | https://papers.nips.cc/paper_files/paper/2022/hash/1cac8326ce3fbe79171db9754211530c-Abstract-Conference.html | Shangquan Sun, Wenqi Ren, Tao Wang, Xiaochun Cao | https://papers.nips.cc/paper_files/paper/2022/hash/1cac8326ce3fbe79171db9754211530c-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/16747-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1cac8326ce3fbe79171db9754211530c-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1cac8326ce3fbe79171db9754211530c-Supplemental-Conference.pdf | Although image restoration has achieved significant progress, its potential to assist object detectors in adverse imaging conditions has received too little attention. It has been reported that existing image restoration methods cannot improve object detector performance and sometimes even reduce detection performance. To address this issue, we propose a targeted adversarial attack in the restoration procedure to boost object detection performance after restoration. Specifically, we present an ADAM-like adversarial attack to generate pseudo ground truth for restoration training. The resulting restored images are close to the original sharp images and, at the same time, lead to better object detection results. We conduct extensive experiments on image dehazing and low-light enhancement and show the superiority of our method over conventional training and other domain adaptation and multi-task methods. The proposed pipeline can be applied to all restoration methods and to both one- and two-stage detectors. | null | null |
Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning | https://papers.nips.cc/paper_files/paper/2022/hash/1caf09c9f4e6b0150b06a07e77f2710c-Abstract-Conference.html | Elias Frantar, Dan Alistarh | https://papers.nips.cc/paper_files/paper/2022/hash/1caf09c9f4e6b0150b06a07e77f2710c-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17808-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1caf09c9f4e6b0150b06a07e77f2710c-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1caf09c9f4e6b0150b06a07e77f2710c-Supplemental-Conference.pdf | We consider the problem of model compression for deep neural networks (DNNs) in the challenging one-shot/post-training setting, in which we are given an accurate trained model, and must compress it without any retraining, based only on a small amount of calibration input data. This problem has become popular in view of the emerging software and hardware support for executing models compressed via pruning and/or quantization with speedup, and well-performing solutions have been proposed independently for both compression approaches.In this paper, we introduce a new compression framework which covers both weight pruning and quantization in a unified setting, is time- and space-efficient, and considerably improves upon the practical performance of existing post-training methods. At the technical level, our approach is based on an exact and efficient realization of the classical Optimal Brain Surgeon (OBS) framework of [LeCun, Denker, and Solla, 1990] extended to also cover weight quantization at the scale of modern DNNs. From the practical perspective, our experimental results show that it can improve significantly upon the compression-accuracy trade-offs of existing post-training methods, and that it can enable the accurate compound application of both pruning and quantization in a post-training setting. | null | null |
Challenging Common Assumptions in Convex Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2022/hash/1cb5b3d64bdf3c6642c8d9a8fbecd019-Abstract-Conference.html | Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, Marcello Restelli | https://papers.nips.cc/paper_files/paper/2022/hash/1cb5b3d64bdf3c6642c8d9a8fbecd019-Abstract-Conference.html | NIPS 2022 | https://papers.nips.cc/paper_files/paper/17207-/bibtex | https://papers.nips.cc/paper_files/paper/2022/file/1cb5b3d64bdf3c6642c8d9a8fbecd019-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2022/file/1cb5b3d64bdf3c6642c8d9a8fbecd019-Supplemental-Conference.zip | The classic Reinforcement Learning (RL) formulation concerns the maximization of a scalar reward function. More recently, convex RL has been introduced to extend the RL formulation to all the objectives that are convex functions of the state distribution induced by a policy. Notably, convex RL covers several relevant applications that do not fall into the scalar formulation, including imitation learning, risk-averse RL, and pure exploration. In classic RL, it is common to optimize an infinite trials objective, which accounts for the state distribution instead of the empirical state visitation frequencies, even though the actual number of trajectories is always finite in practice. This is theoretically sound since the infinite trials and finite trials objectives are equivalent and thus lead to the same optimal policy. In this paper, we show that this hidden assumption does not hold in convex RL. In particular, we prove that erroneously optimizing the infinite trials objective in place of the actual finite trials one, as it is usually done, can lead to a significant approximation error. Since the finite trials setting is the default in both simulated and real-world RL, we believe shedding light on this issue will lead to better approaches and methodologies for convex RL, impacting relevant research areas such as imitation learning, risk-averse RL, and pure exploration among others. | null | null |