Dataset fields (each record below lists these fields in this order):
title: string, length 13-150
url: string, length 97
authors: string, length 8-467
detail_url: string, length 97
tags: string, 1 class ("NIPS 2020")
AuthorFeedback: string, length 102
Bibtex: string, length 53-54
MetaReview: string, length 99
Paper: string, length 93
Review: string, length 95
Supplemental: string, length 100
abstract: string, length 53-2k
A graph similarity for deep learning
https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html
Seongmin Ok
https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9725-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Supplemental.pdf
Graph neural networks (GNNs) have been successful in learning representations from graphs. Many popular GNNs follow the pattern of aggregate-transform: they aggregate the neighbors' attributes and then transform the results of aggregation with a learnable function. Analyses of these GNNs explain which pairs of non-identical graphs have different representations. However, we still lack an understanding of how similar these representations will be. We adopt kernel distance and propose transform-sum-cat as an alternative to aggregate-transform to reflect the continuous similarity between the node neighborhoods in the neighborhood aggregation. The idea leads to a simple and efficient graph similarity, which we name Weisfeiler-Leman similarity (WLS). In contrast to existing graph kernels, WLS is easy to implement with common deep learning frameworks. In graph classification experiments, transform-sum-cat significantly outperforms other neighborhood aggregation methods from popular GNN models. We also develop a simple and fast GNN model based on transform-sum-cat, which obtains, in comparison with widely used GNN models, (1) a higher accuracy in node classification, (2) a lower absolute error in graph regression, and (3) greater stability in adversarial training of graph generation.
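A toy sketch of the two aggregation patterns contrasted in this abstract, for a single node. This is only one plausible reading of "aggregate-transform" versus "transform-sum-cat"; the single shared linear map, the ReLU, and the mean aggregator are assumptions for illustration, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy graph: adjacency list and node features.
neighbors = {0: [1, 2], 1: [0], 2: [0, 1]}
H = rng.normal(size=(3, 4))          # node features, d = 4
W = rng.normal(size=(4, 8)) * 0.1    # shared learnable transform (assumed single linear map)

def aggregate_transform(v):
    # Aggregate-transform: aggregate neighbor features first (here: mean), then transform.
    agg = H[neighbors[v]].mean(axis=0)
    return relu(agg @ W)

def transform_sum_cat(v):
    # Transform-sum-cat: transform each neighbor first, sum the results,
    # then concatenate with the node's own transformed representation.
    summed = relu(H[neighbors[v]] @ W).sum(axis=0)
    own = relu(H[v] @ W)
    return np.concatenate([own, summed])

print(aggregate_transform(0).shape, transform_sum_cat(0).shape)  # (8,) (16,)
```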
An Unsupervised Information-Theoretic Perceptual Quality Metric
https://papers.nips.cc/paper_files/paper/2020/hash/00482b9bed15a272730fcb590ffebddd-Abstract.html
Sangnie Bhardwaj, Ian Fischer, Johannes Ballé, Troy Chinen
https://papers.nips.cc/paper_files/paper/2020/hash/00482b9bed15a272730fcb590ffebddd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9726-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-Supplemental.pdf
Tractable models of human perception have proved to be challenging to build. Hand-designed models such as MS-SSIM remain popular predictors of human image quality judgements due to their simplicity and speed. Recent modern deep learning approaches can perform better, but they rely on supervised data which can be costly to gather: large sets of class labels such as ImageNet, image quality ratings, or both. We combine recent advances in information-theoretic objective functions with a computational architecture informed by the physiology of the human visual system and unsupervised training on pairs of video frames, yielding our Perceptual Information Metric (PIM). We show that PIM is competitive with supervised metrics on the recent and challenging BAPPS image quality assessment dataset and outperforms them in predicting the ranking of image compression methods in CLIC 2020. We also perform qualitative experiments using the ImageNet-C dataset, and establish that PIM is robust with respect to architectural details.
Self-Supervised MultiModal Versatile Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0060ef47b12160b9198302ebdb144dcf-Abstract.html
Jean-Baptiste Alayrac, Adria Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman
https://papers.nips.cc/paper_files/paper/2020/hash/0060ef47b12160b9198302ebdb144dcf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9727-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-Review.html
null
Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network -- a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, AudioSet and ESC-50 when compared to previous self-supervised work. Our models are publicly available.
Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method
https://papers.nips.cc/paper_files/paper/2020/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html
Simiao Ren, Willie Padilla, Jordan Malof
https://papers.nips.cc/paper_files/paper/2020/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9728-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-Supplemental.pdf
We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system that will give rise to a particular set of measurements. Recently, many new approaches based upon deep learning have arisen, generating promising results. We conceptualize these models as different schemes for efficiently, but randomly, exploring the space of possible inverse solutions. As a result, the accuracy of each approach should be evaluated as a function of time rather than a single estimated solution, as is often done now. Using this metric, we compare several state-of-the-art inverse modeling approaches on four benchmark tasks: two existing tasks, a new 2-dimensional sinusoid task, and a challenging modern task of meta-material design. Finally, inspired by our conception of the inverse problem, we explore a simple solution that uses a deep neural network as a surrogate (i.e., approximation) for the forward model, and then uses backpropagation with respect to the model input to search for good inverse solutions. Variations of this approach - which we term the neural adjoint (NA) - have been explored recently on specific problems, and here we evaluate it comprehensively on our benchmark. We find that the addition of a simple novel loss term - which we term the boundary loss - dramatically improves the NA’s performance, and it consequently achieves the best (or nearly best) performance in all of our benchmark scenarios.
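The neural-adjoint idea as summarized above (a frozen neural surrogate of the forward model, gradient descent on its input, plus a boundary penalty) can be sketched in a few lines. Everything below is illustrative: the surrogate architecture, the target measurement, the [-1, 1] training range, and the particular form of the boundary loss are assumptions, not the paper's exact choices.

```python
import torch

# Hypothetical pretrained surrogate f_hat: R^d -> R^m approximating the forward model.
d, m = 3, 2
f_hat = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.ReLU(), torch.nn.Linear(64, m))
for p in f_hat.parameters():
    p.requires_grad_(False)          # the surrogate is frozen during the inverse search

y_target = torch.tensor([0.3, -0.1])  # measurements we want to invert
lo, hi = -1.0, 1.0                    # assumed range of the surrogate's training data

z = torch.zeros(d, requires_grad=True)   # candidate inverse solution
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    fit = ((f_hat(z) - y_target) ** 2).mean()
    # Boundary loss (one plausible form): penalize leaving the surrogate's training domain.
    boundary = torch.relu(z - hi).sum() + torch.relu(lo - z).sum()
    (fit + boundary).backward()
    opt.step()
```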
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
https://papers.nips.cc/paper_files/paper/2020/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html
Masatoshi Uehara, Masahiro Kato, Shota Yasui
https://papers.nips.cc/paper_files/paper/2020/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9729-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-Supplemental.pdf
We consider the evaluation and training of a new policy for the evaluation data by using the historical data obtained from a different policy. The goal of off-policy evaluation (OPE) is to estimate the expected reward of a new policy over the evaluation data, and that of off-policy learning (OPL) is to find a new policy that maximizes the expected reward over the evaluation data. Although standard OPE and OPL assume the same covariate distribution for the historical and evaluation data, a covariate shift often exists, i.e., the covariate distribution of the historical data differs from that of the evaluation data. In this paper, we derive the efficiency bound of OPE under a covariate shift. Then, we propose doubly robust and efficient estimators for OPE and OPL under a covariate shift by using an estimator of the density ratio between the distributions of the historical and evaluation data. We also discuss other possible estimators and compare their theoretical properties. Finally, we confirm the effectiveness of the proposed estimators through experiments.
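For orientation, a generic doubly robust estimator with a covariate-shift weight has the following form; the efficient estimator derived in the paper may differ in its exact construction. Here $w$ is the estimated density ratio, $\mu$ the logging (historical) policy, $\pi$ the evaluation policy, and $\hat{q}$ an outcome model, all of which are notation assumed here rather than taken from the abstract.

$$\hat{V}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} w(x_i)\left[\sum_{a}\pi(a\mid x_i)\,\hat{q}(x_i,a) \;+\; \frac{\pi(a_i\mid x_i)}{\mu(a_i\mid x_i)}\bigl(r_i-\hat{q}(x_i,a_i)\bigr)\right], \qquad w(x)=\frac{p_{\mathrm{eval}}(x)}{p_{\mathrm{hist}}(x)}.$$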
Neural Methods for Point-wise Dependency Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/00a03ec6533ca7f5c644d198d815329c-Abstract.html
Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Russ R. Salakhutdinov
https://papers.nips.cc/paper_files/paper/2020/hash/00a03ec6533ca7f5c644d198d815329c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9730-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-Supplemental.pdf
Since its inception, the neural estimation of mutual information (MI) has demonstrated empirical success in modeling the expected dependency between high-dimensional random variables. However, MI is an aggregate statistic and cannot be used to measure point-wise dependency between different events. In this work, instead of estimating the expected dependency, we focus on estimating point-wise dependency (PD), which quantitatively measures how likely two outcomes are to co-occur. We show that we can naturally obtain PD when we are optimizing MI neural variational bounds. However, optimizing these bounds is challenging due to their large variance in practice. To address this issue, we develop two methods (free of optimizing MI variational bounds): Probabilistic Classifier and Density-Ratio Fitting. We demonstrate the effectiveness of our approaches in 1) MI estimation, 2) self-supervised representation learning, and 3) cross-modal retrieval tasks.
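One standard way to realize the probabilistic-classifier route to point-wise dependency is the density-ratio trick sketched below: train a classifier to separate genuine pairs from shuffled pairs and convert its output to an odds ratio. The toy data, the logistic model, and the balanced 1:1 class construction are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))
y = x + 0.5 * rng.normal(size=(n, 1))         # correlated toy pair

joint = np.hstack([x, y])                      # samples from p(x, y)
prod = np.hstack([x, y[rng.permutation(n)]])   # samples from p(x) p(y) via shuffling

X = np.vstack([joint, prod])
labels = np.concatenate([np.ones(n), np.zeros(n)])
clf = LogisticRegression().fit(X, labels)

# With balanced classes, the point-wise dependency p(x, y) / (p(x) p(y))
# is approximated by the classifier's odds.
p = clf.predict_proba(joint[:5])[:, 1]
pd_estimate = p / (1.0 - p)
print(pd_estimate)
```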
Fast and Flexible Temporal Point Processes with Triangular Maps
https://papers.nips.cc/paper_files/paper/2020/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html
Oleksandr Shchur, Nicholas Gao, Marin Biloš, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2020/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9731-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Supplemental.pdf
Temporal point process (TPP) models combined with recurrent neural networks provide a powerful framework for modeling continuous-time event data. While such models are flexible, they are inherently sequential and therefore cannot benefit from the parallelism of modern hardware. By exploiting the recent developments in the field of normalizing flows, we design TriTPP - a new class of non-recurrent TPP models, where both sampling and likelihood computation can be done in parallel. TriTPP matches the flexibility of RNN-based methods but permits several orders of magnitude faster sampling. This enables us to use the new model for variational inference in continuous-time discrete-state systems. We demonstrate the advantages of the proposed framework on synthetic and real-world datasets.
Backpropagating Linearly Improves Transferability of Adversarial Examples
https://papers.nips.cc/paper_files/paper/2020/hash/00e26af6ac3b1c1c49d7c3d79c60d000-Abstract.html
Yiwen Guo, Qizhang Li, Hao Chen
https://papers.nips.cc/paper_files/paper/2020/hash/00e26af6ac3b1c1c49d7c3d79c60d000-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9732-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-Supplemental.pdf
The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs. We revisit a not-so-new but noteworthy hypothesis of Goodfellow et al. and show that transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion using off-the-shelf attacks that exploit gradients. More specifically, it calculates the forward pass as normal but backpropagates the loss as if some nonlinear activations were not encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs. Code at: https://github.com/qizhangli/linbp-attack.
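The core trick described here, computing the forward pass as usual but backpropagating as if certain nonlinearities were absent, can be sketched with a custom autograd function. This is only a minimal illustration; the actual LinBP method applies the modification to selected layers of a trained network and plugs it into existing gradient-based attacks.

```python
import torch

class LinearBackpropReLU(torch.autograd.Function):
    """Forward behaves like ReLU; backward passes the gradient through unchanged,
    as if the nonlinearity had not been encountered."""
    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

x = torch.randn(4, requires_grad=True)
y = LinearBackpropReLU.apply(x).sum()
y.backward()
print(x.grad)   # all ones: the ReLU mask is ignored on the backward pass
```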
PyGlove: Symbolic Programming for Automated Machine Learning
https://papers.nips.cc/paper_files/paper/2020/hash/012a91467f210472fab4e11359bbfef6-Abstract.html
Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanxiao Liu, Adam Kraft, Chen Liang, Quoc Le
https://papers.nips.cc/paper_files/paper/2020/hash/012a91467f210472fab4e11359bbfef6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9733-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-Supplemental.pdf
In this paper, we introduce a new way of programming AutoML based on symbolic programming. Under this paradigm, ML programs are mutable, thus can be manipulated easily by another program. As a result, AutoML can be reformulated as an automated process of symbolic manipulation. With this formulation, we decouple the triangle of the search algorithm, the search space and the child program. This decoupling makes it easy to change the search space and search algorithm (without and with weight sharing), as well as to add search capabilities to existing code and implement complex search flows. We then introduce PyGlove, a new Python library that implements this paradigm. Through case studies on ImageNet and NAS-Bench-101, we show that with PyGlove users can easily convert a static program into a search space, quickly iterate on the search spaces and search algorithms, and craft complex search flows to achieve better results.
Fourier Sparse Leverage Scores and Approximate Kernel Learning
https://papers.nips.cc/paper_files/paper/2020/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html
Tamas Erdelyi, Cameron Musco, Christopher Musco
https://papers.nips.cc/paper_files/paper/2020/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9734-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-Supplemental.pdf
We prove new explicit upper bounds on the leverage scores of Fourier sparse functions under both the Gaussian and Laplace measures. In particular, we study s-sparse functions of the form $f(x) = \sum_{j=1}^s a_j e^{i \lambda_j x}$ for coefficients $a_j \in C$ and frequencies $\lambda_j \in R$. Bounding Fourier sparse leverage scores under various measures is of pure mathematical interest in approximation theory, and our work extends existing results for the uniform measure [Erd17,CP19a]. Practically, our bounds are motivated by two important applications in machine learning: 1. Kernel Approximation. They yield a new random Fourier features algorithm for approximating Gaussian and Cauchy (rational quadratic) kernel matrices. For low-dimensional data, our method uses a near optimal number of features, and its runtime is polynomial in the *statistical dimension* of the approximated kernel matrix. It is the first "oblivious sketching method" with this property for any kernel besides the polynomial kernel, resolving an open question of [AKM+17,AKK+20b]. 2. Active Learning. They can be used as non-uniform sampling distributions for robust active learning when data follows a Gaussian or Laplace distribution. Using the framework of [AKM+19], we provide essentially optimal results for bandlimited and multiband interpolation, and Gaussian process regression. These results generalize existing work that only applies to uniformly distributed data.
Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds
https://papers.nips.cc/paper_files/paper/2020/hash/0163cceb20f5ca7b313419c068abd9dc-Abstract.html
Nicholas Harvey, Christopher Liaw, Tasuku Soma
https://papers.nips.cc/paper_files/paper/2020/hash/0163cceb20f5ca7b313419c068abd9dc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9735-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-Supplemental.pdf
We consider the problem of nonnegative submodular maximization in the online setting. At time step t, an algorithm selects a set St ∈ C ⊆ 2^V where C is a feasible family of sets. An adversary then reveals a submodular function ft. The goal is to design an efficient algorithm for minimizing the expected approximate regret. In this work, we give a general approach for improving regret bounds in online submodular maximization by exploiting “first-order” regret bounds for online linear optimization. - For monotone submodular maximization subject to a matroid, we give an efficient algorithm which achieves a (1 − c/e − ε)-regret of O(√(kT ln(n/k))) where n is the size of the ground set, k is the rank of the matroid, ε > 0 is a constant, and c is the average curvature. Even without assuming any curvature (i.e., taking c = 1), this regret bound improves on previous results of Streeter et al. (2009) and Golovin et al. (2014). - For nonmonotone, unconstrained submodular functions, we give an algorithm with 1/2-regret O(√(nT)), improving on the results of Roughgarden and Wang (2018). Our approach is based on Blackwell approachability; in particular, we give a novel first-order regret bound for the Blackwell instances that arise in this setting.
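The α-regret referred to in these bounds is, in its usual definition (with α = 1 − c/e − ε or α = 1/2 as above),

$$\mathcal{R}_{\alpha}(T) \;=\; \alpha\,\max_{S\in\mathcal{C}}\sum_{t=1}^{T} f_t(S)\;-\;\mathbb{E}\!\left[\sum_{t=1}^{T} f_t(S_t)\right],$$

where the expectation is over the algorithm's randomness.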
Synbols: Probing Learning Algorithms with Synthetic Datasets
https://papers.nips.cc/paper_files/paper/2020/hash/0169cf885f882efd795951253db5cdfb-Abstract.html
Alexandre Lacoste, Pau Rodríguez López, Frederic Branchaud-Charron, Parmida Atighehchian, Massimo Caccia, Issam Hadj Laradji, Alexandre Drouin, Matthew Craddock, Laurent Charlin, David Vázquez
https://papers.nips.cc/paper_files/paper/2020/hash/0169cf885f882efd795951253db5cdfb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9736-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-Supplemental.pdf
Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing algorithms. Enabling the design of datasets to test specific properties and failure modes of learning algorithms is thus a problem of high interest, as it has a direct impact on innovation in the field. In this sense, we introduce Synbols — Synthetic Symbols — a tool for rapidly generating new datasets with a rich composition of latent features rendered in low-resolution images. Synbols leverages the large number of symbols available in the Unicode standard and the wide range of artistic fonts provided by the open font community. Our tool's high-level interface provides a language for rapidly generating new distributions on the latent features, including various types of textures and occlusions. To showcase the versatility of Synbols, we use it to dissect the limitations and flaws of standard learning algorithms in various learning setups, including supervised learning, active learning, out-of-distribution generalization, unsupervised representation learning, and object counting.
Adversarially Robust Streaming Algorithms via Differential Privacy
https://papers.nips.cc/paper_files/paper/2020/hash/0172d289da48c48de8c5ebf3de9f7ee1-Abstract.html
Avinatan Hasidim, Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer
https://papers.nips.cc/paper_files/paper/2020/hash/0172d289da48c48de8c5ebf3de9f7ee1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9737-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-Review.html
null
A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy. This connection allows us to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters.
Trading Personalization for Accuracy: Data Debugging in Collaborative Filtering
https://papers.nips.cc/paper_files/paper/2020/hash/019fa4fdf1c04cf73ba25aa2223769cd-Abstract.html
Long Chen, Yuan Yao, Feng Xu, Miao Xu, Hanghang Tong
https://papers.nips.cc/paper_files/paper/2020/hash/019fa4fdf1c04cf73ba25aa2223769cd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9738-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-Supplemental.pdf
Collaborative filtering has been widely used in recommender systems. Existing work has primarily focused on improving the prediction accuracy mainly via either building refined models or incorporating additional side information, yet has largely ignored the inherent distribution of the input rating data. In this paper, we propose a data debugging framework to identify overly personalized ratings whose existence degrades the performance of a given collaborative filtering model. The key idea of the proposed approach is to search for a small set of ratings whose editing (e.g., modification or deletion) would near-optimally improve the recommendation accuracy of a validation set. Experimental results demonstrate that the proposed approach can significantly improve the recommendation accuracy. Furthermore, we observe that the identified ratings significantly deviate from the average ratings of the corresponding items, and the proposed approach tends to modify them towards the average. This result sheds light on the design of future recommender systems in terms of balancing between the overall accuracy and personalization.
Cascaded Text Generation with Markov Transformers
https://papers.nips.cc/paper_files/paper/2020/hash/01a0683665f38d8e5e567b3b15ca98bf-Abstract.html
Yuntian Deng, Alexander Rush
https://papers.nips.cc/paper_files/paper/2020/hash/01a0683665f38d8e5e567b3b15ca98bf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9739-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-Supplemental.pdf
The two dominant approaches to neural text generation are fully autoregressive models, using serial beam search decoding, and non-autoregressive models, using parallel decoding with no output dependencies. This work proposes an autoregressive model with sub-linear parallel time generation. Noting that conditional random fields with bounded context can be decoded in parallel, we propose an efficient cascaded decoding approach for generating high-quality output. To parameterize this cascade, we introduce a Markov transformer, a variant of the popular fully autoregressive model that allows us to simultaneously decode with specific autoregressive context cutoffs. This approach requires only a small modification from standard autoregressive training, while showing competitive accuracy/speed tradeoff compared to existing methods on five machine translation datasets.
Improving Local Identifiability in Probabilistic Box Embeddings
https://papers.nips.cc/paper_files/paper/2020/hash/01c9d2c5b3ff5cbba349ec39a570b5e3-Abstract.html
Shib Dasgupta, Michael Boratko, Dongxu Zhang, Luke Vilnis, Xiang Li, Andrew McCallum
https://papers.nips.cc/paper_files/paper/2020/hash/01c9d2c5b3ff5cbba349ec39a570b5e3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9740-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Supplemental.pdf
Geometric embeddings have recently received attention for their natural ability to represent transitive asymmetric relations via containment. Box embeddings, where objects are represented by n-dimensional hyperrectangles, are a particularly promising example of such an embedding as they are closed under intersection and their volume can be calculated easily, allowing them to naturally represent calibrated probability distributions. The benefits of geometric embeddings also introduce a problem of local identifiability, however, where whole neighborhoods of parameters result in equivalent loss, which impedes learning. Prior work addressed some of these issues by using an approximation to Gaussian convolution over the box parameters; however, this intersection operation also increases the sparsity of the gradient. In this work we model the box parameters with min and max Gumbel distributions, which were chosen such that the space is still closed under the operation of intersection. The calculation of the expected intersection volume involves all parameters, and we demonstrate experimentally that this drastically improves the ability of such models to learn.
Permute-and-Flip: A new mechanism for differentially private selection
https://papers.nips.cc/paper_files/paper/2020/hash/01e00f2f4bfcbb7505cb641066f2859b-Abstract.html
Ryan McKenna, Daniel R. Sheldon
https://papers.nips.cc/paper_files/paper/2020/hash/01e00f2f4bfcbb7505cb641066f2859b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9741-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Supplemental.pdf
We consider the problem of differentially private selection. Given a finite set of candidate items and a quality score for each item, our goal is to design a differentially private mechanism that returns an item with a score that is as high as possible. The most commonly used mechanism for this task is the exponential mechanism. In this work, we propose a new mechanism for this task based on a careful analysis of the privacy constraints. The expected score of our mechanism is always at least as large as that of the exponential mechanism, and can offer improvements of up to a factor of two. Our mechanism is simple to implement and runs in linear time.
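The two mechanisms can be sketched in a few lines. The permute-and-flip sampler below follows the way the mechanism is usually stated (the abstract itself does not spell it out), and the parameter names and unit-sensitivity default are illustrative.

```python
import numpy as np

def permute_and_flip(scores, epsilon, sensitivity=1.0, rng=np.random.default_rng()):
    """Visit candidates in random order and accept each with probability
    exp(eps * (q_r - q_max) / (2 * sensitivity)); the best item is always accepted."""
    scores = np.asarray(scores, dtype=float)
    q_max = scores.max()
    for r in rng.permutation(len(scores)):
        if rng.random() <= np.exp(epsilon * (scores[r] - q_max) / (2 * sensitivity)):
            return r

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=np.random.default_rng()):
    """Sample an item with probability proportional to exp(eps * q / (2 * sensitivity))."""
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * (scores - scores.max()) / (2 * sensitivity)
    p = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(scores), p=p)
```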
Deep reconstruction of strange attractors from time series
https://papers.nips.cc/paper_files/paper/2020/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html
William Gilpin
https://papers.nips.cc/paper_files/paper/2020/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9742-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-Supplemental.pdf
Experimental measurements of physical systems often have a limited number of independent channels, causing essential dynamical variables to remain unobserved. However, many popular methods for unsupervised inference of latent dynamics from experimental data implicitly assume that the measurements have higher intrinsic dimensionality than the underlying system---making coordinate identification a dimensionality reduction problem. Here, we study the opposite limit, in which hidden governing coordinates must be inferred from only a low-dimensional time series of measurements. Inspired by classical analysis techniques for partial observations of chaotic attractors, we introduce a general embedding technique for univariate and multivariate time series, consisting of an autoencoder trained with a novel latent-space loss function. We show that our technique reconstructs the strange attractors of synthetic and real-world systems better than existing techniques, and that it creates consistent, predictive representations of even stochastic systems. We conclude by using our technique to discover dynamical attractors in diverse systems such as patient electrocardiograms, household electricity usage, neural spiking, and eruptions of the Old Faithful geyser---demonstrating diverse applications of our technique for exploratory data analysis.
Reciprocal Adversarial Learning via Characteristic Functions
https://papers.nips.cc/paper_files/paper/2020/hash/021f6dd88a11ca489936ae770e4634ad-Abstract.html
Shengxi Li, Zeyang Yu, Min Xiang, Danilo Mandic
https://papers.nips.cc/paper_files/paper/2020/hash/021f6dd88a11ca489936ae770e4634ad-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9743-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-Supplemental.pdf
Generative adversarial nets (GANs) have become a preferred tool for tasks involving complicated distributions. To stabilise training and reduce the mode collapse of GANs, one of their main variants employs the integral probability metric (IPM) as the loss function. This provides extensive IPM-GANs with theoretical support for essentially comparing moments in an embedded domain of the critic. We generalise this by comparing the distributions rather than their moments via a powerful tool, the characteristic function (CF), which uniquely and universally contains all the information about a distribution. For rigour, we first establish the physical meaning of the phase and amplitude in the CF, and show that this provides a feasible way of balancing the accuracy and diversity of generation. We then develop an efficient sampling strategy to calculate the CFs. Within this framework, we further prove an equivalence between the embedded and data domains when a reciprocal exists, and we naturally develop the GAN in an auto-encoder structure, comparing everything in the embedded space (a semantically meaningful manifold). This efficient structure uses only two modules, together with a simple training strategy, to achieve bi-directional generation of clear images; we refer to it as the reciprocal CF GAN (RCF-GAN). Experimental results demonstrate the superior performance of the proposed RCF-GAN in terms of both generation and reconstruction.
Statistical Guarantees of Distributed Nearest Neighbor Classification
https://papers.nips.cc/paper_files/paper/2020/hash/022e0ee5162c13d9a7bb3bd00fb032ce-Abstract.html
Jiexin Duan, Xingye Qiao, Guang Cheng
https://papers.nips.cc/paper_files/paper/2020/hash/022e0ee5162c13d9a7bb3bd00fb032ce-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9744-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-Supplemental.pdf
Nearest neighbor is a popular nonparametric method for classification and regression with many appealing properties. In the big data era, the sheer volume and spatial/temporal disparity of big data may prohibit centrally processing and storing the data. This has imposed a considerable hurdle for nearest neighbor predictions since the entire training data must be memorized. One effective way to overcome this issue is the distributed learning framework. Through majority voting, the distributed nearest neighbor classifier achieves the same rate of convergence as its oracle version in terms of the regret, up to a multiplicative constant that depends solely on the data dimension. The multiplicative difference can be eliminated by replacing majority voting with the weighted voting scheme. In addition, we provide sharp theoretical upper bounds on the number of subsamples needed for the distributed nearest neighbor classifier to reach the optimal convergence rate. It is interesting to note that the weighted voting scheme allows a larger number of subsamples than the majority voting one. Our findings are supported by numerical studies.
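The distributed scheme evaluated here (split the training data across machines, fit a nearest-neighbor classifier on each subsample, and combine predictions by voting) can be sketched as follows; the binary-label majority vote and the uniform random split are simplifying assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def distributed_knn_predict(X, y, X_test, n_machines=10, k=5, rng=np.random.default_rng(0)):
    """Split the data into subsamples, run a k-NN classifier on each, and majority-vote."""
    idx = rng.permutation(len(X))
    votes = []
    for part in np.array_split(idx, n_machines):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[part], y[part])
        votes.append(clf.predict(X_test))
    votes = np.stack(votes)                        # (n_machines, n_test)
    # Majority vote over machines (binary labels 0/1 assumed for brevity).
    return (votes.mean(axis=0) > 0.5).astype(int)
```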
Stein Self-Repulsive Dynamics: Benefits From Past Samples
https://papers.nips.cc/paper_files/paper/2020/hash/023d0a5671efd29e80b4deef8262e297-Abstract.html
Mao Ye, Tongzheng Ren, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/023d0a5671efd29e80b4deef8262e297-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9745-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-Supplemental.pdf
We propose a new Stein self-repulsive dynamics for obtaining diversified samples from intractable un-normalized distributions. Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories. This simple idea allows us to significantly decrease the auto-correlation in Langevin dynamics and hence increase the effective sample size. Importantly, as we establish in our theoretical analysis, the asymptotic stationary distribution remains correct even with the addition of the repulsive force, thanks to the special properties of the Stein variational gradient. We perform extensive empirical studies of our new algorithm, showing that our method yields much higher sample efficiency and better uncertainty estimation than vanilla Langevin dynamics.
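A rough sketch of the dynamics described here is a Langevin update plus a kernel-based force that pushes the current sample away from stored past samples. The RBF kernel, the constant repulsion weight alpha, and the use of every past iterate are assumptions; the paper's exact formulation and theory are more careful.

```python
import numpy as np

def avg_kernel_grad(x, past, h=1.0):
    # Gradient w.r.t. x of the RBF kernel k(x, z), averaged over past samples z.
    diff = x - past                                     # (m, d)
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h))    # (m,)
    return (-k[:, None] * diff / h).mean(axis=0)

def self_repulsive_langevin(grad_log_p, x0, n_steps=1000, eta=1e-2, alpha=1.0,
                            rng=np.random.default_rng(0)):
    x, past = x0.copy(), [x0.copy()]
    for _ in range(n_steps):
        # Negating the kernel gradient gives a force pointing away from past samples.
        repulse = -alpha * avg_kernel_grad(x, np.stack(past))
        x = x + eta * (grad_log_p(x) + repulse) + np.sqrt(2 * eta) * rng.normal(size=x.shape)
        past.append(x.copy())
    return np.stack(past)

# Example: sample from a standard 2-D Gaussian, where grad log p(x) = -x.
samples = self_repulsive_langevin(lambda x: -x, np.zeros(2))
```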
The Statistical Complexity of Early-Stopped Mirror Descent
https://papers.nips.cc/paper_files/paper/2020/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html
Tomas Vaskevicius, Varun Kanade, Patrick Rebeschini
https://papers.nips.cc/paper_files/paper/2020/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9746-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-Review.html
null
Recently, there has been a surge of interest in understanding implicit regularization properties of iterative gradient-based optimization algorithms. In this paper, we study the statistical guarantees on the excess risk achieved by early-stopped unconstrained mirror descent algorithms applied to the unregularized empirical risk with the squared loss for linear models and kernel methods. By completing an inequality that characterizes convexity for the squared loss, we identify an intrinsic link between offset Rademacher complexities and potential-based convergence analysis of mirror descent methods. Our observation immediately yields excess risk guarantees for the path traced by the iterates of mirror descent in terms of offset complexities of certain function classes depending only on the choice of the mirror map, initialization point, step-size, and the number of iterations. We apply our theory to recover, in a rather clean and elegant manner via short proofs, some of the recent results in the implicit regularization literature, while also showing how to improve upon them in some settings.
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach
https://papers.nips.cc/paper_files/paper/2020/hash/02a3c7fb3f489288ae6942498498db20-Abstract.html
Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera
https://papers.nips.cc/paper_files/paper/2020/hash/02a3c7fb3f489288ae6942498498db20-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9747-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-Supplemental.pdf
Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration. Unfortunately, in practice, the true underlying structural causal model is generally unknown. In this work, we first show that it is impossible to guarantee recourse without access to the true structural equations. To address this limitation, we propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge (e.g., only the causal graph). The first captures uncertainty over structural equations under additive Gaussian noise, and uses Bayesian model averaging to estimate the counterfactual distribution. The second removes any assumptions on the structural equations by instead computing the average effect of recourse actions on individuals similar to the person who seeks recourse, leading to a novel subpopulation-based interventional notion of recourse. We then derive a gradient-based procedure for selecting optimal recourse actions, and empirically show that the proposed approaches lead to more reliable recommendations under imperfect causal knowledge than non-probabilistic baselines.
Quantitative Propagation of Chaos for SGD in Wide Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/02e74f10e0327ad868d138f2b4fdd6f0-Abstract.html
Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli
https://papers.nips.cc/paper_files/paper/2020/hash/02e74f10e0327ad868d138f2b4fdd6f0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9748-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-Supplemental.pdf
In this paper, we investigate the limiting behavior of a continuous-time counterpart of the Stochastic Gradient Descent (SGD) algorithm applied to two-layer overparameterized neural networks, as the number of neurons (i.e., the size of the hidden layer) $N \to +\infty$. Following a probabilistic approach, we show 'propagation of chaos' for the particle system defined by this continuous-time dynamics under different scenarios, indicating that the statistical interaction between the particles asymptotically vanishes. In particular, we establish quantitative convergence with respect to $N$ of any particle to a solution of a mean-field McKean-Vlasov equation in the metric space endowed with the Wasserstein distance. In comparison to previous works on the subject, we consider settings in which the sequence of stepsizes in SGD can potentially depend on the number of neurons and the iterations. We then identify two regimes under which different mean-field limits are obtained, one of them corresponding to an implicitly regularized version of the minimization problem at hand. We perform various experiments on real datasets to validate our theoretical results, assessing the existence of these two regimes on classification problems and illustrating our convergence results.
A Causal View on Robustness of Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/02ed812220b0705fabb868ddbf17ea20-Abstract.html
Cheng Zhang, Kun Zhang, Yingzhen Li
https://papers.nips.cc/paper_files/paper/2020/hash/02ed812220b0705fabb868ddbf17ea20-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9749-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Supplemental.pdf
We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA) which explicitly models possible manipulations on certain causes leading to changes in the observed effect. We further develop data augmentation and test-time fine-tuning methods to improve deep CAMA's robustness. When compared with discriminative deep neural networks, our proposed model shows superior robustness against unseen manipulations. As a by-product, our model achieves disentangled representation which separates the representation of manipulations from those of other latent causes.
Minimax Classification with 0-1 Loss and Performance Guarantees
https://papers.nips.cc/paper_files/paper/2020/hash/02f657d55eaf1c4840ce8d66fcdaf90c-Abstract.html
Santiago Mazuelas, Andrea Zanoni, Aritz Pérez
https://papers.nips.cc/paper_files/paper/2020/hash/02f657d55eaf1c4840ce8d66fcdaf90c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9750-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-Supplemental.pdf
Supervised classification techniques use training samples to find classification rules with small expected 0-1 loss. Conventional methods achieve efficient learning and out-of-sample generalization by minimizing surrogate losses over specific families of rules. This paper presents minimax risk classifiers (MRCs) that do not rely on a choice of surrogate loss and family of rules. MRCs achieve efficient learning and out-of-sample generalization by minimizing worst-case expected 0-1 loss w.r.t. uncertainty sets that are defined by linear constraints and include the true underlying distribution. In addition, MRCs' learning stage provides performance guarantees as lower and upper tight bounds for expected 0-1 loss. We also present MRCs' finite-sample generalization bounds in terms of training size and smallest minimax risk, and show their competitive classification performance w.r.t. state-of-the-art techniques using benchmark datasets.
How to Learn a Useful Critic? Model-based Action-Gradient-Estimator Policy Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/03255088ed63354a54e0e5ed957e9008-Abstract.html
Pierluca D'Oro, Wojciech Jaśkowski
https://papers.nips.cc/paper_files/paper/2020/hash/03255088ed63354a54e0e5ed957e9008-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9751-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-Supplemental.pdf
Deterministic-policy actor-critic algorithms for continuous control improve the actor by plugging its actions into the critic and ascending the action-value gradient, which is obtained by chaining the actor's Jacobian matrix with the gradient of the critic with respect to input actions. However, instead of gradients, the critic is, typically, only trained to accurately predict expected returns, which, on their own, are useless for policy optimization. In this paper, we propose MAGE, a model-based actor-critic algorithm, grounded in the theory of policy gradients, which explicitly learns the action-value gradient. MAGE backpropagates through the learned dynamics to compute gradient targets in temporal difference learning, leading to a critic tailored for policy improvement. On a set of MuJoCo continuous-control tasks, we demonstrate the efficiency of the algorithm in comparison to model-free and model-based state-of-the-art baselines.
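A minimal sketch of the gradient-target idea follows: the TD target is built through a learned (differentiable) dynamics model, so its gradient with respect to the action can serve as a regression target for the critic's action-gradient. The toy networks, the absence of target networks, and the use of only the gradient-matching term are simplifications, not the paper's full algorithm.

```python
import torch

s_dim, a_dim, gamma = 4, 2, 0.99
# Illustrative learned components (dynamics model, reward model, critic, actor).
model  = torch.nn.Sequential(torch.nn.Linear(s_dim + a_dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, s_dim))
reward = torch.nn.Sequential(torch.nn.Linear(s_dim + a_dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
critic = torch.nn.Sequential(torch.nn.Linear(s_dim + a_dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
actor  = torch.nn.Sequential(torch.nn.Linear(s_dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, a_dim))

s = torch.randn(8, s_dim)
a = actor(s).detach().requires_grad_(True)
sa = torch.cat([s, a], dim=-1)

# TD target computed through the learned dynamics, hence differentiable w.r.t. the action.
s_next = model(sa)
td_target = reward(sa) + gamma * critic(torch.cat([s_next, actor(s_next)], dim=-1))
grad_target = torch.autograd.grad(td_target.sum(), a, retain_graph=True)[0]  # treated as a fixed target

grad_q = torch.autograd.grad(critic(sa).sum(), a, create_graph=True)[0]
# Gradient-matching critic loss; a practical version would likely also keep a
# standard TD term and use target networks.
critic_loss = ((grad_q - grad_target) ** 2).mean()
critic_loss.backward()
```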
Coresets for Regressions with Panel Data
https://papers.nips.cc/paper_files/paper/2020/hash/03287fcce194dbd958c2ec5b33705912-Abstract.html
Lingxiao Huang, K Sudhir, Nisheeth Vishnoi
https://papers.nips.cc/paper_files/paper/2020/hash/03287fcce194dbd958c2ec5b33705912-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9752-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-Supplemental.pdf
A panel dataset contains features or observations for multiple individuals over multiple time periods, and regression problems with panel data are common in statistics and applied ML. When dealing with massive datasets, coresets have emerged as a valuable tool from a computational, storage and privacy perspective, as one needs to work with and share much smaller datasets. However, results on coresets for regression problems have thus far only been available for cross-sectional data ($N$ individuals each observed for a single time unit) or longitudinal data (a single individual observed for $T>1$ time units), but there are no results for panel data ($N>1$, $T>1$). This paper introduces the problem of coresets to panel data settings; we first define coresets for several variants of regression problems with panel data and then present efficient algorithms to construct coresets whose size is independent of $N$ and $T$ and depends only polynomially on $1/\varepsilon$ (where $\varepsilon$ is the error parameter) and the number of regression parameters. Our approach is based on the Feldman-Langberg framework, in which a key step is to upper bound the “total sensitivity”, which is roughly the sum of the maximum influences of all individual-time pairs taken over all possible choices of regression parameters. Empirically, we assess our approach on a synthetic and a real-world dataset; the coresets constructed using our approach are much smaller than the full dataset and indeed accelerate the computation of the regression objective.
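The Feldman-Langberg style construction referenced here boils down to importance sampling with sensitivity upper bounds. A generic sketch follows (not the paper's specific bounds for panel data, where sensitivities are taken over individual-time pairs); the function and argument names are assumptions.

```python
import numpy as np

def sensitivity_sampling_coreset(points, sens_upper, m, rng=np.random.default_rng(0)):
    """Sample m points with probability proportional to their sensitivity upper
    bounds and reweight so the coreset objective is an unbiased estimate of the
    full objective."""
    p = sens_upper / sens_upper.sum()
    idx = rng.choice(len(points), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return points[idx], weights
```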
Learning Composable Energy Surrogates for PDE Order Reduction
https://papers.nips.cc/paper_files/paper/2020/hash/0332d694daab22e0e0eaf7a5e88433f9-Abstract.html
Alex Beatson, Jordan Ash, Geoffrey Roeder, Tianju Xue, Ryan P. Adams
https://papers.nips.cc/paper_files/paper/2020/hash/0332d694daab22e0e0eaf7a5e88433f9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9753-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-Supplemental.zip
Meta-materials are an important emerging class of engineered materials in which complex macroscopic behaviour--whether electromagnetic, thermal, or mechanical--arises from modular substructure. Simulation and optimization of these materials are computationally challenging, as rich substructures necessitate high-fidelity finite element meshes to solve the governing PDEs. To address this, we leverage parametric modular structure to learn component-level surrogates, enabling cheaper high-fidelity simulation. We use a neural network to model the stored potential energy in a component given boundary conditions. This yields a structured prediction task: macroscopic behavior is determined by the minimizer of the system's total potential energy, which can be approximated by composing these surrogate models. Composable energy surrogates thus permit simulation in the reduced basis of component boundaries. Costly ground-truth simulation of the full structure is avoided, as training data are generated by performing finite element analysis of individual components. Using dataset aggregation to choose training data allows us to learn energy surrogates which produce accurate macroscopic behavior when composed, accelerating simulation of parametric meta-materials.
Efficient Contextual Bandits with Continuous Actions
https://papers.nips.cc/paper_files/paper/2020/hash/033cc385728c51d97360020ed57776f0-Abstract.html
Maryam Majzoubi, Chicheng Zhang, Rajan Chari, Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins
https://papers.nips.cc/paper_files/paper/2020/hash/033cc385728c51d97360020ed57776f0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9754-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-Supplemental.zip
We create a computationally tractable learning algorithm for contextual bandits with continuous actions having unknown structure. The new reduction-style algorithm composes with most supervised learning representations. We prove that this algorithm works in a general sense and verify the new functionality with large-scale experiments.
Achieving Equalized Odds by Resampling Sensitive Attributes
https://papers.nips.cc/paper_files/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html
Yaniv Romano, Stephen Bates, Emmanuel Candes
https://papers.nips.cc/paper_files/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9755-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-Supplemental.pdf
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness. This is achieved by introducing a general discrepancy functional that rigorously quantifies violations of this criterion. This differentiable functional is used as a penalty driving the model parameters towards equalized odds. To rigorously evaluate fitted models, we develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature. Both the model fitting and hypothesis testing leverage a resampled version of the sensitive attribute obeying equalized odds, by construction. We demonstrate the applicability and validity of the proposed framework both in regression and multi-class classification problems, reporting improved performance over state-of-the-art methods. Lastly, we show how to incorporate techniques for equitable uncertainty quantification---unbiased for each group under study---to communicate the results of the data analysis in exact terms.
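To make the penalty idea concrete, here is a minimal, hypothetical PyTorch sketch of a differentiable group-gap penalty added to the training loss; the paper's discrepancy functional and resampling scheme are more general, and the names `model`, `loss_fn`, and `lam` are assumed to exist in the surrounding training loop.

```python
import torch

def equalized_odds_gap(pred, y, a):
    """Toy differentiable penalty: for each true label value, penalize the
    squared gap between the mean predictions of the two sensitive groups.
    (A crude stand-in for the paper's more general discrepancy functional.)"""
    penalty = pred.new_zeros(())
    for label in torch.unique(y):
        keep = y == label
        g0, g1 = pred[keep & (a == 0)], pred[keep & (a == 1)]
        if len(g0) > 0 and len(g1) > 0:
            penalty = penalty + (g0.mean() - g1.mean()) ** 2
    return penalty

# Sketch of a training step; `model`, `loss_fn`, and `lam` are assumed:
#   out = model(x)
#   loss = loss_fn(out, y) + lam * equalized_odds_gap(out, y, a)
#   loss.backward()
```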
Multi-Robot Collision Avoidance under Uncertainty with Probabilistic Safety Barrier Certificates
https://papers.nips.cc/paper_files/paper/2020/hash/03793ef7d06ffd63d34ade9d091f1ced-Abstract.html
Wenhao Luo, Wen Sun, Ashish Kapoor
https://papers.nips.cc/paper_files/paper/2020/hash/03793ef7d06ffd63d34ade9d091f1ced-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9756-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-Supplemental.zip
Safety in terms of collision avoidance for multi-robot systems is a difficult challenge under uncertainty, non-determinism, and lack of complete information. This paper proposes a collision avoidance method that accounts for both measurement uncertainty and motion uncertainty. In particular, we propose Probabilistic Safety Barrier Certificates (PrSBC), built on Control Barrier Functions, to define the space of admissible control actions that are probabilistically safe with formally provable theoretical guarantees. By converting the chance-constrained safety set into deterministic control constraints with PrSBC, the method entails minimally modifying an existing controller to obtain an alternative safe controller via quadratic programming subject to the PrSBC constraints. The key advantage of the approach is that no assumptions about the form of uncertainty are required other than finite support, which also enables worst-case guarantees. We demonstrate the effectiveness of the approach through experiments in realistic simulation environments.
Hard Shape-Constrained Kernel Machines
https://papers.nips.cc/paper_files/paper/2020/hash/03fa2f7502f5f6b9169e67d17cbf51bb-Abstract.html
Pierre-Cyril Aubin-Frankowski, Zoltan Szabo
https://papers.nips.cc/paper_files/paper/2020/hash/03fa2f7502f5f6b9169e67d17cbf51bb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9757-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-Supplemental.pdf
Shape constraints (such as non-negativity, monotonicity, convexity) play a central role in a large number of applications, as they usually improve performance for small sample sizes and help interpretability. However, enforcing these shape requirements in a hard fashion is an extremely challenging problem. Classically, this task is tackled (i) in a soft way (without out-of-sample guarantees), (ii) by specialized transformation of the variables on a case-by-case basis, or (iii) by using highly restricted function classes, such as polynomials or polynomial splines. In this paper, we prove that hard affine shape constraints on function derivatives can be encoded in kernel machines, which represent one of the most flexible and powerful tools in machine learning and statistics. In particular, we present a tightened second-order cone constrained reformulation that can be readily implemented in convex solvers. We prove performance guarantees on the solution, and demonstrate the efficiency of the approach in joint quantile regression with applications to economics and to the analysis of aircraft trajectories, among others.
A Closer Look at the Training Strategy for Modern Meta-Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html
JIAXIN CHEN, Xiao-Ming Wu, Yanke Li, Qimai LI, Li-Ming Zhan, Fu-lai Chung
https://papers.nips.cc/paper_files/paper/2020/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9758-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-Supplemental.pdf
The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization ability to test environments. This paper conducts a theoretical investigation of this training strategy on generalization. From a stability perspective, we analyze the generalization error bound of generic meta-learning algorithms trained with such a strategy. We show that the S/Q episodic training strategy naturally leads to a counterintuitive generalization bound of $O(1/\sqrt{n})$, which depends only on the task number $n$ but is independent of the inner-task sample size $m$. Under the common assumption $m<
On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
https://papers.nips.cc/paper_files/paper/2020/hash/045117b0e0a11a242b9765e79cbf113f-Abstract.html
Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, Anton van den Hengel
https://papers.nips.cc/paper_files/paper/2020/hash/045117b0e0a11a242b9765e79cbf113f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9759-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-Supplemental.pdf
Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benchmark for visual question answering, but we discovered three troubling practices in its current use. First, most published methods rely on explicit knowledge of the construction of the OOD splits. They often rely on "inverting" the distribution of labels, e.g. answering mostly "yes" when the common training answer was "no". Second, the OOD test set is used for model selection. Third, a model's in-domain performance is assessed after retraining it on in-domain splits (VQA v2) that exhibit a more balanced distribution of labels. These three practices defeat the objective of evaluating generalization, and put into question the value of methods specifically designed for this dataset. We show that embarrassingly-simple methods, including one that generates answers at random, surpass the state of the art on some question types. We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.
Generalised Bayesian Filtering via Sequential Monte Carlo
https://papers.nips.cc/paper_files/paper/2020/hash/04ecb1fa28506ccb6f72b12c0245ddbc-Abstract.html
Ayman Boustati, Omer Deniz Akyildiz, Theodoros Damoulas, Adam Johansen
https://papers.nips.cc/paper_files/paper/2020/hash/04ecb1fa28506ccb6f72b12c0245ddbc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9760-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-Supplemental.pdf
We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification. In particular, we leverage the loss-theoretic perspective of Generalized Bayesian Inference (GBI) to define generalised filtering recursions in HMMs that can tackle inference under model misspecification. In doing so, we arrive at principled procedures for robust inference against observation contamination by utilising the $\beta$-divergence. Operationalising the proposed framework is made possible via sequential Monte Carlo methods (SMC), where the standard particle methods, and their associated convergence results, are readily adapted to the new setting. We demonstrate our approach on object tracking and Gaussian process regression problems, and observe improved performance over standard filtering algorithms.
Deterministic Approximation for Submodular Maximization over a Matroid in Nearly Linear Time
https://papers.nips.cc/paper_files/paper/2020/hash/05128e44e27c36bdba71221bfccf735d-Abstract.html
Kai Han, zongmai Cao, Shuang Cui, Benwei Wu
https://papers.nips.cc/paper_files/paper/2020/hash/05128e44e27c36bdba71221bfccf735d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9761-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-Supplemental.pdf
We study the problem of maximizing a non-monotone, non-negative submodular function subject to a matroid constraint. The prior best-known deterministic approximation ratio for this problem is $\frac{1}{4}-\epsilon$ under $\mathcal{O}(({n^4}/{\epsilon})\log n)$ time complexity. We show that this deterministic ratio can be improved to $\frac{1}{4}$ under $\mathcal{O}(nr)$ time complexity, and then present a more practical algorithm dubbed TwinGreedyFast which achieves $\frac{1}{4}-\epsilon$ deterministic ratio in nearly-linear running time of $\mathcal{O}(\frac{n}{\epsilon}\log\frac{r}{\epsilon})$. Our approach is based on a novel algorithmic framework of simultaneously constructing two candidate solution sets through greedy search, which enables us to get improved performance bounds by fully exploiting the properties of independence systems. As a byproduct of this framework, we also show that TwinGreedyFast achieves $\frac{1}{2p+2}-\epsilon$ deterministic ratio under a $p$-set system constraint with the same time complexity. To showcase the practicality of our approach, we empirically evaluated the performance of TwinGreedyFast on two network applications, and observed that it outperforms the state-of-the-art deterministic and randomized algorithms with efficient implementations for our problem.
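The two-candidate greedy idea can be sketched in a few lines of Python. The following simplified quadratic-time greedy (not the thresholding-based, nearly-linear-time TwinGreedyFast) maintains two disjoint candidate sets and returns the better one; the toy cut function and cardinality constraint are illustrative assumptions.

```python
def twin_greedy(ground, f, independent):
    """Sketch of the simultaneous two-set greedy search for non-monotone
    submodular maximization under an independence system: at each step the
    (element, candidate-set) pair with the largest positive marginal gain is
    selected, and the better candidate is returned at the end."""
    candidates = [set(), set()]
    remaining = set(ground)
    while True:
        best = None  # (gain, element, candidate index)
        for e in remaining:
            for i, S in enumerate(candidates):
                if independent(S | {e}):
                    gain = f(S | {e}) - f(S)
                    if gain > 0 and (best is None or gain > best[0]):
                        best = (gain, e, i)
        if best is None:
            break
        _, e, i = best
        candidates[i].add(e)
        remaining.discard(e)
    return max(candidates, key=f)

# Example: maximize a small graph-cut function under a cardinality constraint.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
f = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))  # cut value
independent = lambda S: len(S) <= 2                             # rank-2 matroid
print(twin_greedy(range(4), f, independent))
```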
Flows for simultaneous manifold learning and density estimation
https://papers.nips.cc/paper_files/paper/2020/hash/051928341be67dcba03f0e04104d9047-Abstract.html
Johann Brehmer, Kyle Cranmer
https://papers.nips.cc/paper_files/paper/2020/hash/051928341be67dcba03f0e04104d9047-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9762-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-Supplemental.pdf
We introduce manifold-learning flows (ℳ-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent data sets with a manifold structure more faithfully and provide handles on dimensionality reduction, denoising, and out-of-distribution detection. We argue why such models should not be trained by maximum likelihood alone and present a new training algorithm that separates manifold and density updates. In a range of experiments we demonstrate how ℳ-flows learn the data manifold and allow for better inference than standard flows in the ambient data space.
Simultaneous Preference and Metric Learning from Paired Comparisons
https://papers.nips.cc/paper_files/paper/2020/hash/0561bc7ecba98e39ca7994f93311ba23-Abstract.html
Austin Xu, Mark Davenport
https://papers.nips.cc/paper_files/paper/2020/hash/0561bc7ecba98e39ca7994f93311ba23-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9763-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-Supplemental.pdf
A popular model of preference in the context of recommendation systems is the so-called ideal point model. In this model, a user is represented as a vector u together with a collection of items x1 ... xN in a common low-dimensional space. The vector u represents the user's "ideal point," or the ideal combination of features that represents a hypothesized most preferred item. The underlying assumption in this model is that a smaller distance between u and an item xj indicates a stronger preference for xj. In the vast majority of the existing work on learning ideal point models, the underlying distance has been assumed to be Euclidean. However, this eliminates any possibility of interactions between features and a user's underlying preferences. In this paper, we consider the problem of learning an ideal point representation of a user's preferences when the distance metric is an unknown Mahalanobis metric. Specifically, we present a novel approach to estimate the user's ideal point u and the Mahalanobis metric from paired comparisons of the form "item xi is preferred to item xj." This can be viewed as a special case of a more general metric learning problem where the locations of some points are unknown a priori. We conduct extensive experiments on synthetic and real-world datasets to demonstrate the effectiveness of our algorithm.
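A minimal PyTorch sketch of the estimation problem on assumed synthetic data: the metric is parameterized as M = L L^T to keep it positive semi-definite, and a logistic loss is applied to the difference of squared Mahalanobis distances implied by each paired comparison. This illustrates the model, not the paper's estimator.

```python
import torch

def paired_comparison_loss(L, u, prefs, items):
    """Logistic loss for the ideal-point model with an unknown Mahalanobis
    metric M = L L^T: item i is preferred to item j when the Mahalanobis
    distance from the ideal point u to x_i is smaller than to x_j."""
    M = L @ L.T                             # positive semi-definite metric
    def d2(x):                              # squared Mahalanobis distance to u
        diff = x - u
        return torch.einsum('nd,de,ne->n', diff, M, diff)
    i, j = prefs[:, 0], prefs[:, 1]
    margin = d2(items[j]) - d2(items[i])    # positive when i is preferred
    return torch.nn.functional.softplus(-margin).mean()

# Minimal usage sketch (all data here is illustrative):
torch.manual_seed(0)
items = torch.randn(50, 3)
prefs = torch.tensor([[0, 1], [2, 3], [4, 5]])   # "item i preferred to item j"
L = torch.eye(3, requires_grad=True)
u = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([L, u], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss = paired_comparison_loss(L, u, prefs, items)
    loss.backward()
    opt.step()
```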
Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee
https://papers.nips.cc/paper_files/paper/2020/hash/05a624166c8eb8273b8464e8d9cb5bd9-Abstract.html
Jincheng Bai, Qifan Song, Guang Cheng
https://papers.nips.cc/paper_files/paper/2020/hash/05a624166c8eb8273b8464e8d9cb5bd9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9764-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Supplemental.pdf
Sparse deep learning aims to address the challenge of huge storage consumption by deep neural networks, and to recover the sparse structure of target functions. Although tremendous empirical successes have been achieved, most sparse deep learning algorithms lack theoretical support. On the other hand, another line of work has proposed theoretical frameworks that are computationally infeasible. In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors, and develop a set of computationally efficient variational inference procedures via a continuous relaxation of the Bernoulli distribution. The variational posterior contraction rate is provided, which justifies the consistency of the proposed variational Bayes method. Interestingly, our empirical results demonstrate that this variational procedure provides uncertainty quantification in terms of the Bayesian predictive distribution and is also capable of consistent variable selection by training a sparse multi-layer neural network.
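The continuous relaxation of the Bernoulli inclusion variables can be sketched with a binary Concrete (Gumbel-Softmax style) reparameterization, as below; this is a generic illustration of such a relaxation with hypothetical variational parameters, not the paper's full spike-and-slab variational family.

```python
import torch

def relaxed_bernoulli_sample(logits, temperature=0.5):
    """Reparameterized sample from a binary Concrete (relaxed Bernoulli)
    distribution: a differentiable surrogate for the 0/1 inclusion variable
    of a spike-and-slab weight."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log1p(-u)
    return torch.sigmoid((logits + logistic_noise) / temperature)

# Sketch: gate a weight matrix with relaxed inclusion variables.
logits = torch.zeros(4, 4, requires_grad=True)    # variational inclusion logits
weights = torch.randn(4, 4, requires_grad=True)   # "slab" weights
z = relaxed_bernoulli_sample(logits)              # soft 0/1 gates, differentiable
sparse_weights = z * weights                      # gradients flow into the logits
```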
Learning Manifold Implicitly via Explicit Heat-Kernel Learning
https://papers.nips.cc/paper_files/paper/2020/hash/05e2a0647e260c355dd2b2175edb45b8-Abstract.html
Yufan Zhou, Changyou Chen, Jinhui Xu
https://papers.nips.cc/paper_files/paper/2020/hash/05e2a0647e260c355dd2b2175edb45b8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9765-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-Supplemental.pdf
Manifold learning is a fundamental problem in machine learning with numerous applications. Most of the existing methods directly learn the low-dimensional embedding of the data in some high-dimensional space, and usually lack the flexibility of being directly applicable to down-stream applications. In this paper, we propose the concept of implicit manifold learning, where manifold information is implicitly obtained by learning the associated heat kernel. A heat kernel is the solution of the corresponding heat equation, which describes how ``heat'' transfers on the manifold, thus containing ample geometric information about the manifold. We provide both a practical algorithm and a theoretical analysis of our framework. The learned heat kernel can be applied to various kernel-based machine learning models, including deep generative models (DGM) for data generation and Stein Variational Gradient Descent for Bayesian inference. Extensive experiments show that our framework achieves state-of-the-art results compared to existing methods on the two tasks.
Deep Relational Topic Modeling via Graph Poisson Gamma Belief Network
https://papers.nips.cc/paper_files/paper/2020/hash/05ee45de8d877c3949760a94fa691533-Abstract.html
Chaojie Wang, Hao Zhang, Bo Chen, Dongsheng Wang, Zhengjue Wang, Mingyuan Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/05ee45de8d877c3949760a94fa691533-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9766-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-Supplemental.pdf
To analyze a collection of interconnected documents, relational topic models (RTMs) have been developed to describe both the link structure and document content, exploring their underlying relationships via a single-layer latent representation with limited expressive capability. To better utilize the document network, we first propose graph Poisson factor analysis (GPFA) that constructs a probabilistic model for interconnected documents and also provides closed-form Gibbs sampling update equations, moving beyond sophisticated approximate assumptions of existing RTMs. Extending GPFA, we develop a novel hierarchical RTM named graph Poisson gamma belief network (GPGBN), and further introduce two different Weibull distribution based variational graph auto-encoders for efficient model inference and effective network information aggregation. Experimental results demonstrate that our models extract high-quality hierarchical latent document representations, leading to improved performance over baselines on various graph analytic tasks.
One-bit Supervision for Image Classification
https://papers.nips.cc/paper_files/paper/2020/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html
Hengtong Hu, Lingxi Xie, Zewei Du, Richang Hong, Qi Tian
https://papers.nips.cc/paper_files/paper/2020/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9767-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-Supplemental.zip
This paper presents one-bit supervision, a novel setting of learning from incomplete annotations, in the scenario of image classification. Instead of training a model upon the accurate label of each sample, our setting requires the model to query with a predicted label for each sample and learn from the answer whether the guess is correct. This provides one bit (yes or no) of information, and more importantly, annotating each sample becomes much easier than finding the accurate label from many candidate classes. There are two keys to training a model upon one-bit supervision: improving the guess accuracy and making use of incorrect guesses. For these purposes, we propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm. On three popular image classification benchmarks, our approach achieves higher efficiency in utilizing the limited amount of annotation.
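A minimal sketch of one annotation round under one-bit supervision, with the true labels assumed available only to simulate the yes/no answers: correct guesses yield positive labels, while incorrect guesses yield "negative labels" (classes that can be ruled out and later suppressed during training).

```python
import numpy as np

def one_bit_annotation_round(predictions, true_labels):
    """Simulate one round of one-bit supervision: for each unlabeled sample,
    ask whether the model's predicted class is correct. A "yes" confirms the
    label; a "no" only tells us one class the sample is NOT."""
    positives, negatives = {}, {}
    for i, (guess, truth) in enumerate(zip(predictions, true_labels)):
        if guess == truth:
            positives[i] = guess        # confirmed label
        else:
            negatives[i] = guess        # class `guess` can be ruled out
    return positives, negatives

preds = np.array([0, 1, 2, 2, 1])
truth = np.array([0, 2, 2, 1, 1])
pos, neg = one_bit_annotation_round(preds, truth)
print(pos)   # {0: 0, 2: 2, 4: 1} -> confirmed labels
print(neg)   # {1: 1, 3: 2}       -> ruled-out classes
```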
What is being transferred in transfer learning?
https://papers.nips.cc/paper_files/paper/2020/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html
Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9768-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-Supplemental.zip
One desired capability for machines is the ability to transfer their understanding of one domain to another domain where data is (usually) scarce. Despite the wide adoption of transfer learning in many deep learning applications, we still do not understand what enables a successful transfer and which parts of the network are responsible for it. In this paper, we provide new tools and analysis to address these fundamental questions. Through a series of analyses on transferring to block-shuffled images, we separate the effect of feature reuse from learning high-level statistics of data and show that some benefit of transfer learning comes from the latter. We show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
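Block-shuffling, used above as a probe, is easy to reproduce; the sketch below is a minimal numpy implementation, assuming the image height and width are divisible by the block size.

```python
import numpy as np

def block_shuffle(image, block_size, seed=0):
    """Shuffle an image's non-overlapping blocks, destroying global structure
    while keeping local statistics. Assumes height and width are divisible by
    block_size."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    bh, bw = h // block_size, w // block_size
    blocks = [image[i * block_size:(i + 1) * block_size,
                    j * block_size:(j + 1) * block_size]
              for i in range(bh) for j in range(bw)]
    order = rng.permutation(len(blocks))
    blocks = [blocks[k] for k in order]
    rows = [np.concatenate(blocks[r * bw:(r + 1) * bw], axis=1) for r in range(bh)]
    return np.concatenate(rows, axis=0)

img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
shuffled = block_shuffle(img, block_size=4)   # 2x2 grid of blocks, rearranged
```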
Submodular Maximization Through Barrier Functions
https://papers.nips.cc/paper_files/paper/2020/hash/061412e4a03c02f9902576ec55ebbe77-Abstract.html
Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak
https://papers.nips.cc/paper_files/paper/2020/hash/061412e4a03c02f9902576ec55ebbe77-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9769-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-Supplemental.pdf
In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization. This connection not only improves the running time for constrained submodular maximization but also provides state-of-the-art guarantees. More precisely, for maximizing a monotone submodular function subject to the combination of a $k$-matchoid and $\ell$-knapsack constraints (for $\ell\leq k$), we propose a potential function that can be approximately minimized. Once we minimize the potential function up to an $\epsilon$ error, it is guaranteed that we have found a feasible set with a $2(k+1+\epsilon)$-approximation factor which can indeed be further improved to $(k+1+\epsilon)$ by an enumeration technique. We extensively evaluate the performance of our proposed algorithm over several real-world applications, including a movie recommendation system, summarization tasks for YouTube videos, Twitter feeds and Yelp business locations, and a set cover problem.
Neural Networks with Recurrent Generative Feedback
https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9beb0778f-Abstract.html
Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Tsao, Anima Anandkumar
https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9beb0778f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9770-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-Supplemental.pdf
Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating generative recurrent feedback. We instantiate this design on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces a generative feedback with latent variables to existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework. In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction
https://papers.nips.cc/paper_files/paper/2020/hash/0663a4ddceacb40b095eda264a85f15c-Abstract.html
Jinheon Baek, Dong Bok Lee, Sung Ju Hwang
https://papers.nips.cc/paper_files/paper/2020/hash/0663a4ddceacb40b095eda264a85f15c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9771-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-Supplemental.zip
Many practical graph problems, such as knowledge graph construction and drug-drug interaction prediction, require handling multi-relational graphs. However, handling real-world multi-relational graphs with Graph Neural Networks (GNNs) is often challenging due to their evolving nature, as new entities (nodes) can emerge over time. Moreover, newly emerged entities often have few links, which makes the learning even more difficult. Motivated by this challenge, we introduce a realistic problem of few-shot out-of-graph link prediction, where we not only predict the links between the seen and unseen nodes as in a conventional out-of-knowledge link prediction task but also between the unseen nodes, with only a few edges per node. We tackle this problem with a novel transductive meta-learning framework which we refer to as Graph Extrapolation Networks (GEN). GEN meta-learns both the node embedding network for inductive inference (seen-to-unseen) and the link prediction network for transductive inference (unseen-to-unseen). For transductive link prediction, we further propose a stochastic embedding layer to model uncertainty in the link prediction between unseen entities. We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction. The results show that our model significantly outperforms relevant baselines for out-of-graph link prediction tasks.
Exploiting weakly supervised visual patterns to learn from partial annotations
https://papers.nips.cc/paper_files/paper/2020/hash/066ca7bf90807fcd8e4f1eaef4e4e8f7-Abstract.html
Kaustav Kundu, Joseph Tighe
https://papers.nips.cc/paper_files/paper/2020/hash/066ca7bf90807fcd8e4f1eaef4e4e8f7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9772-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-Supplemental.pdf
As classification datasets progressively get larger in terms of label space and number of examples, annotating them with all labels becomes a non-trivial and expensive task. For example, annotating the entire OpenImage test set can cost $6.5M. Hence, in current large-scale benchmarks such as OpenImages and LVIS, less than 1\% of the labels are annotated across all images. Standard classification models are trained in a manner where these un-annotated labels are ignored. Ignoring these un-annotated labels results in a loss of supervisory signal, which reduces the performance of the classification models. Instead, in this paper, we exploit relationships among images and labels to derive more supervisory signal from the un-annotated labels. We study the effectiveness of our approach across several multi-label computer vision benchmarks, such as CIFAR100, MS-COCO panoptic segmentation, OpenImage and LVIS datasets. Our approach can outperform baselines by a margin of 2-10% across all the datasets on mean average precision (mAP) and mean F1 metrics.
Improving Inference for Neural Image Compression
https://papers.nips.cc/paper_files/paper/2020/hash/066f182b787111ed4cb65ed437f0855b-Abstract.html
Yibo Yang, Robert Bamler, Stephan Mandt
https://papers.nips.cc/paper_files/paper/2020/hash/066f182b787111ed4cb65ed437f0855b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9773-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Supplemental.pdf
We consider the problem of lossy image compression with deep latent variable models. State-of-the-art methods build on hierarchical variational autoencoders (VAEs) and learn inference networks to predict a compressible latent representation of each data point. Drawing on the variational inference perspective on compression, we identify three approximation gaps which limit performance in the conventional approach: an amortization gap, a discretization gap, and a marginalization gap. We propose remedies for each of these three limitations based on ideas related to iterative inference, stochastic annealing for discrete optimization, and bits-back coding, resulting in the first application of bits-back coding to lossy compression. In our experiments, which include extensive baseline comparisons and ablation studies, we achieve new state-of-the-art performance on lossy image compression using an established VAE architecture, by changing only the inference method.
Neuron Merging: Compensating for Pruned Neurons
https://papers.nips.cc/paper_files/paper/2020/hash/0678ca2eae02d542cc931e81b74de122-Abstract.html
Woojeong Kim, Suhyun Kim, Mincheol Park, Geunseok Jeon
https://papers.nips.cc/paper_files/paper/2020/hash/0678ca2eae02d542cc931e81b74de122-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9774-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-Supplemental.pdf
Network pruning is widely used to lighten and accelerate neural network models. Structured network pruning discards whole neurons or filters, leading to accuracy loss. In this work, we propose a novel concept of neuron merging applicable to both fully connected layers and convolution layers, which compensates for the information loss due to the pruned neurons/filters. Neuron merging starts with decomposing the original weights into two matrices/tensors. One of them becomes the new weights for the current layer, and the other is what we name a scaling matrix, guiding the combination of neurons. If the activation function is ReLU, the scaling matrix can be absorbed into the next layer under certain conditions, compensating for the removed neurons. We also propose a data-free and inexpensive method to decompose the weights by utilizing the cosine similarity between neurons. Compared to the pruned model with the same topology, our merged model better preserves the output feature map of the original model; thus, it maintains the accuracy after pruning without fine-tuning. We demonstrate the effectiveness of our approach over network pruning for various model architectures and datasets. As an example, for VGG-16 on CIFAR-10, we achieve an accuracy of 93.16% while reducing 64% of total parameters, without any fine-tuning. The code can be found here: https://github.com/friendshipkim/neuron-merging
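A minimal numpy sketch of the merging step for two consecutive fully connected layers, under the simplifying assumptions stated in the comments (ReLU activation, positive scaling); the paper formulates this as a weight decomposition and also covers convolutional layers and batch normalization.

```python
import numpy as np

def merge_pruned_neurons(W1, W2, keep):
    """Toy sketch of neuron merging for y = W2 @ relu(W1 @ x). Each pruned
    neuron of the first layer is merged into its most cosine-similar kept
    neuron: since relu(a * v) = a * relu(v) for a > 0, the scale
    ||w_pruned|| / ||w_kept|| can be absorbed into the next layer's weights."""
    keep = list(keep)
    W1n = W1 / np.linalg.norm(W1, axis=1, keepdims=True)   # row-normalized neurons
    W2_new = W2[:, keep].copy()
    for j in range(W1.shape[0]):
        if j in keep:
            continue
        sims = W1n[keep] @ W1n[j]                # cosine similarity to kept neurons
        i = int(np.argmax(sims))                 # most similar kept neuron
        scale = np.linalg.norm(W1[j]) / np.linalg.norm(W1[keep[i]])
        W2_new[:, i] += scale * W2[:, j]         # compensate for the pruned neuron
    return W1[keep], W2_new

# Example: prune neuron 2 of a 3-neuron hidden layer and merge it away.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
W1_small, W2_small = merge_pruned_neurons(W1, W2, keep=[0, 1])
```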
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://papers.nips.cc/paper_files/paper/2020/hash/06964dce9addb1c5cb5d6e3d9838f733-Abstract.html
Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A. Raffel, Ekin Dogus Cubuk, Alexey Kurakin, Chun-Liang Li
https://papers.nips.cc/paper_files/paper/2020/hash/06964dce9addb1c5cb5d6e3d9838f733-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9775-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Supplemental.pdf
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model’s performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model’s predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 – just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch’s success. The code is available at https://github.com/google-research/fixmatch.
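The core unlabeled-data objective of FixMatch can be sketched in a few lines of PyTorch; `model`, `weak`, `strong`, and `lambda_u` are assumed to be provided by the surrounding training loop, so this is an illustrative sketch rather than the authors' full implementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold=0.95):
    """Core FixMatch idea: pseudo-label from the weakly-augmented view, keep
    the label only when the model is confident, and train the strongly-
    augmented view to match it."""
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=-1)    # predictions on weak view
        max_prob, pseudo_label = probs.max(dim=-1)
        mask = (max_prob >= threshold).float()          # keep confident guesses only
    logits_strong = model(strong_batch)
    per_example = F.cross_entropy(logits_strong, pseudo_label, reduction="none")
    return (mask * per_example).mean()

# In a training step (lambda_u weights the unlabeled loss):
#   loss = F.cross_entropy(model(x_labeled), y) \
#          + lambda_u * fixmatch_unlabeled_loss(model, weak(u), strong(u))
```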
Reinforcement Learning with Combinatorial Actions: An Application to Vehicle Routing
https://papers.nips.cc/paper_files/paper/2020/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html
Arthur Delarue, Ross Anderson, Christian Tjandraatmadja
https://papers.nips.cc/paper_files/paper/2020/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9776-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Supplemental.zip
Value-function-based methods have long played an important role in reinforcement learning. However, finding the best next action given a value function of arbitrary complexity is nontrivial when the action space is too large for enumeration. We develop a framework for value-function-based deep reinforcement learning with a combinatorial action space, in which the action selection problem is explicitly formulated as a mixed-integer optimization problem. As a motivating example, we present an application of this framework to the capacitated vehicle routing problem (CVRP), a combinatorial optimization problem in which a set of locations must be covered by a single vehicle with limited capacity. On each instance, we model an action as the construction of a single route, and consider a deterministic policy which is improved through a simple policy iteration algorithm. Our approach is competitive with other reinforcement learning methods and achieves an average gap of 1.7% with state-of-the-art OR methods on standard library instances of medium size.
Towards Playing Full MOBA Games with Deep Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html
Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, Yinyuting Yin, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu
https://papers.nips.cc/paper_files/paper/2020/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9777-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-Supplemental.pdf
MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems such as multi-agent coordination, enormous state-action spaces, complex action control, etc. Developing AI for playing MOBA games has accordingly attracted much attention. However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when expanding the hero pool; for instance, OpenAI's Dota AI limits play to a pool of only 17 heroes. As a result, full MOBA games without restrictions are far from being mastered by any existing AI system. In this paper, we propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning. Specifically, we develop a combination of novel and existing learning techniques, including off-policy adaption, multi-head value estimation, curriculum self-play learning, policy distillation, and Monte-Carlo tree search, in training and playing with a large pool of heroes, while addressing the scalability issue. Tested on Honor of Kings, a popular MOBA game, we show how to build superhuman AI agents that can defeat top esports players. The superiority of our AI is demonstrated by the first large-scale performance test of a MOBA AI agent in the literature.
Rankmax: An Adaptive Projection Alternative to the Softmax Function
https://papers.nips.cc/paper_files/paper/2020/hash/070dbb6024b5ef93784428afc71f2146-Abstract.html
Weiwei Kong, Walid Krichene, Nicolas Mayoraz, Steffen Rendle, Li Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/070dbb6024b5ef93784428afc71f2146-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9778-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-Supplemental.pdf
Several machine learning models involve mapping a score vector to a probability vector. Usually, this is done by projecting the score vector onto a probability simplex, and such projections are often characterized as Lipschitz continuous approximations of the argmax function, whose Lipschitz constant is controlled by a parameter that is similar to a softmax temperature. The aforementioned parameter has been observed to affect the quality of these models and is typically either treated as a constant or decayed over time. In this work, we propose a method that adapts this parameter to individual training examples. The resulting method exhibits desirable properties, such as sparsity of its support and numerically efficient implementation, and we find that it significantly outperforms competing non-adaptive projection methods. In our analysis, we also derive the general solution of (Bregman) projections onto the (n, k)-simplex, a result which may be of independent interest.
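For context, the classical Euclidean projection onto the probability simplex (the k=1 case of the (n, k)-simplex) is sketched below; like the projections discussed here it produces sparse outputs, but it is only a building block and not the adaptive Rankmax method itself.

```python
import numpy as np

def project_to_simplex(scores):
    """Euclidean projection of a score vector onto the probability simplex.
    The result is sparse: entries below a data-dependent threshold tau are
    set exactly to zero, and the remaining entries sum to one."""
    s = np.sort(scores)[::-1]
    css = np.cumsum(s)
    rho = np.nonzero(s + (1 - css) / (np.arange(len(s)) + 1) > 0)[0][-1]
    tau = (css[rho] - 1) / (rho + 1)
    return np.maximum(scores - tau, 0.0)

p = project_to_simplex(np.array([2.0, 1.0, 0.1, -1.0]))
print(p, p.sum())   # sparse probability vector summing to 1
```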
Online Agnostic Boosting via Regret Minimization
https://papers.nips.cc/paper_files/paper/2020/hash/07168af6cb0ef9f78dae15739dd73255-Abstract.html
Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran
https://papers.nips.cc/paper_files/paper/2020/hash/07168af6cb0ef9f78dae15739dd73255-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9779-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-Supplemental.pdf
Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules. While in statistical learning numerous boosting methods exist both in the realizable and agnostic settings, in online learning they exist only in the realizable case. In this work we provide the first agnostic online boosting algorithm; that is, given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret. Our algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer to an online booster. Moreover, this reduction extends to the statistical as well as the online realizable settings, thus unifying the 4 cases of statistical/online and agnostic/realizable boosting.
Causal Intervention for Weakly-Supervised Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2020/hash/07211688a0869d995947a8fb11b215d6-Abstract.html
Dong Zhang, Hanwang Zhang, Jinhui Tang, Xian-Sheng Hua, Qianru Sun
https://papers.nips.cc/paper_files/paper/2020/hash/07211688a0869d995947a8fb11b215d6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9780-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-Supplemental.pdf
We present a causal inference framework to improve Weakly-Supervised Semantic Segmentation (WSSS). Specifically, we aim to generate better pixel-level pseudo-masks by using only image-level labels -- the most crucial step in WSSS. We attribute the cause of the ambiguous boundaries of pseudo-masks to the confounding context, e.g., the correct image-level classification of "horse" and "person" may be due not only to the recognition of each instance but also to their co-occurrence context, making the model inspection (e.g., CAM) hard to distinguish between the boundaries. Inspired by this, we propose a structural causal model to analyze the causalities among images, contexts, and class labels. Based on it, we develop a new method: Context Adjustment (CONTA), to remove the confounding bias in image-level classification and thus provide better pseudo-masks as ground-truth for the subsequent segmentation model. On PASCAL VOC 2012 and MS-COCO, we show that CONTA boosts various popular WSSS methods to new state-of-the-art results.
Belief Propagation Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/07217414eb3fbe24d4e5b6cafb91ca18-Abstract.html
Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2020/hash/07217414eb3fbe24d4e5b6cafb91ca18-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9781-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-Supplemental.pdf
Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize Belief Propagation (BP). In its strictest form, a BPNN layer (BPNN-D) is a learned iterative operator that provably maintains many of the desirable properties of BP for any choice of the parameters. Empirically, we show that through training, BPNN-D learns to perform the task better than the original BP: it converges 1.7x faster on Ising models while providing tighter bounds. On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art handcrafted methods, while returning an estimate of comparable quality.
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
https://papers.nips.cc/paper_files/paper/2020/hash/0740bb92e583cd2b88ec7c59f985cb41-Abstract.html
Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora
https://papers.nips.cc/paper_files/paper/2020/hash/0740bb92e583cd2b88ec7c59f985cb41-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9782-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-Supplemental.pdf
Adversarial training is a popular method to give neural nets robustness against adversarial perturbations. In practice, adversarial training leads to low robust training loss. However, a rigorous explanation for why this happens under natural conditions is still missing. Recently, a convergence theory of standard (non-adversarial) supervised training was developed by various groups for {\em very overparametrized} nets. It is unclear how to extend these results to adversarial training because of the min-max objective. Recently, a first step in this direction was made by Gao et al. using tools from online learning, but they require the width of the net to be \emph{exponential} in input dimension $d$, and with an unnatural activation function. Our work proves convergence to low robust training loss for \emph{polynomial} width instead of exponential, under natural assumptions and with ReLU activations. A key element of our proof is showing that ReLU networks near initialization can approximate the step function, which may be of independent interest.
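As an illustration of the step-function approximation mentioned above (a sketch of the idea, not the paper's exact construction), two scaled ReLUs suffice:

```latex
% An illustrative approximation of the unit step by two ReLUs:
% the expression below equals 0 for x <= 0, equals 1 for x >= eps,
% and is linear in between, so it converges pointwise to the unit step
% as eps -> 0+ (for x != 0).
\mathrm{step}(x) \;\approx\;
\operatorname{ReLU}\!\left(\tfrac{x}{\varepsilon}\right)
\;-\;
\operatorname{ReLU}\!\left(\tfrac{x}{\varepsilon} - 1\right)
```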
Post-training Iterative Hierarchical Data Augmentation for Deep Networks
https://papers.nips.cc/paper_files/paper/2020/hash/074177d3eb6371e32c16c55a3b8f706b-Abstract.html
Adil Khan, Khadija Fraz
https://papers.nips.cc/paper_files/paper/2020/hash/074177d3eb6371e32c16c55a3b8f706b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9783-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-Supplemental.pdf
In this paper, we propose a new iterative hierarchical data augmentation (IHDA) method to fine-tune trained deep neural networks to improve their generalization performance. The IHDA is motivated by three key insights: (1) Deep networks (DNs) are good at learning multi-level representations from data. (2) Performing data augmentation (DA) in the learned feature spaces of DNs can significantly improve their performance. (3) Implementing DA in hard-to-learn regions of a feature space can effectively augment the dataset to improve generalization. Accordingly, the IHDA performs DA in a deep feature space, at level l, by transforming it into a distribution space and synthesizing new samples using the learned distributions for data points that lie in hard-to-classify regions, which are identified by analyzing the neighborhood characteristics of each data point. The synthesized samples are used to fine-tune the parameters of the subsequent layers. The same procedure is then repeated for the feature space at level l+1. To avoid overfitting, the concept of dropout probability is employed, which is gradually relaxed as the IHDA works towards high-level feature spaces. IHDA provides state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet for several DNs, and beats the performance of existing state-of-the-art DA approaches for the same networks on these datasets. Finally, to demonstrate its domain-agnostic properties, we show the significant improvements that IHDA provides for a deep neural network on a non-image wearable sensor-based activity recognition benchmark.
Debugging Tests for Model Explanations
https://papers.nips.cc/paper_files/paper/2020/hash/075b051ec3d22dac7b33f788da631fd4-Abstract.html
Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim
https://papers.nips.cc/paper_files/paper/2020/hash/075b051ec3d22dac7b33f788da631fd4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9784-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Supplemental.pdf
We investigate whether post-hoc model explanations are effective for diagnosing model errors--model debugging. In response to the challenge of explaining a model's prediction, a vast array of explanation methods have been proposed. Despite increasing use, it is unclear if they are effective. To start, we categorize \textit{bugs}, based on their source, into \textit{data, model, and test-time} contamination bugs. For several explanation methods, we assess their ability to: detect spurious correlation artifacts (data contamination), diagnose mislabeled training examples (data contamination), differentiate between a (partially) re-initialized model and a trained one (model contamination), and detect out-of-distribution inputs (test-time contamination). We find that the methods tested are able to diagnose a spurious background bug, but not conclusively identify mislabeled training examples. In addition, a class of methods that modify the back-propagation algorithm is invariant to the higher-layer parameters of a deep network and is hence ineffective for diagnosing model contamination. We complement our analysis with a human subject study, and find that subjects fail to identify defective models using attributions, but instead rely, primarily, on model predictions. Taken together, our results provide guidance for practitioners and researchers turning to explanations as tools for model debugging.
Robust compressed sensing using generative models
https://papers.nips.cc/paper_files/paper/2020/hash/07cb5f86508f146774a2fac4373a8e50-Abstract.html
Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
https://papers.nips.cc/paper_files/paper/2020/hash/07cb5f86508f146774a2fac4373a8e50-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9785-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-Supplemental.pdf
We consider estimating a high dimensional signal in $\R^n$ using a sublinear number of linear measurements. In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the signal is represented by a deep generative model $G: \R^k \rightarrow \R^n$. Classical recovery approaches such as empirical risk minimization (ERM) are guaranteed to succeed when the measurement matrix is sub-Gaussian. However, when the measurement matrix and measurements are heavy tailed or have outliers, recovery may fail dramatically. In this paper we propose an algorithm inspired by the Median-of-Means (MOM). Our algorithm guarantees recovery for heavy tailed data, even in the presence of outliers. Theoretically, our results show our novel MOM-based algorithm enjoys the same sample complexity guarantees as ERM under sub-Gaussian assumptions. Our experiments validate both aspects of our claims: other algorithms are indeed fragile and fail under heavy tailed and/or corrupted data, while our approach exhibits the predicted robustness.
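A hedged sketch of the Median-of-Means idea on a toy instance may clarify how the recovery stays stable under corrupted measurements: gradient steps are taken only through the batch whose loss is the median, so a few grossly corrupted measurements are effectively ignored. The linear stand-in generator, batch count, and step size below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of MoM-style recovery with a generative prior: descend on the median
# (over batches) of the per-batch mean squared residual.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 5, 150            # signal dim, latent dim, number of measurements
W = rng.normal(size=(n, k))      # stand-in linear "generator": G(z) = W z (a trained DNN in the paper)
G = lambda z: W @ z

z_true = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ G(z_true)
outliers = rng.choice(m, size=5, replace=False)
y[outliers] += rng.normal(scale=50.0, size=5)        # gross corruptions in a few measurements

batches = np.array_split(rng.permutation(m), 15)     # fixed partition into batches

def mom_step(z, lr=0.2):
    """One descent step on the median (over batches) of the mean squared residual."""
    resid = A @ G(z) - y
    losses = np.array([np.mean(resid[b] ** 2) for b in batches])
    med = batches[np.argsort(losses)[len(losses) // 2]]    # indices of the median batch
    grad = 2.0 * (A[med] @ W).T @ resid[med] / len(med)
    return z - lr * grad

z = rng.normal(size=k)
for _ in range(500):
    z = mom_step(z)
print("relative recovery error:", np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```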
Fairness without Demographics through Adversarially Reweighted Learning
https://papers.nips.cc/paper_files/paper/2020/hash/07fc15c9d169ee48573edd749d25945d-Abstract.html
Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed Chi
https://papers.nips.cc/paper_files/paper/2020/hash/07fc15c9d169ee48573edd749d25945d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9786-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Supplemental.zip
Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns. However, in practice, factors like privacy and regulation often preclude the collection of protected features, or their use for training or inference, severely limiting the applicability of traditional fairness research. Therefore, we ask: How can we train an ML model to improve fairness when we do not even know the protected group memberships? In this work we address this problem by proposing Adversarially Reweighted Learning (ARL). In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues, and can be used to co-train an adversarial reweighting approach for improving fairness. Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets, outperforming state-of-the-art alternatives.
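The co-training loop described above can be sketched as alternating updates between a learner that minimizes a weighted loss and an adversary that, seeing only non-protected features and the label, reweights examples to maximize it. The linear models, synthetic data, and hyperparameters below are illustrative assumptions.

```python
# Hedged sketch of adversarially reweighted learning: the adversary up-weights examples the
# learner handles poorly, without access to any group identity.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * torch.randn(512) > 0).float()

learner = nn.Linear(10, 1)                       # main classifier
adversary = nn.Linear(11, 1)                     # reweighter: sees features and label, never a group id
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")
adv_in = torch.cat([X, y.unsqueeze(1)], dim=1)

for _ in range(200):
    # Learner: minimize the adversarially weighted loss (weights treated as constants).
    per_example = bce(learner(X).squeeze(1), y)
    w = torch.softmax(adversary(adv_in).squeeze(1), dim=0) * len(y)
    opt_l.zero_grad()
    (w.detach() * per_example).mean().backward()
    opt_l.step()

    # Adversary: maximize the weighted loss, i.e. up-weight hard examples.
    per_example = bce(learner(X).squeeze(1), y).detach()
    w = torch.softmax(adversary(adv_in).squeeze(1), dim=0) * len(y)
    opt_a.zero_grad()
    (-(w * per_example).mean()).backward()
    opt_a.step()
```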
Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model
https://papers.nips.cc/paper_files/paper/2020/hash/08058bf500242562c0d031ff830ad094-Abstract.html
Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, Sergey Levine
https://papers.nips.cc/paper_files/paper/2020/hash/08058bf500242562c0d031ff830ad094-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9787-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Supplemental.pdf
Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these high-dimensional observation spaces present a number of challenges in practice, since the policy must now solve two problems: representation learning and task learning. In this work, we tackle these two problems separately, by explicitly learning latent representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC provides a novel and principled approach for unifying stochastic sequential models and RL into a single method, by learning a compact latent representation and then performing RL in the model's learned latent space. Our experimental evaluation demonstrates that our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website.
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
https://papers.nips.cc/paper_files/paper/2020/hash/08425b881bcde94a383cd258cea331be-Abstract.html
Jack Parker-Holder, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alexander Peysakhovich, Aldo Pacchiano, Jakob Foerster
https://papers.nips.cc/paper_files/paper/2020/hash/08425b881bcde94a383cd258cea331be-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9788-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-Supplemental.pdf
Over the last decade, a single algorithm has changed many facets of our lives - Stochastic Gradient Descent (SGD). In the era of ever decreasing loss functions, SGD and its various offspring have become the go-to optimization tool in machine learning and are a key component of the success of deep neural networks (DNNs). While SGD is guaranteed to converge to a local optimum (under loose assumptions), in some cases it may matter which local optimum is found, and this is often context-dependent. Examples frequently arise in machine learning, from shape-versus-texture features to ensemble methods and zero-shot coordination. In these settings, there are desired solutions which SGD on `standard' loss functions will not find, since it instead converges to the `easy' solutions. In this paper, we present a different approach. Rather than following the gradient, which corresponds to a locally greedy direction, we instead follow the eigenvectors of the Hessian. By iteratively following and branching amongst the ridges, we effectively span the loss surface to find qualitatively different solutions. We show both theoretically and experimentally that our method, called Ridge Rider (RR), offers a promising direction for a variety of challenging problems.
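A toy sketch of the core primitive, following a Hessian eigenvector rather than the gradient and branching on its orientation, is given below. The two-parameter loss, step size, and stopping rule are illustrative assumptions; the full method adds further machinery for selecting and maintaining ridges.

```python
# Hedged sketch of "ride the ridge": at a saddle, pick the eigenvector of the most negative
# Hessian eigenvalue and step along it, branching on both orientations to reach distinct minima.
import torch

def loss(p):                                     # toy loss: a saddle at the origin, two distinct minima
    x, y = p[0], p[1]
    return (x ** 2 - 1) ** 2 + y ** 2

H0 = torch.autograd.functional.hessian(loss, torch.zeros(2))
evals, evecs = torch.linalg.eigh(H0)
ridge = evecs[:, 0]                              # eigenvector of the most negative eigenvalue

for sign in (1.0, -1.0):                         # branch on both orientations of the ridge
    p, v = torch.zeros(2), sign * ridge
    for _ in range(100):
        H = torch.autograd.functional.hessian(loss, p)
        _, vecs = torch.linalg.eigh(H)
        overlaps = vecs.T @ v
        i = int(torch.argmax(overlaps.abs()))
        v = vecs[:, i] * torch.sign(overlaps[i]) # track the same eigenvector, keep its orientation
        if loss(p + 0.05 * v) >= loss(p):        # stop once riding the ridge no longer lowers the loss
            break
        p = p + 0.05 * v
    print(f"branch {sign:+.0f}: reached {p.tolist()}, loss {loss(p).item():.4f}")
```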
The route to chaos in routing games: When is price of anarchy too optimistic?
https://papers.nips.cc/paper_files/paper/2020/hash/0887f1a5b9970ad13f46b8c1485f7900-Abstract.html
Thiparat Chotibut, Fryderyk Falniowski, Michał Misiurewicz, Georgios Piliouras
https://papers.nips.cc/paper_files/paper/2020/hash/0887f1a5b9970ad13f46b8c1485f7900-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9789-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-Supplemental.pdf
Routing games are amongst the most studied classes of games in game theory. Their most well-known property is that learning dynamics typically converge to equilibria implying approximately optimal performance (low Price of Anarchy). We perform a stress test for these classic results by studying the ubiquitous learning dynamics, Multiplicative Weights Update (MWU), in different classes of congestion games, uncovering intricate non-equilibrium phenomena. We study MWU using the actual game costs without applying cost normalization to $[0,1]$. Although this non-standard assumption leads to large regret, it captures realistic agents' behaviors. Namely, as the total demand increases, agents respond more aggressively to unbearably large costs. We start with the illustrative case of non-atomic routing games with two paths of linear cost, and show that every system has a carrying capacity, above which it becomes unstable. If the equilibrium flow is a symmetric $50-50\%$ split, the system exhibits one period-doubling bifurcation. Although the Price of Anarchy is equal to one, in the large population limit the time-average social cost for all but a zero measure set of initial conditions converges to its worst possible value. For asymmetric equilibrium flows, increasing the demand eventually forces the system into Li-Yorke chaos with positive topological entropy and periodic orbits of all possible periods. Remarkably, in all non-equilibrating regimes, the time-average flows on the paths converge {\it exactly} to the equilibrium flows, a property akin to no-regret learning in zero-sum games. We extend our results to games with arbitrarily many strategies, polynomial cost functions, non-atomic as well as atomic routing games, and heterogeneous users.
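The two-path instability is easy to reproduce numerically: below, Multiplicative Weights Update is applied with unnormalized linear costs and the late-time flow on path 1 is printed for increasing demand, illustrating how the dynamics move away from a single fixed point as the demand grows. The cost coefficients, learning rate, and demand levels are illustrative choices.

```python
# Minimal sketch of MWU in a two-path non-atomic routing game with linear costs that scale
# with the total demand (no cost normalization to [0,1]).
import numpy as np

def mwu_orbit(total_demand, steps=2000, eps=0.1, a=1.0, b=1.0, x0=0.3):
    """Fraction of flow on path 1 under exponential-weights MWU with unnormalized costs."""
    x = x0
    orbit = []
    for _ in range(steps):
        c1 = a * total_demand * x            # linear path costs grow with the total demand
        c2 = b * total_demand * (1 - x)
        w1 = x * np.exp(-eps * c1)
        w2 = (1 - x) * np.exp(-eps * c2)
        x = w1 / (w1 + w2)
        orbit.append(x)
    return np.array(orbit[-20:])             # late-time behaviour only

for N in (10, 50, 90):
    tail = mwu_orbit(N)
    print(f"demand {N:>3}: distinct late-time flows ~ {np.unique(tail.round(3))}")
```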
Online Algorithm for Unsupervised Sequential Selection with Contextual Information
https://papers.nips.cc/paper_files/paper/2020/hash/08e5d8066881eab185d0de9db3b36c7f-Abstract.html
Arun Verma, Manjesh Kumar Hanawal, Csaba Szepesvari, Venkatesh Saligrama
https://papers.nips.cc/paper_files/paper/2020/hash/08e5d8066881eab185d0de9db3b36c7f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9790-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-Supplemental.pdf
In this paper, we study Contextual Unsupervised Sequential Selection (USS), a new variant of the stochastic contextual bandits problem where the loss of an arm cannot be inferred from the observed feedback. In our setup, arms are associated with fixed costs and are ordered, forming a cascade. In each round, a context is presented, and the learner selects the arms sequentially till some depth. The total cost incurred by stopping at an arm is the sum of fixed costs of arms selected and the stochastic loss associated with the arm. The learner's goal is to learn a decision rule that maps contexts to arms so as to minimize the total expected loss. The problem is challenging because we face an unsupervised setting in which the total loss cannot be estimated. Clearly, learning is feasible only if the optimal arm can be inferred (explicitly or implicitly) from the problem structure. We observe that learning is still possible when the problem instance satisfies the so-called 'Contextual Weak Dominance' (CWD) property. Under CWD, we propose an algorithm for the contextual USS problem and demonstrate that it has sub-linear regret. Experiments on synthetic and real datasets validate our algorithm.
Adapting Neural Architectures Between Domains
https://papers.nips.cc/paper_files/paper/2020/hash/08f38e0434442128fab5ead6217ca759-Abstract.html
Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
https://papers.nips.cc/paper_files/paper/2020/hash/08f38e0434442128fab5ead6217ca759-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9791-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-Supplemental.pdf
Neural architecture search (NAS) has demonstrated impressive performance in automatically designing high-performance neural networks. The power of deep neural networks is meant to be unleashed on large volumes of data (e.g. ImageNet), but the architecture search is often executed on another, smaller dataset (e.g. CIFAR-10) so that it finishes in a feasible time. However, it is hard to guarantee that the optimal architecture derived on the proxy task could maintain its advantages on another more challenging dataset. This paper aims to improve the generalization of neural architectures via domain adaptation. We analyze the generalization bounds of the derived architecture and suggest its close relations with the validation error and the data distribution distance on both domains. These theoretical analyses lead to AdaptNAS, a novel and principled approach to adapt neural architectures between domains in NAS. Our experimental evaluation shows that only a small part of ImageNet will be sufficient for AdaptNAS to extend its architecture success to the entire ImageNet and outperform state-of-the-art comparison algorithms.
What went wrong and when? Instance-wise feature importance for time-series black-box models
https://papers.nips.cc/paper_files/paper/2020/hash/08fa43588c2571ade19bc0fa5936e028-Abstract.html
Sana Tonekaboni, Shalmali Joshi, Kieran Campbell, David K. Duvenaud, Anna Goldenberg
https://papers.nips.cc/paper_files/paper/2020/hash/08fa43588c2571ade19bc0fa5936e028-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9792-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-Supplemental.pdf
Explanations of time series models are useful for high stakes applications like healthcare but have received little attention in machine learning literature. We propose FIT, a framework that evaluates the importance of observations for a multivariate time-series black-box model by quantifying the shift in the predictive distribution over time. FIT defines the importance of an observation based on its contribution to the distributional shift under a KL-divergence that contrasts the predictive distribution against a counterfactual where the rest of the features are unobserved. We also demonstrate the need to control for time-dependent distribution shifts. We compare with state-of-the-art baselines on simulated and real-world clinical data and demonstrate that our approach is superior in identifying important time points and observations throughout the time series.
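As a rough, simplified illustration of scoring an observation by the distributional shift it induces, the sketch below measures the KL divergence between the black-box prediction with the observed feature value and with that feature counterfactually resampled from its own history. The stand-in predictor and the history-resampling counterfactual are illustrative simplifications of the paper's learned generative model.

```python
# Hedged sketch: importance of feature i at time t = expected KL shift in the predictive
# distribution when feature i is replaced by a counterfactual value.
import numpy as np

rng = np.random.default_rng(0)

def predict(x_t):                       # stand-in black-box: P(y=1 | current observation)
    return 1.0 / (1.0 + np.exp(-(2.0 * x_t[0] - 0.5 * x_t[1])))

def kl_bernoulli(p, q):
    p, q = np.clip(p, 1e-6, 1 - 1e-6), np.clip(q, 1e-6, 1 - 1e-6)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

X = rng.normal(size=(50, 2))            # a 50-step, 2-feature time series
X[30:, 0] += 3.0                        # feature 0 shifts late in the series

t = 35
p_full = predict(X[t])
for i in range(2):
    scores = []
    for _ in range(100):                # Monte Carlo over counterfactual values of feature i
        x_cf = X[t].copy()
        x_cf[i] = rng.choice(X[:t, i])  # resample feature i from its own history
        scores.append(kl_bernoulli(p_full, predict(x_cf)))
    print(f"importance of feature {i} at t={t}: {np.mean(scores):.3f}")
```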
Towards Better Generalization of Adaptive Gradient Methods
https://papers.nips.cc/paper_files/paper/2020/hash/08fb104b0f2f838f3ce2d2b3741a12c2-Abstract.html
Yingxue Zhou, Belhal Karimi, Jinxing Yu, Zhiqiang Xu, Ping Li
https://papers.nips.cc/paper_files/paper/2020/hash/08fb104b0f2f838f3ce2d2b3741a12c2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9793-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Supplemental.pdf
Adaptive gradient methods such as AdaGrad, RMSprop and Adam have been optimizers of choice for deep learning due to their fast training speed. However, it was recently observed that their generalization performance is often worse than that of SGD for over-parameterized neural networks. While new algorithms such as AdaBound, SWAT, and Padam were proposed to improve the situation, the provided analyses only establish optimization bounds for the training objective, leaving the critical question of generalization capacity unexplored. To close this gap, we propose \textit{\textbf{S}table \textbf{A}daptive \textbf{G}radient \textbf{D}escent} (\textsc{SAGD}) for nonconvex optimization which leverages differential privacy to boost the generalization performance of adaptive gradient methods. Theoretical analyses show that \textsc{SAGD} has high-probability convergence to a population stationary point. We further conduct experiments on various popular deep learning tasks and models. Experimental results illustrate that \textsc{SAGD} is empirically competitive and often better than baselines.
Learning Guidance Rewards with Trajectory-space Smoothing
https://papers.nips.cc/paper_files/paper/2020/hash/0912d0f15f1394268c66639e39b26215-Abstract.html
Tanmay Gangwani, Yuan Zhou, Jian Peng
https://papers.nips.cc/paper_files/paper/2020/hash/0912d0f15f1394268c66639e39b26215-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9794-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-Supplemental.pdf
Long-term temporal credit assignment is an important challenge in deep reinforcement learning (RL). It refers to the ability of the agent to attribute actions to consequences that may occur after a long time interval. Existing policy-gradient and Q-learning algorithms typically rely on dense environmental rewards that provide rich short-term supervision and help with credit assignment. However, they struggle to solve tasks with delays between an action and the corresponding rewarding feedback. To make credit assignment easier, recent works have proposed algorithms to learn dense "guidance" rewards that could be used in place of the sparse or delayed environmental rewards. This paper is in the same vein -- starting with a surrogate RL objective that involves smoothing in the trajectory-space, we arrive at a new algorithm for learning guidance rewards. We show that the guidance rewards have an intuitive interpretation, and can be obtained without training any additional neural networks. Due to the ease of integration, we use the guidance rewards in a few popular algorithms (Q-learning, Actor-Critic, Distributional-RL) and present results in single-agent and multi-agent tasks that elucidate the benefit of our approach when the environmental rewards are sparse or delayed.
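One simple way to obtain dense guidance rewards without training any additional networks, in the spirit of the abstract, is to credit each (state, action) pair with the average episodic return of the buffered trajectories that contain it. The toy delayed-reward environment and this particular credit rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: turn a delayed episodic return into dense per-step "guidance" rewards by
# averaging returns over the buffered trajectories containing each (state, action) pair.
import collections
import numpy as np

rng = np.random.default_rng(0)
buffer = []                                      # list of (trajectory, episodic_return)

# Fake environment: 5-step chain, reward revealed only at the end of the episode.
for _ in range(200):
    traj = [(s, int(rng.integers(2))) for s in range(5)]       # (state, action) pairs
    episodic_return = sum(1.0 for s, a in traj if a == 1)      # delayed: only known at the end
    buffer.append((traj, episodic_return))

# Dense guidance reward: average episodic return over trajectories containing (s, a).
sums, counts = collections.defaultdict(float), collections.defaultdict(int)
for traj, ret in buffer:
    for sa in set(traj):
        sums[sa] += ret
        counts[sa] += 1
guidance = {sa: sums[sa] / counts[sa] for sa in sums}

for s in range(5):
    print(f"state {s}: r_hat(a=0)={guidance[(s, 0)]:.2f}  r_hat(a=1)={guidance[(s, 1)]:.2f}")
```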
Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html
Chaobing Song, Yong Jiang, Yi Ma
https://papers.nips.cc/paper_files/paper/2020/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9795-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-Supplemental.pdf
In this paper, we introduce a simplified and unified method for finite-sum convex optimization, named \emph{Variance Reduction via Accelerated Dual Averaging (VRADA)}. In the general convex and smooth setting, VRADA can attain an $O\big(\frac{1}{n}\big)$-accurate solution in $O(n\log\log n)$ number of stochastic gradient evaluations, where $n$ is the number of samples; meanwhile, VRADA matches the lower bound of this setting up to a $\log\log n$ factor. In the strongly convex and smooth setting, VRADA matches the lower bound in the regime $n \le \Theta(\kappa)$, while it improves the rate in the regime $n\gg \kappa$ to $O\big(n +\frac{n\log(1/\epsilon)}{\log(n/\kappa)}\big)$, where $\kappa$ is the condition number. Besides improving the best known complexity results, VRADA has more unified and simplified algorithmic implementation and convergence analysis for both the general convex and strongly convex settings. Through experiments on real datasets, we show the good performance of VRADA over existing methods for large-scale machine learning problems.
Tree! I am no Tree! I am a low dimensional Hyperbolic Embedding
https://papers.nips.cc/paper_files/paper/2020/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html
Rishi Sonthalia, Anna Gilbert
https://papers.nips.cc/paper_files/paper/2020/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9796-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-Supplemental.zip
Given data, finding a faithful low-dimensional hyperbolic embedding of the data is a key method by which we can extract hierarchical information or learn representative geometric features of the data. In this paper, we explore a new method for learning hyperbolic representations by taking a metric-first approach. Rather than determining the low-dimensional hyperbolic embedding directly, we learn a tree structure on the data. This tree structure can then be used directly to extract hierarchical information, embedded into a hyperbolic manifold using Sarkar's construction \cite{sarkar}, or used as a tree approximation of the original metric. To this end, we present a novel fast algorithm \textsc{TreeRep} such that, given a $\delta$-hyperbolic metric (for any $\delta \geq 0$), the algorithm learns a tree structure that approximates the original metric. In the case when $\delta = 0$, we show analytically that \textsc{TreeRep} exactly recovers the original tree structure. We show empirically that \textsc{TreeRep} is not only many orders of magnitude faster than previously known algorithms, but also produces metrics with lower average distortion and higher mean average precision than most previous algorithms for learning hyperbolic embeddings, extracting hierarchical information, and approximating metrics via tree metrics.
Deep Structural Causal Models for Tractable Counterfactual Inference
https://papers.nips.cc/paper_files/paper/2020/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html
Nick Pawlowski, Daniel Coelho de Castro, Ben Glocker
https://papers.nips.cc/paper_files/paper/2020/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9797-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Supplemental.zip
We formulate a general framework for building structural causal models (SCMs) with deep learning components. The proposed approach employs normalising flows and variational inference to enable tractable inference of exogenous noise variables - a crucial step for counterfactual inference that is missing from existing deep causal learning methods. Our framework is validated on a synthetic dataset built on MNIST as well as on a real-world medical dataset of brain MRI scans. Our experimental results indicate that we can successfully train deep SCMs that are capable of all three levels of Pearl's ladder of causation: association, intervention, and counterfactuals, giving rise to a powerful new approach for answering causal questions in imaging applications and beyond.
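The counterfactual procedure that the framework makes tractable follows Pearl's abduction-action-prediction recipe, which is easiest to see with an invertible additive-noise mechanism. The tiny two-variable SCM below (an age-to-volume toy loosely inspired by the imaging setting) is an illustrative assumption; in the paper the mechanisms are deep networks and normalising flows.

```python
# Minimal sketch of counterfactual inference with an invertible mechanism:
# (1) abduct the exogenous noise, (2) intervene on the parent, (3) re-predict the child.
import numpy as np

rng = np.random.default_rng(0)

def mechanism(age):                    # structural assignment: child = f(parent) + noise
    return 0.1 * age

age = 60.0
noise = rng.normal(scale=0.5)
volume = mechanism(age) + noise        # observed child variable

# Counterfactual query: "what would the volume have been, had age been 30?"
abducted_noise = volume - mechanism(age)         # step 1: abduction (invert the mechanism)
cf_age = 30.0                                    # step 2: action (intervene on the parent)
cf_volume = mechanism(cf_age) + abducted_noise   # step 3: prediction
print(f"observed volume {volume:.3f} -> counterfactual volume {cf_volume:.3f}")
```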
Convolutional Generation of Textured 3D Meshes
https://papers.nips.cc/paper_files/paper/2020/hash/098d86c982354a96556bd861823ebfbd-Abstract.html
Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, Aurelien Lucchi
https://papers.nips.cc/paper_files/paper/2020/hash/098d86c982354a96556bd861823ebfbd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9798-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-Supplemental.zip
While recent generative models for 2D images achieve impressive visual results, they clearly lack the ability to perform 3D reasoning. This heavily restricts the degree of control over generated objects as well as the possible applications of such models. In this work, we bridge this gap by leveraging recent advances in differentiable rendering. We design a framework that can generate triangle meshes and associated high-resolution texture maps, using only 2D supervision from single-view natural images. A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN. We demonstrate the efficacy of our method on Pascal3D+ Cars and CUB, both in an unconditional setting and in settings where the model is conditioned on class labels, attributes, and text. Finally, we propose an evaluation methodology that assesses the mesh and texture quality separately.
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html
Jianfei Chen, Yu Gai, Zhewei Yao, Michael W. Mahoney, Joseph E. Gonzalez
https://papers.nips.cc/paper_files/paper/2020/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9799-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-Supplemental.pdf
Fully quantized training (FQT), which uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model, is a promising approach to accelerate the training of deep neural networks. One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties. In this paper, we address this problem by presenting a statistical framework for analyzing FQT algorithms. We view the quantized gradient of FQT as a stochastic estimator of its full precision counterpart, a procedure known as quantization-aware training (QAT). We show that the FQT gradient is an unbiased estimator of the QAT gradient, and we discuss the impact of gradient quantization on its variance. Inspired by these theoretical results, we develop two novel gradient quantizers, and we show that these have smaller variance than the existing per-tensor quantizer. For training ResNet-50 on ImageNet, our 5-bit block Householder quantizer achieves only 0.5% validation accuracy loss relative to QAT, comparable to the existing INT8 baseline.
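The statistical viewpoint starts from quantizers that are unbiased by construction, such as stochastic rounding; the sketch below verifies the unbiasedness numerically. The per-tensor scaling is an illustrative choice, and the paper's block Householder quantizer is a more elaborate variance-reducing design.

```python
# Minimal sketch of a stochastic-rounding gradient quantizer: the quantized gradient is an
# unbiased estimator of the full-precision one, with variance determined by the quantizer.
import numpy as np

rng = np.random.default_rng(0)

def quantize_stochastic(g, bits=5):
    """Low-bitwidth quantization via stochastic rounding (unbiased by construction)."""
    levels = 2 ** bits - 1
    scale = np.max(np.abs(g)) / (levels / 2) + 1e-12      # per-tensor scale (illustrative choice)
    x = g / scale
    floor = np.floor(x)
    x_q = floor + (rng.random(g.shape) < (x - floor))     # round up with prob. equal to the fraction
    return x_q * scale

g = rng.normal(size=1000)
estimates = np.stack([quantize_stochastic(g) for _ in range(2000)])
print("max |E[quantized] - full precision|:", np.max(np.abs(estimates.mean(axis=0) - g)))
print("mean per-coordinate std of the quantizer:", estimates.std(axis=0).mean())
```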
Better Set Representations For Relational Reasoning
https://papers.nips.cc/paper_files/paper/2020/hash/09ccf3183d9e90e5ae1f425d5f9b2c00-Abstract.html
Qian Huang, Horace He, Abhay Singh, Yan Zhang, Ser Nam Lim, Austin R. Benson
https://papers.nips.cc/paper_files/paper/2020/hash/09ccf3183d9e90e5ae1f425d5f9b2c00-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9800-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-Supplemental.zip
Incorporating relational reasoning into neural networks has greatly expanded their capabilities and scope. One defining trait of relational reasoning is that it operates on a set of entities, as opposed to standard vector representations. Existing end-to-end approaches for relational reasoning typically extract entities from inputs by directly interpreting the latent feature representations as a set. We show that these approaches do not respect set permutational invariance and thus have fundamental representational limitations. To resolve this limitation, we propose a simple and general network module called Set Refiner Network (SRN). We first use synthetic image experiments to demonstrate how our approach effectively decomposes objects without explicit supervision. Then, we insert our module into existing relational reasoning models and show that respecting set invariance leads to substantial gains in prediction performance and robustness on several relational reasoning tasks. Code can be found at github.com/CUAI/BetterSetRepresentations.
AutoSync: Learning to Synchronize for Data-Parallel Distributed Deep Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0a2298a72858d90d5c4b4fee954b6896-Abstract.html
Hao Zhang, Yuan Li, Zhijie Deng, Xiaodan Liang, Lawrence Carin, Eric Xing
https://papers.nips.cc/paper_files/paper/2020/hash/0a2298a72858d90d5c4b4fee954b6896-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9801-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-Supplemental.pdf
Synchronization is a key step in data-parallel distributed machine learning (ML). Different synchronization systems and strategies perform differently, and to achieve optimal parallel training throughput requires synchronization strategies that adapt to model structures and cluster configurations. Existing synchronization systems often only consider a single or a few synchronization aspects, and the burden of deciding the right synchronization strategy is then placed on the ML practitioners, who may lack the required expertise. In this paper, we develop a model- and resource-dependent representation for synchronization, which unifies multiple synchronization aspects ranging from architecture, message partitioning, placement scheme, to communication topology. Based on this representation, we build an end-to-end pipeline, AutoSync, to automatically optimize synchronization strategies given model structures and resource specifications, lowering the bar for data-parallel distributed ML. By learning from low-shot data collected in only 200 trial runs, AutoSync can discover synchronization strategies up to 1.6x better than manually optimized ones. We develop transfer-learning mechanisms to further reduce the auto-optimization cost -- the simulators can transfer among similar model architectures, among similar cluster configurations, or both. We also present a dataset that contains over 10000 synchronization strategies and run-time pairs on a diverse set of models and cluster specifications.
A Combinatorial Perspective on Transfer Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0a3b6f64f0523984e51323fe53b8c504-Abstract.html
Jianan Wang, Eren Sezener, David Budden, Marcus Hutter, Joel Veness
https://papers.nips.cc/paper_files/paper/2020/hash/0a3b6f64f0523984e51323fe53b8c504-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9802-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-Review.html
null
Human intelligence is characterized not only by the capacity to learn complex skills, but also by the ability to rapidly adapt and acquire new skills within an ever-changing environment. In this work we study how the learning of modular solutions can allow for effective generalization to both unseen and potentially differently distributed data. Our main postulate is that the combination of task segmentation, modular learning and memory-based ensembling can give rise to generalization on an exponentially growing number of unseen tasks. We provide a concrete instantiation of this idea using a combination of: (1) the Forget-Me-Not Process, for task segmentation and memory based ensembling; and (2) Gated Linear Networks, which in contrast to contemporary deep learning techniques use a modular and local learning mechanism. We demonstrate that this system exhibits a number of desirable continual learning properties: robustness to catastrophic forgetting, no negative transfer and increasing levels of positive transfer as more tasks are seen. We show competitive performance against both offline and online methods on standard continual learning benchmarks.
Hardness of Learning Neural Networks with Natural Weights
https://papers.nips.cc/paper_files/paper/2020/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html
Amit Daniely, Gal Vardi
https://papers.nips.cc/paper_files/paper/2020/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9803-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-Supplemental.pdf
Neural networks are nowadays highly successful despite strong hardness results. The existing hardness results focus on the network architecture, and assume that the network's weights are arbitrary. A natural approach to settle the discrepancy is to assume that the network's weights are ``well-behaved'' and possess some generic properties that may allow efficient learning. This approach is supported by the intuition that the weights in real-world networks are not arbitrary, but exhibit some ``random-like'' properties with respect to some ``natural'' distributions. We prove negative results in this regard, and show that for depth-$2$ networks, and many ``natural'' weight distributions such as the normal and the uniform distribution, most networks are hard to learn. Namely, there is no efficient learning algorithm that is provably successful for most weights, and every input distribution. It implies that there is no generic property that holds with high probability in such random networks and allows efficient learning.
Higher-Order Spectral Clustering of Directed Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/0a5052334511e344f15ae0bfafd47a67-Abstract.html
Steinar Laenen, He Sun
https://papers.nips.cc/paper_files/paper/2020/hash/0a5052334511e344f15ae0bfafd47a67-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9804-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-Supplemental.pdf
Clustering is an important topic in algorithms, and has a number of applications in machine learning, computer vision, statistics, and several other research disciplines. Traditional objectives of graph clustering are to find clusters with low conductance. Not only are these objectives applicable only to undirected graphs, they are also incapable of taking the relationships between clusters into account, which could be crucial for many applications. To overcome these downsides, we study directed graphs (digraphs) whose clusters exhibit further “structural” information amongst each other. Based on the Hermitian matrix representation of digraphs, we present a nearly-linear time algorithm for digraph clustering, and further show that our proposed algorithm can be implemented in sublinear time under reasonable assumptions. The significance of our theoretical work is demonstrated by extensive experimental results on the UN Comtrade Dataset: the output clustering of our algorithm exhibits not only how the clusters (sets of countries) relate to each other with respect to their import and export records, but also how these clusters evolve over time, in accordance with known facts in international trade.
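A hedged sketch of the Hermitian representation on a toy planted digraph: each directed edge u -> v is encoded as +i (and v -> u as -i), and vertices are grouped by the phase of their entries in a leading eigenvector. The planted graph and the crude one-dimensional phase split are illustrative assumptions; the algorithm in the paper is considerably more careful and faster.

```python
# Hedged sketch of Hermitian spectral clustering of a digraph with two planted clusters
# whose edges flow from cluster 0 to cluster 1.
import numpy as np

rng = np.random.default_rng(0)
n = 40
truth = np.array([0] * 20 + [1] * 20)

A = np.zeros((n, n), dtype=complex)
for u in range(n):
    for v in range(n):
        if truth[u] == 0 and truth[v] == 1 and rng.random() < 0.5:
            A[u, v], A[v, u] = 1j, -1j           # Hermitian encoding of the directed edge u -> v

evals, evecs = np.linalg.eigh(A)                 # A is Hermitian, so eigh applies
top = evecs[:, np.argmax(np.abs(evals))]         # eigenvector of the largest-magnitude eigenvalue
feature = np.angle(top * np.exp(-1j * np.angle(top[0])))   # phases relative to vertex 0
labels = (np.abs(feature) > np.pi / 4).astype(int)          # crude 1D split on the phase
print("agreement with planted clusters:", max((labels == truth).mean(), (labels != truth).mean()))
```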
Primal-Dual Mesh Convolutional Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0a656cc19f3f5b41530182a9e03982a4-Abstract.html
Francesco Milano, Antonio Loquercio, Antoni Rosinol, Davide Scaramuzza, Luca Carlone
https://papers.nips.cc/paper_files/paper/2020/hash/0a656cc19f3f5b41530182a9e03982a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9805-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-Supplemental.pdf
Recent works in geometric deep learning have introduced neural networks that allow performing inference tasks on three-dimensional geometric data by defining convolution --and sometimes pooling-- operations on triangle meshes. These methods, however, either consider the input mesh as a graph, and do not exploit specific geometric properties of meshes for feature aggregation and downsampling, or are specialized for meshes, but rely on a rigid definition of convolution that does not properly capture the local topology of the mesh. We propose a method that combines the advantages of both types of approaches, while addressing their limitations: we extend a primal-dual framework drawn from the graph-neural-network literature to triangle meshes, and define convolutions on two types of graphs constructed from an input mesh. Our method takes features for both edges and faces of a 3D mesh as input, and dynamically aggregates them using an attention mechanism. At the same time, we introduce a pooling operation with a precise geometric interpretation, that allows handling variations in the mesh connectivity by clustering mesh faces in a task-driven fashion. We provide theoretical insights of our approach using tools from the mesh-simplification literature. In addition, we validate experimentally our method in the tasks of shape classification and shape segmentation, where we obtain comparable or superior performance to the state of the art.
The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning
https://papers.nips.cc/paper_files/paper/2020/hash/0a716fe8c7745e51a3185fc8be6ca23a-Abstract.html
Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto
https://papers.nips.cc/paper_files/paper/2020/hash/0a716fe8c7745e51a3185fc8be6ca23a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9806-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-Supplemental.zip
Biased regularization and fine tuning are two recent meta-learning approaches. They have been shown to be effective at tackling distributions of tasks in which the tasks’ target vectors are all close to a common meta-parameter vector. However, these methods may perform poorly on heterogeneous environments of tasks, where the complexity of the tasks’ distribution cannot be captured by a single meta-parameter vector. We address this limitation by conditional meta-learning, inferring a conditioning function mapping a task’s side information into a meta-parameter vector that is appropriate for the task at hand. We characterize properties of the environment under which the conditional approach brings a substantial advantage over standard meta-learning and we highlight examples of environments, such as those with multiple clusters, satisfying these properties. We then propose a convex meta-algorithm providing a comparable advantage also in practice. Numerical experiments confirm our theoretical findings.
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0a73de68f10e15626eb98701ecf03adb-Abstract.html
Qing Guo, Felix Juefei-Xu, Xiaofei Xie, Lei Ma, Jian Wang, Bing Yu, Wei Feng, Yang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/0a73de68f10e15626eb98701ecf03adb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9807-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-Supplemental.pdf
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive random-like noise perturbations. While such examples are hardly found in the physical world, the image blurring effect caused by object motion, on the other hand, commonly occurs in practice, making its study especially important for widely adopted real-time image processing tasks (e.g., object detection, tracking). In this paper, we take the first step to comprehensively investigate the potential hazards that motion-induced blur poses to DNNs. We propose a novel adversarial attack method that can generate visually natural motion-blurred adversarial examples, named motion-based adversarial blur attack (ABBA). To this end, we first formulate the kernel-prediction-based attack, where an input image is convolved with kernels in a pixel-wise way and the misclassification capability is achieved by tuning the kernel weights. To generate visually more natural and plausible examples, we further propose saliency-regularized adversarial kernel prediction, where the salient region serves as a moving object and the predicted kernel is regularized to achieve natural visual effects. Besides, the attack is further enhanced by adaptively tuning the translations of object and background. A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA under various kernel sizes, translations, and regions. The in-depth study further confirms that our method penetrates state-of-the-art GAN-based deblurring mechanisms more effectively than other blurring methods. We release the code at \url{https://github.com/tsingqguo/ABBA}.
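The kernel-prediction formulation mentioned above amounts to writing each output pixel as a weighted combination of shifted copies of the input, with per-pixel kernel weights as the parameters an attacker would tune. The shifts, kernel size, and toy image below are illustrative assumptions; the saliency regularization and object/background translations are not reproduced.

```python
# Minimal sketch of pixel-wise (kernel-prediction) motion blur: each output pixel is a weighted
# combination of horizontally shifted copies of the input image.
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 5                               # image size and (linear) kernel size
image = rng.random((H, W))

# K shifted copies of the image along a horizontal motion direction.
shifted = np.stack([np.roll(image, shift=s, axis=1) for s in range(K)])   # (K, H, W)

# Per-pixel kernel weights, normalized to sum to 1 at every pixel (uniform = plain blur here).
logits = np.zeros((K, H, W))                      # these logits are the attack parameters to tune
weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

blurred = (weights * shifted).sum(axis=0)         # pixel-wise convolution with the predicted kernels
print("mean absolute change introduced by the blur:", np.abs(blurred - image).mean())
```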
Sinkhorn Barycenter via Functional Gradient Descent
https://papers.nips.cc/paper_files/paper/2020/hash/0a93091da5efb0d9d5649e7f6b2ad9d7-Abstract.html
Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
https://papers.nips.cc/paper_files/paper/2020/hash/0a93091da5efb0d9d5649e7f6b2ad9d7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9808-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-Supplemental.pdf
In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence. This problem has recently found applications across various domains, including graphics, learning, and vision, as it provides a meaningful mechanism to aggregate knowledge. Unlike previous approaches which directly operate in the space of probability measures, we recast the Sinkhorn barycenter problem as an instance of unconstrained functional optimization and develop a novel functional gradient descent method named \texttt{Sinkhorn Descent} (\texttt{SD}). We prove that \texttt{SD} converges to a stationary point at a sublinear rate, and under reasonable assumptions, we further show that it asymptotically finds a global minimizer of the Sinkhorn barycenter problem. Moreover, by providing a mean-field analysis, we show that \texttt{SD} preserves the {weak convergence} of empirical measures. Importantly, the computational complexity of \texttt{SD} scales linearly in the dimension $d$ and we demonstrate its scalability by solving a $100$-dimensional Sinkhorn barycenter problem.
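For reference, the primitive underlying the barycenter objective is entropic optimal transport evaluated with Sinkhorn-Knopp scaling iterations, sketched below for two point clouds. The clouds, regularization strength, and iteration count are illustrative; the paper's particle-based functional gradient descent is not reproduced here.

```python
# Minimal sketch of entropic optimal transport between two weighted point clouds via
# Sinkhorn-Knopp scaling iterations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, (50, 2))              # support of the first measure
y = rng.normal(2.0, 1.0, (60, 2))              # support of the second measure
a = np.full(50, 1.0 / 50)                      # uniform weights
b = np.full(60, 1.0 / 60)

C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost matrix
eps = 0.5                                            # entropic regularization strength
K = np.exp(-C / eps)

u, v = np.ones(50), np.ones(60)
for _ in range(300):                           # Sinkhorn-Knopp scaling iterations
    u = a / (K @ v)
    v = b / (K.T @ u)

plan = u[:, None] * K * v[None, :]             # entropic optimal transport plan
print("entropic OT cost:", float((plan * C).sum()))
```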
Coresets for Near-Convex Functions
https://papers.nips.cc/paper_files/paper/2020/hash/0afe095e81a6ac76ff3f69975cb3e7ae-Abstract.html
Murad Tukan, Alaa Maalouf, Dan Feldman
https://papers.nips.cc/paper_files/paper/2020/hash/0afe095e81a6ac76ff3f69975cb3e7ae-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9809-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-Supplemental.pdf
A coreset is usually a small weighted subset of $n$ input points in $\mathbb{R}^d$ that provably approximates their loss function for a given set of queries (models, classifiers, etc.). Coresets are becoming increasingly common in machine learning since existing heuristics or inefficient algorithms may be improved by running them possibly many times on the small coreset that can be maintained for streaming distributed data. Coresets can be obtained by sensitivity (importance) sampling, where the coreset size is proportional to the total sum of sensitivities. Unfortunately, computing the sensitivity of each point is problem dependent and may be harder than solving the original optimization problem at hand. We suggest a generic framework for computing sensitivities (and thus coresets) for a wide family of loss functions, which we call near-convex functions. This is done by suggesting the $f$-SVD factorization that generalizes the SVD factorization of matrices to functions. Example applications include coresets that are either new or significantly improve previous results, such as for SVM, logistic regression, M-estimators, and $\ell_z$-regression. Experimental results and open source code are also provided.
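A small sketch of sensitivity (importance) sampling for the 1-mean cost may make the pipeline concrete: points are sampled proportionally to a sensitivity bound and reweighted so the coreset cost is an unbiased estimate of the full cost. The particular bound used below is a standard choice for this specific loss and stands in for the general near-convex machinery of the paper.

```python
# Hedged sketch of sensitivity sampling for cost(q) = sum_i |x_i - q|^2 (the 1-mean problem).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = np.concatenate([rng.normal(0, 1, n - 50), rng.normal(200, 1, 50)])   # a few far-away points

mu = x.mean()
sens = 1.0 / n + (x - mu) ** 2 / np.sum((x - mu) ** 2)     # per-point sensitivity (importance)
prob = sens / sens.sum()

m = 500                                                     # coreset size
idx = rng.choice(n, size=m, replace=True, p=prob)
weights = 1.0 / (m * prob[idx])                             # unbiased importance weights

for q in (0.0, 10.0, 200.0):                                # a few queries (candidate centers)
    full = np.sum((x - q) ** 2)
    core = np.sum(weights * (x[idx] - q) ** 2)
    print(f"q={q:6.1f}: relative error of the coreset cost = {abs(core - full) / full:.4f}")
```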
Bayesian Deep Ensembles via the Neural Tangent Kernel
https://papers.nips.cc/paper_files/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html
Bobby He, Balaji Lakshminarayanan, Yee Whye Teh
https://papers.nips.cc/paper_files/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9810-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-Supplemental.pdf
We explore the link between deep ensembles and Gaussian processes (GPs) through the lens of the Neural Tangent Kernel (NTK): a recent development in understanding the training dynamics of wide neural networks (NNs). Previous work has shown that even in the infinite width limit, when NNs become GPs, there is no GP posterior interpretation to a deep ensemble trained with squared error loss. We introduce a simple modification to standard deep ensembles training, through addition of a computationally-tractable, randomised and untrainable function to each ensemble member, that enables a posterior interpretation in the infinite width limit. When ensembled together, our trained NNs give an approximation to a posterior predictive distribution, and we prove that our Bayesian deep ensembles make more conservative predictions than standard deep ensembles in the infinite width limit. Finally, using finite width NNs we demonstrate that our Bayesian deep ensembles faithfully emulate the analytic posterior predictive when available, and outperform standard deep ensembles in various out-of-distribution settings, for both regression and classification tasks.
Improved Schemes for Episodic Memory-based Lifelong Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0b5e29aa1acf8bdc5d8935d7036fa4f5-Abstract.html
Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing
https://papers.nips.cc/paper_files/paper/2020/hash/0b5e29aa1acf8bdc5d8935d7036fa4f5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9811-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-Supplemental.zip
Current deep neural networks can achieve remarkable performance on a single task. However, when the deep neural network is continually trained on a sequence of tasks, it seems to gradually forget the previously learned knowledge. This phenomenon is referred to as catastrophic forgetting and motivates the field called lifelong learning. Recently, episodic memory based approaches such as GEM and A-GEM have shown remarkable performance. In this paper, we provide the first unified view of episodic memory based approaches from an optimization perspective. This view leads to two improved schemes for episodic memory based lifelong learning, called MEGA-I and MEGA-II. MEGA-I and MEGA-II modulate the balance between old tasks and the new task by integrating the current gradient with the gradient computed on the episodic memory. Notably, we show that GEM and A-GEM are degenerate cases of MEGA-I and MEGA-II that consistently put the same emphasis on the current task, regardless of how the loss changes over time. Our proposed schemes address this issue by using novel loss-balancing updating rules, which drastically improve the performance over GEM and A-GEM. Extensive experimental results show that the proposed schemes significantly advance the state-of-the-art on four commonly used lifelong learning benchmarks, reducing the error by up to 18%.
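The contrast drawn above is between updates that treat the memory gradient with a fixed rule and updates that balance the two gradients according to the current losses. The sketch below shows the well-known A-GEM projection alongside a loss-balanced mix; the exact weighting in the latter is an illustrative stand-in, not the paper's precise rule.

```python
# Sketch: A-GEM style gradient projection vs. a loss-balanced mix of the current-task gradient
# and the episodic-memory gradient.
import numpy as np

def agem_update(g_new, g_mem):
    """A-GEM: project the new gradient so it does not increase the memory loss."""
    dot = g_new @ g_mem
    if dot < 0:
        g_new = g_new - (dot / (g_mem @ g_mem)) * g_mem
    return g_new

def balanced_update(g_new, g_mem, loss_new, loss_mem):
    """Illustrative loss-balanced mix: emphasize whichever objective currently has the larger loss."""
    total = loss_new + loss_mem + 1e-12
    return (loss_new / total) * g_new + (loss_mem / total) * g_mem

g_new = np.array([1.0, 0.0])
g_mem = np.array([-1.0, 1.0])
print("A-GEM update:   ", agem_update(g_new, g_mem))
print("balanced update:", balanced_update(g_new, g_mem, loss_new=0.2, loss_mem=1.0))
```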
Adaptive Sampling for Stochastic Risk-Averse Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0b6ace9e8971cf36f1782aa982a708db-Abstract.html
Sebastian Curi, Kfir Y. Levy, Stefanie Jegelka, Andreas Krause
https://papers.nips.cc/paper_files/paper/2020/hash/0b6ace9e8971cf36f1782aa982a708db-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9812-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-Supplemental.pdf
In high-stakes machine learning applications, it is crucial to not only perform well {\em on average}, but also when restricted to {\em difficult} examples. To address this, we consider the problem of training models in a risk-averse manner. We propose an adaptive sampling algorithm for stochastically optimizing the {\em Conditional Value-at-Risk (CVaR)} of a loss distribution, which measures its performance on the $\alpha$ fraction of most difficult examples. We use a distributionally robust formulation of the CVaR to phrase the problem as a zero-sum game between two players, and solve it efficiently using regret minimization. Our approach relies on sampling from structured Determinantal Point Processes (DPPs), which enables scaling it to large data sets. Finally, we empirically demonstrate its effectiveness on large-scale convex and non-convex learning tasks.
Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring
https://papers.nips.cc/paper_files/paper/2020/hash/0b8aff0438617c055eb55f0ba5d226fa-Abstract.html
Jiangxin Dong, Stefan Roth, Bernt Schiele
https://papers.nips.cc/paper_files/paper/2020/hash/0b8aff0438617c055eb55f0ba5d226fa-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9813-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Supplemental.pdf
We present a simple and effective approach for non-blind image deblurring, combining classical techniques and deep learning. In contrast to existing methods that deblur the image directly in the standard image space, we propose to perform an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features. A multi-scale feature refinement module then predicts the deblurred image from the deconvolved deep features, progressively recovering detail and small-scale structures. The proposed model is trained in an end-to-end manner and evaluated on scenarios with both simulated and real-world image blur. Our extensive experimental results show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts. Moreover, our approach quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
Discovering Reinforcement Learning Algorithms
https://papers.nips.cc/paper_files/paper/2020/hash/0b96d81f0494fde5428c7aea243c9157-Abstract.html
Junhyuk Oh, Matteo Hessel, Wojciech M. Czarnecki, Zhongwen Xu, Hado P. van Hasselt, Satinder Singh, David Silver
https://papers.nips.cc/paper_files/paper/2020/hash/0b96d81f0494fde5428c7aea243c9157-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9814-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Supplemental.pdf
Reinforcement learning (RL) algorithms update an agent’s parameters according to one of several possible rules, discovered manually through years of research. Automating the discovery of update rules from data could lead to more efficient algorithms, or algorithms that are better adapted to specific environments. Although there have been prior attempts at addressing this significant scientific challenge, it remains an open question whether it is feasible to discover alternatives to fundamental concepts of RL such as value functions and temporal-difference learning. This paper introduces a new meta-learning approach that discovers an entire update rule which includes both 'what to predict' (e.g. value functions) and 'how to learn from it' (e.g. bootstrapping) by interacting with a set of environments. The output of this method is an RL algorithm that we call Learned Policy Gradient (LPG). Empirical results show that our method discovers its own alternative to the concept of value functions. Furthermore it discovers a bootstrapping mechanism to maintain and use its predictions. Surprisingly, when trained solely on toy environments, LPG generalises effectively to complex Atari games and achieves non-trivial performance. This shows the potential to discover general RL algorithms from data.
Taming Discrete Integration via the Boon of Dimensionality
https://papers.nips.cc/paper_files/paper/2020/hash/0baf163c24ed14b515aaf57a9de5501c-Abstract.html
Jeffrey Dudek, Dror Fried, Kuldeep S Meel
https://papers.nips.cc/paper_files/paper/2020/hash/0baf163c24ed14b515aaf57a9de5501c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9815-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-Supplemental.pdf
Building on the promising approach proposed by Chakraborty et al., our work overcomes the key weakness of their approach: a restriction to dyadic weights. We augment our proposed reduction, called DeWeight, with a state-of-the-art efficient approximate model counter and perform detailed empirical analysis over benchmarks arising from neural network verification domains, an emerging application area of critical importance. DeWeight, to the best of our knowledge, is the first technique to compute estimates with provable guarantees for this class of benchmarks.

Neural Information Processing Systems NeurIPS 2020 Accepted Paper Meta Info Dataset

This dataset is collected from the NeurIPS 2020 (Advances in Neural Information Processing Systems 33) accepted papers page (https://papers.nips.cc/paper_files/paper/2020) as well as the DeepNLP paper index (http://www.deepnlp.org/content/paper/nips2020). Researchers interested in analyzing NIPS 2020 accepted papers and potential research trends can use the cleaned-up JSON file in this dataset; each row contains the meta information of one accepted paper. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, see the Equation Search Engine (http://www.deepnlp.org/search/equation) and the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
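
As a minimal sketch of reading the records with Python's standard library: the file name nips2020_papers.json is only an assumption, so point the path at the JSON file actually included in this dataset. The loader below handles either a single JSON array or a JSON-lines layout, since the exact serialization is not specified here.

import json

# Hypothetical file name; replace it with the JSON file shipped in this dataset.
DATASET_PATH = "nips2020_papers.json"

def load_papers(path):
    """Load paper records from either a JSON array or a JSON-lines file."""
    with open(path, "r", encoding="utf-8") as f:
        text = f.read().strip()
    if text.startswith("["):
        # Whole file is one JSON array of records.
        return json.loads(text)
    # Otherwise treat each non-empty line as one JSON record.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

papers = load_papers(DATASET_PATH)
print(len(papers), "papers loaded")
print(papers[0]["title"], "-", papers[0]["authors"])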

Meta Information of the JSON File

{
    "title": "A graph similarity for deep learning",
    "url": "https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html",
    "authors": "Seongmin Ok",
    "detail_url": "https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html",
    "tags": "NIPS 2020",
    "AuthorFeedback": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-AuthorFeedback.pdf",
    "Bibtex": "https://papers.nips.cc/paper_files/paper/9725-/bibtex",
    "MetaReview": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-MetaReview.html",
    "Paper": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Paper.pdf",
    "Review": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Review.html",
    "Supplemental": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Supplemental.pdf",
    "abstract": "Graph neural networks (GNNs) have been successful in learning representations from graphs. Many popular GNNs follow the pattern of aggregate-transform: they aggregate the neighbors' attributes and then transform the results of aggregation with a learnable function. Analyses of these GNNs explain which pairs of non-identical graphs have different representations. However, we still lack an understanding of how similar these representations will be. We adopt kernel distance and propose transform-sum-cat as an alternative to aggregate-transform to reflect the continuous similarity between the node neighborhoods in the neighborhood aggregation. The idea leads to a simple and efficient graph similarity, which we name Weisfeiler-Leman similarity (WLS). In contrast to existing graph kernels, WLS is easy to implement with common deep learning frameworks. In graph classification experiments, transform-sum-cat significantly outperforms other neighborhood aggregation methods from popular GNN models. We also develop a simple and fast GNN model based on transform-sum-cat, which obtains, in comparison with widely used GNN models, (1) a higher accuracy in node classification, (2) a lower absolute error in graph regression, and (3) greater stability in adversarial training of graph generation."
}
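
Building on the fields shown above, a quick keyword count over the abstracts gives a rough view of research trends. This is a sketch only: it assumes the variable papers is the list of record dictionaries loaded earlier, and the keyword list is arbitrary and purely for illustration.

from collections import Counter

# Assumes `papers` is the list of record dictionaries loaded above.
keywords = ["graph", "reinforcement", "bayesian", "adversarial", "kernel"]
counts = Counter()
for paper in papers:
    abstract = paper.get("abstract", "").lower()
    for keyword in keywords:
        if keyword in abstract:
            counts[keyword] += 1

for keyword, n in counts.most_common():
    print(f"{keyword}: {n} papers mention this term")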

Related

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic Search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex
