Schema of the records below (field name, type, and value-length statistics from the dataset viewer):

  title           string, 13–150 characters
  url             string, 97 characters
  authors         string, 8–467 characters
  detail_url      string, 97 characters
  tags            string, 1 distinct value
  AuthorFeedback  string, 102 characters
  Bibtex          string, 53–54 characters
  MetaReview      string, 99 characters
  Paper           string, 93 characters
  Review          string, 95 characters
  Supplemental    string, 100 characters
  abstract        string, 53–2k characters
A novel variational form of the Schatten-$p$ quasi-norm
https://papers.nips.cc/paper_files/paper/2020/hash/f53eb4122d5e2ce81a12093f8f9ce922-Abstract.html
Paris Giampouras, Rene Vidal, Athanasios Rontogiannis, Benjamin Haeffele
https://papers.nips.cc/paper_files/paper/2020/hash/f53eb4122d5e2ce81a12093f8f9ce922-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f53eb4122d5e2ce81a12093f8f9ce922-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11525-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f53eb4122d5e2ce81a12093f8f9ce922-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f53eb4122d5e2ce81a12093f8f9ce922-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f53eb4122d5e2ce81a12093f8f9ce922-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f53eb4122d5e2ce81a12093f8f9ce922-Supplemental.pdf
The Schatten-$p$ quasi-norm with $p\in(0,1)$ has recently gained considerable attention in various low-rank matrix estimation problems, offering significant benefits over convex heuristics such as the nuclear norm. However, due to the nonconvexity of the Schatten-$p$ quasi-norm, its minimization suffers from two major drawbacks: 1) the lack of theoretical guarantees and 2) the high computational cost demanded even for trivial tasks such as finding stationary points. In an attempt to reduce the high computational cost induced by Schatten-$p$ quasi-norm minimization, variational forms, which are defined over smaller-size matrix factors whose product equals the original matrix, have been proposed. Here, we propose and analyze a novel {\it variational form of the Schatten-$p$ quasi-norm} which, for the first time in the literature, is defined for any continuous value of $p\in(0,1]$ and decouples along the columns of the factorized matrices. The proposed form can be considered the natural generalization of the well-known variational form of the nuclear norm to the nonconvex case, i.e., for $p\in(0,1)$. Notably, low-rankness is now imposed via a group-sparsity promoting regularizer. The resulting formulation gives rise to SVD-free algorithms, thus offering lower computational complexity than the one induced by the original definition of the Schatten-$p$ quasi-norm. A local optimality analysis is provided which shows that we can arrive at a local minimum of the original Schatten-$p$ quasi-norm problem by reaching a local minimum of the matrix-factorization-based surrogate problem. In addition, for the case of the squared Frobenius loss with linear operators obeying the restricted isometry property (RIP), a rank-one update scheme is proposed, which offers a way to escape poor local minima. Finally, the efficiency of our approach is empirically shown on a matrix completion problem.
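For context, the $p = 1$ case referred to above is the classical variational form of the nuclear norm, sketched below; the exact column-decoupled form the paper proposes for $p < 1$ is not reproduced here.

```latex
% Standard variational form of the nuclear norm (the p = 1 case):
% both identities are classical facts; u_i, v_i denote the columns of U and V.
\[
\|X\|_* \;=\; \min_{U V^\top = X} \tfrac{1}{2}\left(\|U\|_F^2 + \|V\|_F^2\right)
\;=\; \min_{U V^\top = X} \sum_{i} \|u_i\|_2 \,\|v_i\|_2 .
\]
% The paper's variational form generalizes the column-wise sum on the right to a
% group-sparsity-promoting penalty that decouples over columns for any p in (0, 1].
```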
Energy-based Out-of-distribution Detection
https://papers.nips.cc/paper_files/paper/2020/hash/f5496252609c43eb8a3d147ab9b9c006-Abstract.html
Weitang Liu, Xiaoyun Wang, John Owens, Yixuan Li
https://papers.nips.cc/paper_files/paper/2020/hash/f5496252609c43eb8a3d147ab9b9c006-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11526-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-Supplemental.pdf
Determining whether inputs are out-of-distribution (OOD) is an essential building block for safely deploying machine learning models in the open world. However, previous methods relying on the softmax confidence score suffer from overconfident posterior distributions for OOD data. We propose a unified framework for OOD detection that uses an energy score. We show that energy scores better distinguish in- and out-of-distribution samples than the traditional approach using the softmax scores. Unlike softmax confidence scores, energy scores are theoretically aligned with the probability density of the inputs and are less susceptible to the overconfidence issue. Within this framework, energy can be flexibly used as a scoring function for any pre-trained neural classifier as well as a trainable cost function to shape the energy surface explicitly for OOD detection. On a CIFAR-10 pre-trained WideResNet, using the energy score reduces the average FPR (at TPR 95%) by 18.03% compared to the softmax confidence score. With energy-based training, our method outperforms the state-of-the-art on common benchmarks.
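As a rough illustration of the scoring rule described above, a minimal sketch (not the authors' code) of computing an energy score from classifier logits in PyTorch; the temperature and threshold values below are hypothetical.

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Energy score E(x) = -T * logsumexp(f(x) / T) over the logits of a classifier.

    Lower energy indicates in-distribution data; higher energy suggests OOD.
    """
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# Usage sketch: flag inputs whose energy exceeds a threshold tau chosen on a
# validation set (tau here is purely illustrative).
logits = torch.randn(4, 10)   # stand-in for classifier outputs on 4 inputs
tau = 0.0
is_ood = energy_score(logits) > tau
```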
On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
https://papers.nips.cc/paper_files/paper/2020/hash/f56d8183992b6c54c92c16a8519a6e2b-Abstract.html
Chen Liu, Mathieu Salzmann, Tao Lin, Ryota Tomioka, Sabine Süsstrunk
https://papers.nips.cc/paper_files/paper/2020/hash/f56d8183992b6c54c92c16a8519a6e2b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f56d8183992b6c54c92c16a8519a6e2b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11527-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f56d8183992b6c54c92c16a8519a6e2b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f56d8183992b6c54c92c16a8519a6e2b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f56d8183992b6c54c92c16a8519a6e2b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f56d8183992b6c54c92c16a8519a6e2b-Supplemental.pdf
We analyze the influence of adversarial training on the loss landscape of machine learning models. To this end, we first provide analytical studies of the properties of adversarial loss functions under different adversarial budgets. We then demonstrate that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients. Our conclusions are validated by numerical analyses, which show that training under large adversarial budgets impedes the escape from suboptimal random initializations, causes non-vanishing gradients and makes the minima found sharper. Based on these observations, we show that a periodic adversarial scheduling (PAS) strategy can effectively overcome these challenges, yielding better results than vanilla adversarial training while being much less sensitive to the choice of learning rate.
User-Dependent Neural Sequence Models for Continuous-Time Event Data
https://papers.nips.cc/paper_files/paper/2020/hash/f56de5ef149cf0aedcc8f4797031e229-Abstract.html
Alex Boyd, Robert Bamler, Stephan Mandt, Padhraic Smyth
https://papers.nips.cc/paper_files/paper/2020/hash/f56de5ef149cf0aedcc8f4797031e229-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f56de5ef149cf0aedcc8f4797031e229-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11528-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f56de5ef149cf0aedcc8f4797031e229-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f56de5ef149cf0aedcc8f4797031e229-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f56de5ef149cf0aedcc8f4797031e229-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f56de5ef149cf0aedcc8f4797031e229-Supplemental.pdf
Continuous-time event data are common in applications such as individual behavior data, financial transactions, and medical health records. Modeling such data can be very challenging, in particular for applications with many different types of events, since it requires a model to predict the event types as well as the time of occurrence. Recurrent neural networks that parameterize time-varying intensity functions are the current state-of-the-art for predictive modeling with such data. These models typically assume that all event sequences come from the same data distribution. However, in many applications event sequences are generated by different sources, or users, and their characteristics can be very different. In this paper, we extend the broad class of neural marked point process models to mixtures of latent embeddings, where each mixture component models the characteristic traits of a given user. Our approach relies on augmenting these models with a latent variable that encodes user characteristics, represented by a mixture model over user behavior that is trained via amortized variational inference. We evaluate our methods on four large real-world datasets and demonstrate systematic improvements from our approach over existing work for a variety of predictive metrics such as log-likelihood, next event ranking, and source-of-sequence identification.
Active Structure Learning of Causal DAGs via Directed Clique Trees
https://papers.nips.cc/paper_files/paper/2020/hash/f57bd0a58e953e5c43cd4a4e5af46138-Abstract.html
Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam
https://papers.nips.cc/paper_files/paper/2020/hash/f57bd0a58e953e5c43cd4a4e5af46138-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f57bd0a58e953e5c43cd4a4e5af46138-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11529-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f57bd0a58e953e5c43cd4a4e5af46138-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f57bd0a58e953e5c43cd4a4e5af46138-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f57bd0a58e953e5c43cd4a4e5af46138-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f57bd0a58e953e5c43cd4a4e5af46138-Supplemental.zip
A growing body of work has begun to study intervention design for efficient structure learning of causal directed acyclic graphs (DAGs). A typical setting is a \emph{causally sufficient} setting, i.e. a system with no latent confounders, selection bias, or feedback, when the essential graph of the observational equivalence class (EC) is given as an input and interventions are assumed to be noiseless. Most existing works focus on \textit{worst-case} or \textit{average-case} lower bounds for the number of interventions required to orient a DAG. These worst-case lower bounds only establish that the largest clique in the essential graph \textit{could} make it difficult to learn the true DAG. In this work, we develop a \textit{universal} lower bound for single-node interventions that establishes that the largest clique is \textit{always} a fundamental impediment to structure learning. Specifically, we present a decomposition of a DAG into independently orientable components through \emph{directed clique trees} and use it to prove that the number of single-node interventions necessary to orient any DAG in an EC is at least the sum of half the size of the largest cliques in each chain component of the essential graph. Moreover, we present a two-phase intervention design algorithm that, under certain conditions on the chordal skeleton, matches the optimal number of interventions up to a multiplicative logarithmic factor in the number of maximal cliques. We show via synthetic experiments that our algorithm can scale to much larger graphs than most of the related work and achieves better worst-case performance than other scalable approaches. A code base to recreate these results can be found at \url{https://github.com/csquires/dct-policy}.
Convergence and Stability of Graph Convolutional Networks on Large Random Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/f5a14d4963acf488e3a24780a84ac96c-Abstract.html
Nicolas Keriven, Alberto Bietti, Samuel Vaiter
https://papers.nips.cc/paper_files/paper/2020/hash/f5a14d4963acf488e3a24780a84ac96c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5a14d4963acf488e3a24780a84ac96c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11530-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5a14d4963acf488e3a24780a84ac96c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5a14d4963acf488e3a24780a84ac96c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5a14d4963acf488e3a24780a84ac96c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5a14d4963acf488e3a24780a84ac96c-Supplemental.pdf
We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel. This allows us to overcome the difficulties of dealing with discrete notions such as isomorphisms on very large graphs, by considering instead more natural geometric aspects. We first study the convergence of GCNs to their continuous counterpart as the number of nodes grows. Our results are fully non-asymptotic and are valid for relatively sparse graphs with an average degree that grows logarithmically with the number of nodes. We then analyze the stability of GCNs to small deformations of the random graph model. In contrast to previous studies of stability in discrete settings, our continuous setup allows us to provide more intuitive deformation-based metrics for understanding stability, which have proven useful for explaining the success of convolutional representations on Euclidean domains.
BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/f5b1b89d98b7286673128a5fb112cb9a-Abstract.html
Maximilian Balandat, Brian Karrer, Daniel Jiang, Samuel Daulton, Ben Letham, Andrew G. Wilson, Eytan Bakshy
https://papers.nips.cc/paper_files/paper/2020/hash/f5b1b89d98b7286673128a5fb112cb9a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5b1b89d98b7286673128a5fb112cb9a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11531-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5b1b89d98b7286673128a5fb112cb9a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5b1b89d98b7286673128a5fb112cb9a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5b1b89d98b7286673128a5fb112cb9a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5b1b89d98b7286673128a5fb112cb9a-Supplemental.pdf
Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design. We introduce BoTorch, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques. BoTorch's modular design facilitates flexible specification and optimization of probabilistic models written in PyTorch, simplifying implementation of new acquisition functions. Our approach is backed by novel theoretical convergence results and made practical by a distinctive algorithmic foundation that leverages fast predictive distributions, hardware acceleration, and deterministic optimization. We also propose a novel "one-shot" formulation of the Knowledge Gradient, enabled by a combination of our theoretical and software contributions. In experiments, we demonstrate the improved sample efficiency of BoTorch relative to other popular libraries.
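A minimal usage sketch of the kind of Monte-Carlo acquisition loop the framework supports, assuming the BoTorch/GPyTorch APIs as they stood around the time of this paper (function and module names may differ in other versions); the toy objective and search bounds are hypothetical.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data: 10 random points of a 2-d problem cast as maximization.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = -((train_X - 0.5) ** 2).sum(dim=-1, keepdim=True)

gp = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)  # may be named fit_gpytorch_mll in newer releases

# Monte-Carlo acquisition function (qEI) optimized over the unit box.
acqf = qExpectedImprovement(model=gp, best_f=train_Y.max())
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidates, _ = optimize_acqf(acqf, bounds=bounds, q=2, num_restarts=5, raw_samples=64)
print(candidates)  # next batch of 2 points to evaluate
```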
Reconsidering Generative Objectives For Counterfactual Reasoning
https://papers.nips.cc/paper_files/paper/2020/hash/f5cfbc876972bd0d031c8abc37344c28-Abstract.html
Danni Lu, Chenyang Tao, Junya Chen, Fan Li, Feng Guo, Lawrence Carin
https://papers.nips.cc/paper_files/paper/2020/hash/f5cfbc876972bd0d031c8abc37344c28-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5cfbc876972bd0d031c8abc37344c28-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11532-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5cfbc876972bd0d031c8abc37344c28-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5cfbc876972bd0d031c8abc37344c28-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5cfbc876972bd0d031c8abc37344c28-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5cfbc876972bd0d031c8abc37344c28-Supplemental.pdf
There has been recent interest in exploring generative goals for counterfactual reasoning, such as individualized treatment effect (ITE) estimation. However, existing solutions often fail to address issues that are unique to causal inference, such as covariate balancing and (infeasible) counterfactual validation. As a step towards more flexible, scalable and accurate ITE estimation, we present a novel generative Bayesian estimation framework that integrates representation learning, adversarial matching and causal estimation. By appealing to the Robinson decomposition, we derive a reformulated variational bound that explicitly targets the causal effect estimation rather than specific predictive goals. Our procedure acknowledges the uncertainties in representation and solves a Fenchel mini-max game to resolve the representation imbalance for better counterfactual generalization, justified by new theory. Further, the latent variable formulation employed enables robustness to unobservable latent confounders, extending the scope of its applicability. The utility of the proposed solution is demonstrated via an extensive set of tests against competing solutions, both under various simulation setups and on real-world datasets, with encouraging results reported.
Robust Federated Learning: The Case of Affine Distribution Shifts
https://papers.nips.cc/paper_files/paper/2020/hash/f5e536083a438cec5b64a4954abc17f1-Abstract.html
Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie
https://papers.nips.cc/paper_files/paper/2020/hash/f5e536083a438cec5b64a4954abc17f1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5e536083a438cec5b64a4954abc17f1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11533-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5e536083a438cec5b64a4954abc17f1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5e536083a438cec5b64a4954abc17f1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5e536083a438cec5b64a4954abc17f1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5e536083a438cec5b64a4954abc17f1-Supplemental.zip
Federated learning is a distributed paradigm that aims to train models using samples distributed across multiple users in a network, while keeping the samples on users' devices for efficiency and to protect users' privacy. In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model. The primary goal of this paper is to develop a robust federated learning algorithm that achieves satisfactory performance against distribution shifts in users' samples. To achieve this goal, we first consider a structured affine distribution shift in users' data that captures the device-dependent data heterogeneity in federated settings. This perturbation model is applicable to various federated learning problems such as image classification where the images undergo device-dependent imperfections, e.g. different intensity, contrast, and brightness. To address affine distribution shifts across users, we propose a Federated Learning framework Robust to Affine distribution shifts (FLRA) that is provably robust against affine Wasserstein shifts to the distribution of observed samples. To solve FLRA's distributed minimax optimization problem, we propose a fast and efficient optimization method and provide convergence and performance guarantees via a Gradient Descent Ascent (GDA) method. We further prove generalization error bounds for the learnt classifier to show proper generalization from the empirical distribution of samples to the true underlying distribution. We perform several numerical experiments to empirically support FLRA. We show that an affine distribution shift indeed suffices to significantly decrease the performance of the learnt classifier on a new test user, and our proposed algorithm achieves a significant gain in comparison to standard federated learning and adversarial training methods.
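To make the perturbation model concrete, a tiny sketch of the kind of affine shift referred to above (a device-dependent linear transform plus offset applied to each user's inputs); the specific shift parameters below are hypothetical.

```python
import numpy as np

def affine_shift(x: np.ndarray, Lam: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Apply an affine distribution shift x -> Lam @ x + delta to a batch of flattened inputs."""
    return x @ Lam.T + delta

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 784))      # a user's local batch (e.g., flattened images)
Lam = np.eye(784) * 0.8             # e.g., reduced contrast/intensity
delta = np.full(784, 0.1)           # e.g., a brightness offset
x_shifted = affine_shift(x, Lam, delta)
```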
Quantile Propagation for Wasserstein-Approximate Gaussian Processes
https://papers.nips.cc/paper_files/paper/2020/hash/f5e62af885293cf4d511ceef31e61c80-Abstract.html
Rui Zhang, Christian Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, Lexing Xie
https://papers.nips.cc/paper_files/paper/2020/hash/f5e62af885293cf4d511ceef31e61c80-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5e62af885293cf4d511ceef31e61c80-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11534-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5e62af885293cf4d511ceef31e61c80-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5e62af885293cf4d511ceef31e61c80-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5e62af885293cf4d511ceef31e61c80-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5e62af885293cf4d511ceef31e61c80-Supplemental.zip
Approximate inference techniques are the cornerstone of probabilistic methods based on Gaussian process priors. Despite this, most work approximately optimizes standard divergence measures such as the Kullback-Leibler (KL) divergence, which lack the basic desiderata for the task at hand, while chiefly offering merely technical convenience. We develop a new approximate inference method for Gaussian process models which overcomes the technical challenges arising from abandoning these convenient divergences. Our method---dubbed Quantile Propagation (QP)---is similar to expectation propagation (EP) but minimizes the $L_2$ Wasserstein distance (WD) instead of the KL divergence. The WD exhibits all the required properties of a distance metric, while respecting the geometry of the underlying sample space. We show that QP matches quantile functions rather than moments as in EP and has the same mean update but a smaller variance update than EP, thereby alleviating EP's tendency to over-estimate posterior variances. Crucially, despite the significant complexity of dealing with the WD, QP has the same favorable locality property as EP, and thereby admits an efficient algorithm. Experiments on classification and Poisson regression show that QP outperforms both EP and variational Bayes.
Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/f5f3b8d720f34ebebceb7765e447268b-Abstract.html
Tianren Zhang, Shangqi Guo, Tian Tan, Xiaolin Hu, Feng Chen
https://papers.nips.cc/paper_files/paper/2020/hash/f5f3b8d720f34ebebceb7765e447268b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f5f3b8d720f34ebebceb7765e447268b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11535-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f5f3b8d720f34ebebceb7765e447268b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f5f3b8d720f34ebebceb7765e447268b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f5f3b8d720f34ebebceb7765e447268b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f5f3b8d720f34ebebceb7765e447268b-Supplemental.pdf
Goal-conditioned hierarchical reinforcement learning (HRL) is a promising approach for scaling up reinforcement learning (RL) techniques. However, it often suffers from training inefficiency as the action space of the high-level, i.e., the goal space, is often large. Searching in a large goal space poses difficulties for both high-level subgoal generation and low-level policy learning. In this paper, we show that this problem can be effectively alleviated by restricting the high-level action space from the whole goal space to a k-step adjacent region of the current state using an adjacency constraint. We theoretically prove that the proposed adjacency constraint preserves the optimal hierarchical policy in deterministic MDPs, and show that this constraint can be practically implemented by training an adjacency network that can discriminate between adjacent and non-adjacent subgoals. Experimental results on discrete and continuous control tasks show that incorporating the adjacency constraint improves the performance of state-of-the-art HRL approaches in both deterministic and stochastic environments.
High-contrast “gaudy” images improve the training of deep neural network models of visual cortex
https://papers.nips.cc/paper_files/paper/2020/hash/f610a13de080fb8df6cf972fc01ad93f-Abstract.html
Benjamin Cowley, Jonathan W. Pillow
https://papers.nips.cc/paper_files/paper/2020/hash/f610a13de080fb8df6cf972fc01ad93f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f610a13de080fb8df6cf972fc01ad93f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11536-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f610a13de080fb8df6cf972fc01ad93f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f610a13de080fb8df6cf972fc01ad93f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f610a13de080fb8df6cf972fc01ad93f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f610a13de080fb8df6cf972fc01ad93f-Supplemental.pdf
A key challenge in understanding the sensory transformations of the visual system is to obtain a highly predictive model that maps natural images to neural responses. Deep neural networks (DNNs) provide a promising candidate for such a model. However, DNNs require orders of magnitude more training data than neuroscientists can collect because experimental recording time is severely limited. This motivates us to find images that train highly-predictive DNNs with as little training data as possible. We propose high-contrast, binarized versions of natural images---termed gaudy images---to efficiently train DNNs to predict higher-order visual cortical responses. In simulation experiments and analyses of real neural data, we find that training DNNs with gaudy images substantially reduces the number of training images needed to accurately predict responses to natural images. We also find that gaudy images, chosen before training, outperform images chosen during training by active learning algorithms. Thus, gaudy images overemphasize features of natural images that are the most important for efficiently training DNNs. We believe gaudy images will aid in the modeling of visual cortical neurons, potentially opening new scientific questions about visual processing.
Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion
https://papers.nips.cc/paper_files/paper/2020/hash/f6185f0ef02dcaec414a3171cd01c697-Abstract.html
Zhanqiu Zhang, Jianyu Cai, Jie Wang
https://papers.nips.cc/paper_files/paper/2020/hash/f6185f0ef02dcaec414a3171cd01c697-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f6185f0ef02dcaec414a3171cd01c697-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11537-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f6185f0ef02dcaec414a3171cd01c697-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f6185f0ef02dcaec414a3171cd01c697-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f6185f0ef02dcaec414a3171cd01c697-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f6185f0ef02dcaec414a3171cd01c697-Supplemental.pdf
Tensor factorization based models have shown great power in knowledge graph completion (KGC). However, their performance usually suffers seriously from overfitting. This has motivated various regularizers, such as the squared Frobenius norm and tensor nuclear norm regularizers, but their limited applicability significantly restricts their practical usage. To address this challenge, we propose a novel regularizer---namely, \textbf{DU}ality-induced \textbf{R}egul\textbf{A}rizer (DURA)---which is not only effective in improving the performance of existing models but is also widely applicable to various methods. The major novelty of DURA is based on the observation that, for an existing tensor factorization based KGC model (\textit{primal}), there is often another distance based KGC model (\textit{dual}) closely associated with it.
Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms
https://papers.nips.cc/paper_files/paper/2020/hash/f629ed9325990b10543ab5946c1362fb-Abstract.html
Xiangyi Chen, Tiancong Chen, Haoran Sun, Steven Z. Wu, Mingyi Hong
https://papers.nips.cc/paper_files/paper/2020/hash/f629ed9325990b10543ab5946c1362fb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f629ed9325990b10543ab5946c1362fb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11538-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f629ed9325990b10543ab5946c1362fb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f629ed9325990b10543ab5946c1362fb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f629ed9325990b10543ab5946c1362fb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f629ed9325990b10543ab5946c1362fb-Supplemental.pdf
Recently, there has been growing interest in the study of median-based algorithms for distributed non-convex optimization. Two prominent examples include signSGD with majority vote, an effective approach for communication reduction via 1-bit compression on the local gradients, and medianSGD, an algorithm recently proposed to ensure robustness against Byzantine workers. The convergence analyses for these algorithms critically rely on the assumption that all the distributed data are drawn iid from the same distribution. However, in applications such as Federated Learning, the data across different nodes or machines can be inherently heterogeneous, which violates such an iid assumption. This work analyzes signSGD and medianSGD in distributed settings with heterogeneous data. We show that these algorithms are non-convergent whenever there is some disparity between the expected median and mean over the local gradients. To overcome this gap, we provide a novel gradient correction mechanism that perturbs the local gradients with noise, which we show can provably close the gap between mean and median of the gradients. The proposed methods largely preserve nice properties of these median-based algorithms, such as the low per-iteration communication complexity of signSGD, and further enjoy global convergence to stationary solutions. Our perturbation technique can be of independent interest when one wishes to estimate the mean through a median estimator.
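For reference, a small sketch of signSGD with majority vote as described above (each worker sends only the signs of its local gradient and the server aggregates by a coordinate-wise vote); the optional noise perturbation is only loosely in the spirit of the correction the paper proposes, whose exact mechanism is not reproduced here.

```python
import numpy as np

def signsgd_majority_step(params, local_grads, lr=0.01, noise_scale=0.0, rng=None):
    """One signSGD-with-majority-vote step over a list of per-worker gradients.

    noise_scale > 0 perturbs local gradients before taking signs; this is purely
    illustrative of the idea of closing the mean/median gap via perturbation.
    """
    rng = rng or np.random.default_rng()
    signs = []
    for g in local_grads:
        if noise_scale > 0:
            g = g + noise_scale * rng.standard_normal(g.shape)
        signs.append(np.sign(g))                 # 1-bit compression per coordinate
    vote = np.sign(np.sum(signs, axis=0))        # coordinate-wise majority vote
    return params - lr * vote
```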
H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks
https://papers.nips.cc/paper_files/paper/2020/hash/f6876a9f998f6472cc26708e27444456-Abstract.html
Thomas Limbacher, Robert Legenstein
https://papers.nips.cc/paper_files/paper/2020/hash/f6876a9f998f6472cc26708e27444456-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f6876a9f998f6472cc26708e27444456-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11539-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f6876a9f998f6472cc26708e27444456-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f6876a9f998f6472cc26708e27444456-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f6876a9f998f6472cc26708e27444456-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f6876a9f998f6472cc26708e27444456-Supplemental.pdf
The ability to base current computations on memories from the past is critical for many cognitive tasks such as story understanding. Hebbian-type synaptic plasticity is believed to underlie the retention of memories over medium and long time scales in the brain. However, it is unclear how such plasticity processes are integrated with computations in cortical networks. Here, we propose Hebbian Memory Networks (H-Mems), a simple neural network model that is built around a core hetero-associative network subject to Hebbian plasticity. We show that the network can be optimized to utilize the Hebbian plasticity processes for its computations. H-Mems can one-shot memorize associations between stimulus pairs and use these associations for decisions later on. Furthermore, they can solve demanding question-answering tasks on synthetic stories. Our study shows that neural network models are able to enrich their computations with memories through simple Hebbian plasticity processes.
Neural Unsigned Distance Fields for Implicit Function Learning
https://papers.nips.cc/paper_files/paper/2020/hash/f69e505b08403ad2298b9f262659929a-Abstract.html
Julian Chibane, Mohamad Aymen Mir, Gerard Pons-Moll
https://papers.nips.cc/paper_files/paper/2020/hash/f69e505b08403ad2298b9f262659929a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f69e505b08403ad2298b9f262659929a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11540-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f69e505b08403ad2298b9f262659929a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f69e505b08403ad2298b9f262659929a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f69e505b08403ad2298b9f262659929a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f69e505b08403ad2298b9f262659929a-Supplemental.pdf
In this work we target a learnable output representation that allows continuous, high resolution outputs of arbitrary shape. Recent works represent 3D surfaces implicitly with a Neural Network, thereby breaking previous barriers in resolution and in the ability to represent diverse topologies. However, neural implicit representations are limited to closed surfaces, which divide the space into inside and outside. Many real world objects such as walls of a scene scanned by a sensor, clothing, or a car with inner structures are not closed. This constitutes a significant barrier, in terms of data pre-processing (objects need to be artificially closed, creating artifacts), and the ability to output open surfaces. In this work, we propose Neural Distance Fields (NDF), a neural network based model which predicts the unsigned distance field for arbitrary 3D shapes given sparse point clouds. NDF represent surfaces at high resolutions as prior implicit models, but do not require closed surface data, and significantly broaden the class of representable shapes in the output. NDF allow the surface to be extracted as very dense point clouds and as meshes. We also show that NDF allow for surface normal calculation and can be rendered using a slight modification of sphere tracing. We find NDF can be used for multi-target regression (multiple outputs for one input) with techniques that have been exclusively used for rendering in graphics. Experiments on ShapeNet show that NDF, while simple, is the state of the art, and allows reconstructing shapes with inner structures, such as the chairs inside a bus. Notably, we show that NDF are not restricted to 3D shapes, and can approximate more general open surfaces such as curves, manifolds, and functions. Code is available for research at https://virtualhumans.mpi-inf.mpg.de/ndf/.
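One way to read the dense point-cloud extraction mentioned above is as gradient-based projection onto the zero level set of the unsigned distance field; a hedged sketch (not the authors' implementation) using autograd follows.

```python
import torch

def project_to_surface(points: torch.Tensor, udf, n_steps: int = 5) -> torch.Tensor:
    """Move query points onto the surface by stepping against the UDF gradient.

    udf: callable mapping (N, 3) points to (N,) unsigned distances (e.g., a trained network).
    Each step moves a point by its predicted distance along the negative normalized gradient.
    """
    p = points.clone()
    for _ in range(n_steps):
        p.requires_grad_(True)
        d = udf(p)
        (grad,) = torch.autograd.grad(d.sum(), p)
        p = (p - d.unsqueeze(-1) * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)).detach()
    return p
```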
Curriculum By Smoothing
https://papers.nips.cc/paper_files/paper/2020/hash/f6a673f09493afcd8b129a0bcf1cd5bc-Abstract.html
Samarth Sinha, Animesh Garg, Hugo Larochelle
https://papers.nips.cc/paper_files/paper/2020/hash/f6a673f09493afcd8b129a0bcf1cd5bc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f6a673f09493afcd8b129a0bcf1cd5bc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11541-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f6a673f09493afcd8b129a0bcf1cd5bc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f6a673f09493afcd8b129a0bcf1cd5bc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f6a673f09493afcd8b129a0bcf1cd5bc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f6a673f09493afcd8b129a0bcf1cd5bc-Supplemental.pdf
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation. Moreover, recent work in Generative Adversarial Networks (GANs) has highlighted the importance of learning by progressively increasing the difficulty of a learning task (Karras et al.). When learning a network from scratch, the information propagated within the network during the earlier stages of training can contain distortion artifacts due to noise, which can be detrimental to training. In this paper, we propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters. We propose to augment the training of CNNs by controlling the amount of high-frequency information propagated within the CNNs as training progresses, by convolving the output of a CNN feature map of each layer with a Gaussian kernel. By decreasing the variance of the Gaussian kernel, we gradually increase the amount of high-frequency information available within the network for inference. As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data. Our proposed augmented training scheme significantly improves the performance of CNNs on various vision tasks without either adding additional trainable parameters or an auxiliary regularization objective. The generality of our method is demonstrated through empirical performance gains in CNN architectures across different tasks: transfer learning, cross-task transfer learning, and generative models.
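A rough sketch of the curriculum described above: blur each feature map with a depthwise Gaussian kernel whose standard deviation is annealed toward zero over training. The kernel size and annealing schedule below are hypothetical.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel2d(sigma: float, ksize: int = 5) -> torch.Tensor:
    """Normalized 2-D Gaussian kernel of side length ksize."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def smooth_feature_map(feat: torch.Tensor, sigma: float) -> torch.Tensor:
    """Depthwise Gaussian blur of a (B, C, H, W) feature map; sigma shrinks as training progresses."""
    if sigma <= 0:
        return feat
    c = feat.shape[1]
    k = gaussian_kernel2d(sigma).to(feat)
    k = k.view(1, 1, *k.shape).repeat(c, 1, 1, 1)          # one identical kernel per channel
    return F.conv2d(feat, k, padding=k.shape[-1] // 2, groups=c)

# Hypothetical annealing schedule: start blurry, end with the raw features.
sigmas = torch.linspace(1.0, 0.0, steps=100)  # e.g., one value per epoch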
Fast Transformers with Clustered Attention
https://papers.nips.cc/paper_files/paper/2020/hash/f6a8dd1c954c8506aadc764cc32b895e-Abstract.html
Apoorv Vyas, Angelos Katharopoulos, François Fleuret
https://papers.nips.cc/paper_files/paper/2020/hash/f6a8dd1c954c8506aadc764cc32b895e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f6a8dd1c954c8506aadc764cc32b895e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11542-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f6a8dd1c954c8506aadc764cc32b895e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f6a8dd1c954c8506aadc764cc32b895e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f6a8dd1c954c8506aadc764cc32b895e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f6a8dd1c954c8506aadc764cc32b895e-Supplemental.pdf
Transformers have been proven a successful model for a variety of tasks in sequence modeling. However, computing the attention matrix, which is their key component, has quadratic complexity with respect to the sequence length, thus making them prohibitively expensive for large sequences. To address this, we propose clustered attention, which instead of computing the attention for every query, groups queries into clusters and computes attention just for the centroids. To further improve this approximation, we use the computed clusters to identify the keys with the highest attention per query and compute the exact key/query dot products. This results in a model with linear complexity with respect to the sequence length for a fixed number of clusters. We evaluate our approach on two automatic speech recognition datasets and show that our model consistently outperforms vanilla transformers for a given computational budget. Finally, we demonstrate that our model can approximate arbitrarily complex attention distributions with a minimal number of clusters by approximating a pretrained BERT model on GLUE and SQuAD benchmarks with only 25 clusters and no loss in performance.
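A compact sketch of the basic idea (the vanilla clustered variant, without the top-k key refinement also described above): group the queries, attend once per centroid, and broadcast each centroid's output to the queries in its cluster. The cluster assignments here come from a naive k-means, purely for illustration.

```python
import torch

def clustered_attention(Q, K, V, n_clusters=25, n_iters=10):
    """Approximate softmax attention by attending with cluster centroids of the queries.

    Q: (N, d), K: (M, d), V: (M, dv). Returns (N, dv).
    """
    N, d = Q.shape
    # Naive k-means on the queries (illustrative; any clustering would do).
    centroids = Q[torch.randperm(N)[:n_clusters]].clone()
    for _ in range(n_iters):
        assign = torch.cdist(Q, centroids).argmin(dim=1)        # (N,)
        for c in range(n_clusters):
            members = Q[assign == c]
            if len(members) > 0:
                centroids[c] = members.mean(dim=0)
    # One attention computation per centroid instead of per query.
    attn = torch.softmax(centroids @ K.T / d ** 0.5, dim=-1)    # (C, M)
    out_per_cluster = attn @ V                                  # (C, dv)
    return out_per_cluster[assign]                              # broadcast to member queries

# Usage sketch
Q, K, V = torch.randn(512, 64), torch.randn(512, 64), torch.randn(512, 64)
out = clustered_attention(Q, K, V)
```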
The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
https://papers.nips.cc/paper_files/paper/2020/hash/f6c2a0c4b566bc99d596e58638e342b0-Abstract.html
Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Kishor Patel, Juan Pablo Vielma
https://papers.nips.cc/paper_files/paper/2020/hash/f6c2a0c4b566bc99d596e58638e342b0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f6c2a0c4b566bc99d596e58638e342b0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11543-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f6c2a0c4b566bc99d596e58638e342b0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f6c2a0c4b566bc99d596e58638e342b0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f6c2a0c4b566bc99d596e58638e342b0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f6c2a0c4b566bc99d596e58638e342b0-Supplemental.pdf
We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons. Unlike previous single-neuron relaxations which focus only on the univariate input space of the ReLU, our method considers the multivariate input space of the affine pre-activation function preceding the ReLU. Using results from submodularity and convex geometry, we derive an explicit description of the tightest possible convex relaxation when this multivariate input is over a box domain. We show that our convex relaxation is significantly stronger than the commonly used univariate-input relaxation which has been proposed as a natural convex relaxation barrier for verification. While our description of the relaxation may require an exponential number of inequalities, we show that they can be separated in linear time and hence can be efficiently incorporated into optimization algorithms on an as-needed basis. Based on this novel relaxation, we design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches. In both cases, we show that for a modest increase in computational effort, our strengthened relaxation enables us to verify a significantly larger number of instances compared to similar algorithms.
Strongly Incremental Constituency Parsing with Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/f7177163c833dff4b38fc8d2872f1ec6-Abstract.html
Kaiyu Yang, Jia Deng
https://papers.nips.cc/paper_files/paper/2020/hash/f7177163c833dff4b38fc8d2872f1ec6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f7177163c833dff4b38fc8d2872f1ec6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11544-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f7177163c833dff4b38fc8d2872f1ec6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f7177163c833dff4b38fc8d2872f1ec6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f7177163c833dff4b38fc8d2872f1ec6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f7177163c833dff4b38fc8d2872f1ec6-Supplemental.pdf
Parsing sentences into syntax trees can benefit downstream applications in NLP. Transition-based parsers build trees by executing actions in a state transition system. They are computationally efficient, and can leverage machine learning to predict actions based on partial trees. However, existing transition-based parsers are predominantly based on the shift-reduce transition system, which does not align with how humans are known to parse sentences. Psycholinguistic research suggests that human parsing is strongly incremental—humans grow a single parse tree by adding exactly one token at each step. In this paper, we propose a novel transition system called attach-juxtapose. It is strongly incremental; it represents a partial sentence using a single tree; each action adds exactly one token into the partial tree. Based on our transition system, we develop a strongly incremental parser. At each step, it encodes the partial tree using a graph neural network and predicts an action. We evaluate our parser on Penn Treebank (PTB) and Chinese Treebank (CTB). On PTB, it outperforms existing parsers trained with only constituency trees; and it performs on par with state-of-the-art parsers that use dependency trees as additional training data. On CTB, our parser establishes a new state of the art. Code is available at https://github.com/princeton-vl/attach-juxtapose-parser.
AOT: Appearance Optimal Transport Based Identity Swapping for Forgery Detection
https://papers.nips.cc/paper_files/paper/2020/hash/f718499c1c8cef6730f9fd03c8125cab-Abstract.html
Hao Zhu, Chaoyou Fu, Qianyi Wu, Wayne Wu, Chen Qian, Ran He
https://papers.nips.cc/paper_files/paper/2020/hash/f718499c1c8cef6730f9fd03c8125cab-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f718499c1c8cef6730f9fd03c8125cab-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11545-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f718499c1c8cef6730f9fd03c8125cab-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f718499c1c8cef6730f9fd03c8125cab-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f718499c1c8cef6730f9fd03c8125cab-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f718499c1c8cef6730f9fd03c8125cab-Supplemental.pdf
Recent studies have shown that the performance of forgery detection can be improved with diverse and challenging Deepfakes datasets. However, due to the lack of Deepfakes datasets with large variance in appearance, which can hardly be produced by recent identity swapping methods, the detection algorithm may fail in this situation. In this work, we provide a new identity swapping algorithm with large differences in appearance for face forgery detection. The appearance gaps mainly arise from the large discrepancies in illuminations and skin colors that widely exist in real-world scenarios. However, due to the difficulties of modeling the complex appearance mapping, it is challenging to transfer fine-grained appearances adaptively while preserving identity traits. This paper formulates appearance mapping as an optimal transport problem and proposes an Appearance Optimal Transport model (AOT) to address it in both the latent and pixel spaces. Specifically, a relighting generator is designed to simulate the optimal transport plan. It is solved via minimizing the Wasserstein distance of the learned features in the latent space, enabling better performance and less computation than conventional optimization. To further refine the solution of the optimal transport plan, we develop a segmentation game to minimize the Wasserstein distance in the pixel space. A discriminator is introduced to distinguish the fake parts from a mix of real and fake image patches. Extensive experiments reveal the superiority of our method when compared with state-of-the-art methods and the ability of our generated data to improve the performance of face forgery detection.
Uncertainty-Aware Learning for Zero-Shot Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2020/hash/f73b76ce8949fe29bf2a537cfa420e8f-Abstract.html
Ping Hu, Stan Sclaroff, Kate Saenko
https://papers.nips.cc/paper_files/paper/2020/hash/f73b76ce8949fe29bf2a537cfa420e8f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f73b76ce8949fe29bf2a537cfa420e8f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11546-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f73b76ce8949fe29bf2a537cfa420e8f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f73b76ce8949fe29bf2a537cfa420e8f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f73b76ce8949fe29bf2a537cfa420e8f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f73b76ce8949fe29bf2a537cfa420e8f-Supplemental.pdf
Zero-shot semantic segmentation (ZSS) aims to classify pixels of novel classes without training examples available. Recently, most ZSS methods focus on learning the visual-semantic correspondence to transfer knowledge from seen classes to unseen classes at the pixel level. Yet, few works study the adverse effects caused by the noisy and outlying training samples in the seen classes. In this paper, we identify this challenge and address it with a novel framework that learns to discriminate noisy samples based on Bayesian uncertainty estimation. Specifically, we model the network outputs with Gaussian and Laplacian distributions, with the variances accounting for the observation noise as well as the uncertainty of input samples. Learning objectives are then derived with the estimated variances playing as adaptive attenuation for individual samples in training. Consequently, our model learns more attentively from representative samples of seen classes while suffering less from noisy and outlying ones, thus providing better reliability and generalization toward unseen categories. We demonstrate the effectiveness of our framework through comprehensive experiments on multiple challenging benchmarks, and show that our method achieves significant accuracy improvement over previous approaches for large open-set segmentation.
Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians
https://papers.nips.cc/paper_files/paper/2020/hash/f754186469a933256d7d64095e963594-Abstract.html
Juhan Bae, Roger B. Grosse
https://papers.nips.cc/paper_files/paper/2020/hash/f754186469a933256d7d64095e963594-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f754186469a933256d7d64095e963594-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11547-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f754186469a933256d7d64095e963594-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f754186469a933256d7d64095e963594-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f754186469a933256d7d64095e963594-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f754186469a933256d7d64095e963594-Supplemental.pdf
Hyperparameter optimization of neural networks can be elegantly formulated as a bilevel optimization problem. While research on bilevel optimization of neural networks has been dominated by implicit differentiation and unrolling, hypernetworks such as Self-Tuning Networks (STNs) have recently gained traction due to their ability to amortize the optimization of the inner objective. In this paper, we diagnose several subtle pathologies in the training of STNs. Based on these observations, we propose the Delta-STN, an improved hypernetwork architecture which stabilizes training and optimizes hyperparameters much more efficiently than STNs. The key idea is to focus on accurately approximating the best-response Jacobian rather than the full best-response function; we achieve this by reparameterizing the hypernetwork and linearizing the network around the current parameters. We demonstrate empirically that our Delta-STN can tune regularization hyperparameters (e.g. weight decay, dropout, number of cutout holes) with higher accuracy, faster convergence, and improved stability compared to existing approaches.
First-Order Methods for Large-Scale Market Equilibrium Computation
https://papers.nips.cc/paper_files/paper/2020/hash/f75526659f31040afeb61cb7133e4e6d-Abstract.html
Yuan Gao, Christian Kroer
https://papers.nips.cc/paper_files/paper/2020/hash/f75526659f31040afeb61cb7133e4e6d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f75526659f31040afeb61cb7133e4e6d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11548-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f75526659f31040afeb61cb7133e4e6d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f75526659f31040afeb61cb7133e4e6d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f75526659f31040afeb61cb7133e4e6d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f75526659f31040afeb61cb7133e4e6d-Supplemental.zip
Market equilibrium is a solution concept with many applications such as digital ad markets, fair division, and resource sharing. For many classes of utility functions, equilibria can be captured by convex programs. We develop simple first-order methods suitable for solving these programs for large-scale markets. We focus on three practically-relevant utility classes: linear, quasilinear, and Leontief utilities. Using structural properties of market equilibria under each utility class, we show that the corresponding convex programs can be reformulated as optimization of a structured smooth convex function over a polyhedral set, for which projected gradient achieves linear convergence. To do so, we utilize recent linear convergence results under weakened strong-convexity conditions, and further refine the relevant constants in existing convergence results. Then, we show that proximal gradient (a generalization of projected gradient) with a practical linesearch scheme achieves linear convergence under the Proximal-PL condition, a recently developed error bound condition for convex composite problems. For quasilinear utilities, we show that Mirror Descent applied to a new convex program achieves sublinear last-iterate convergence and yields a form of Proportional Response dynamics, an elegant, interpretable algorithm for computing market equilibria originally developed for linear utilities. Numerical experiments show that Proportional Response is highly efficient for computing approximate market equilibria, while projected gradient with linesearch can be much faster when higher-accuracy solutions are needed.
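To make the last point concrete, a small sketch of Proportional Response dynamics for a linear Fisher market (each buyer splits its budget across goods in proportion to the utility each good contributed in the previous round); the valuation matrix, budgets, and iteration count are illustrative.

```python
import numpy as np

def proportional_response(U, budgets, n_iters=200):
    """Proportional Response for a linear Fisher market with unit supply of each good.

    U: (n_buyers, n_goods) nonnegative valuations; budgets: (n_buyers,).
    Returns (prices, allocations) approximating the market equilibrium.
    """
    n, m = U.shape
    bids = np.outer(budgets, np.ones(m) / m)          # start by spreading budgets evenly
    for _ in range(n_iters):
        prices = bids.sum(axis=0)                     # p_j = total spend on good j
        alloc = bids / np.maximum(prices, 1e-12)      # x_ij = b_ij / p_j
        gains = U * alloc                             # utility buyer i derives from good j
        bids = budgets[:, None] * gains / np.maximum(gains.sum(axis=1, keepdims=True), 1e-12)
    return prices, alloc

U = np.array([[1.0, 2.0, 0.5], [2.0, 1.0, 1.0]])
prices, alloc = proportional_response(U, budgets=np.array([1.0, 1.0]))
```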
Minimax Optimal Nonparametric Estimation of Heterogeneous Treatment Effects
https://papers.nips.cc/paper_files/paper/2020/hash/f75b757d3459c3e93e98ddab7b903938-Abstract.html
Zijun Gao, Yanjun Han
https://papers.nips.cc/paper_files/paper/2020/hash/f75b757d3459c3e93e98ddab7b903938-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f75b757d3459c3e93e98ddab7b903938-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11549-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f75b757d3459c3e93e98ddab7b903938-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f75b757d3459c3e93e98ddab7b903938-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f75b757d3459c3e93e98ddab7b903938-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f75b757d3459c3e93e98ddab7b903938-Supplemental.pdf
A central goal of causal inference is to detect and estimate the treatment effects of a given treatment or intervention on an outcome variable of interest, among which the heterogeneous treatment effect (HTE) has gained growing popularity in recent practical applications such as personalized medicine. In this paper, we model the HTE as a smooth nonparametric difference between two less smooth baseline functions, and determine the tight statistical limits of nonparametric HTE estimation as a function of the covariate geometry. In particular, a two-stage nearest-neighbor-based estimator that throws away observations with poor matching quality is near minimax optimal. We also establish the tight dependence on the density ratio without the usual assumption that the covariate densities are bounded away from zero, where a key step is to employ a novel maximal inequality which could be of independent interest.
Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
https://papers.nips.cc/paper_files/paper/2020/hash/f76a89f0cb91bc419542ce9fa43902dc-Abstract.html
Ye Yuan, Kris Kitani
https://papers.nips.cc/paper_files/paper/2020/hash/f76a89f0cb91bc419542ce9fa43902dc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f76a89f0cb91bc419542ce9fa43902dc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11550-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f76a89f0cb91bc419542ce9fa43902dc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f76a89f0cb91bc419542ce9fa43902dc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f76a89f0cb91bc419542ce9fa43902dc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f76a89f0cb91bc419542ce9fa43902dc-Supplemental.zip
Reinforcement learning has shown great promise for synthesizing realistic human behaviors by learning humanoid control policies from motion capture data. However, it is still very challenging to reproduce sophisticated human skills like ballet dance, or to stably imitate long-term human behaviors with complex transitions. The main difficulty lies in the dynamics mismatch between the humanoid model and real humans. That is, motions of real humans may not be physically possible for the humanoid model. To overcome the dynamics mismatch, we propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space. During training, the RFC-based policy learns to apply residual forces to the humanoid to compensate for the dynamics mismatch and better imitate the reference motion. Experiments on a wide range of dynamic motions demonstrate that our approach outperforms state-of-the-art methods in terms of convergence speed and the quality of learned motions. Notably, we showcase a physics-based virtual character empowered by RFC that can perform highly agile ballet dance moves such as pirouette, arabesque and jeté. Furthermore, we propose a dual-policy control framework, where a kinematic policy and an RFC-based policy work in tandem to synthesize multi-modal infinite-horizon human motions without any task guidance or user input. Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions. Code and videos are available at https://www.ye-yuan.com/rfc.
A General Method for Robust Learning from Batches
https://papers.nips.cc/paper_files/paper/2020/hash/f7a82ce7e16d9687e7cd9a9feb85d187-Abstract.html
Ayush Jain, Alon Orlitsky
https://papers.nips.cc/paper_files/paper/2020/hash/f7a82ce7e16d9687e7cd9a9feb85d187-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f7a82ce7e16d9687e7cd9a9feb85d187-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11551-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f7a82ce7e16d9687e7cd9a9feb85d187-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f7a82ce7e16d9687e7cd9a9feb85d187-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f7a82ce7e16d9687e7cd9a9feb85d187-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f7a82ce7e16d9687e7cd9a9feb85d187-Supplemental.pdf
Building on this framework, we derive the first robust agnostic: (1) polynomial-time distribution estimation algorithms for structured distributions, including piecewise-polynomial, monotone, log-concave, and Gaussian-mixture distributions, and also significantly improve their sample complexity; (2) classification algorithms, for which we also establish near-optimal sample complexity; (3) computationally efficient algorithms for the fundamental problem of interval-based classification that underlies nearly all natural 1-dimensional classification problems.
Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning
https://papers.nips.cc/paper_files/paper/2020/hash/f7ac67a9aa8d255282de7d11391e1b69-Abstract.html
Zhongzheng Ren, Raymond Yeh, Alexander Schwing
https://papers.nips.cc/paper_files/paper/2020/hash/f7ac67a9aa8d255282de7d11391e1b69-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11552-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-Supplemental.pdf
Existing semi-supervised learning (SSL) algorithms use a single weight to balance the loss of labeled and unlabeled examples, i.e., all unlabeled examples are equally weighted. But not all unlabeled data are equal. In this paper we study how to use a different weight for “every” unlabeled example. Manual tuning of all those weights -- as done in prior work -- is no longer possible. Instead, we adjust those weights via an algorithm based on the influence function, a measure of a model's dependency on one training example. To make the approach efficient, we propose a fast and effective approximation of the influence function. We demonstrate that this technique outperforms state-of-the-art methods on semi-supervised image and language classification tasks.
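A rough sense of the per-example weighting idea can be conveyed with a small sketch: for a linear model, an unlabeled example is up-weighted when the gradient of its pseudo-labeled loss aligns with the validation-loss gradient, a crude influence-function surrogate with the Hessian replaced by the identity. This is only an illustrative approximation under those assumptions, not the paper's estimator or its efficient deep-network implementation.

import numpy as np

rng = np.random.default_rng(0)
d, n_unlab = 5, 20
w = rng.normal(size=d)                                  # parameters of a toy logistic model
X_val, y_val = rng.normal(size=(50, d)), rng.integers(0, 2, 50)
X_u = rng.normal(size=(n_unlab, d))                     # unlabeled examples

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    return (sigmoid(x @ w) - y) * x                     # per-example log-loss gradient

# Validation-loss gradient.
g_val = np.mean([grad_logloss(w, x, y) for x, y in zip(X_val, y_val)], axis=0)

# Pseudo-labels and per-example weights (Hessian approximated by the identity):
# an example gets weight proportional to how much descending its loss would
# also decrease the validation loss.
pseudo = (sigmoid(X_u @ w) > 0.5).astype(float)
weights = np.array([max(0.0, g_val @ grad_logloss(w, x, y))
                    for x, y in zip(X_u, pseudo)])
print("per-example weights:", np.round(weights, 3))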
Hard Negative Mixing for Contrastive Learning
https://papers.nips.cc/paper_files/paper/2020/hash/f7cade80b7cc92b991cf4d2806d6bd78-Abstract.html
Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus
https://papers.nips.cc/paper_files/paper/2020/hash/f7cade80b7cc92b991cf4d2806d6bd78-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f7cade80b7cc92b991cf4d2806d6bd78-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11553-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f7cade80b7cc92b991cf4d2806d6bd78-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f7cade80b7cc92b991cf4d2806d6bd78-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f7cade80b7cc92b991cf4d2806d6bd78-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f7cade80b7cc92b991cf4d2806d6bd78-Supplemental.pdf
Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e. the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes, or keep very large memory banks; increasing memory requirements, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level, that can be computed on-the-fly with a minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method.
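The feature-level mixing idea can be sketched in a few lines: take the negatives most similar to the query and synthesize extra negatives as convex combinations of pairs of them. The embedding dimension, number of hard negatives, and number of synthesized points below are arbitrary illustrative choices, and the random features stand in for a real encoder and memory bank.

import numpy as np

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

rng = np.random.default_rng(0)
q = l2norm(rng.normal(size=128))             # query embedding (stand-in for an encoder output)
bank = l2norm(rng.normal(size=(4096, 128)))  # memory bank of negatives

# 1) pick the hardest negatives: highest similarity to the query
sims = bank @ q
hard = bank[np.argsort(-sims)[:64]]

# 2) synthesize new negatives by convex mixing of pairs of hard negatives
i, j = rng.integers(0, 64, size=32), rng.integers(0, 64, size=32)
alpha = rng.uniform(0, 1, size=(32, 1))
synthetic = l2norm(alpha * hard[i] + (1 - alpha) * hard[j])

negatives = np.vstack([bank, synthetic])     # pool used in the contrastive loss
print(negatives.shape)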
MOReL: Model-Based Offline Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/f7efa4f864ae9b88d43527f4b14f750f-Abstract.html
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims
https://papers.nips.cc/paper_files/paper/2020/hash/f7efa4f864ae9b88d43527f4b14f750f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f7efa4f864ae9b88d43527f4b14f750f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11554-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f7efa4f864ae9b88d43527f4b14f750f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f7efa4f864ae9b88d43527f4b14f750f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f7efa4f864ae9b88d43527f4b14f750f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f7efa4f864ae9b88d43527f4b14f750f-Supplemental.pdf
In offline reinforcement learning (RL), the goal is to learn a highly rewarding policy based solely on a dataset of historical interactions with the environment. This serves as an extreme test for an agent's ability to effectively use historical data, which is known to be critical for efficient RL. Prior work in offline RL has been confined almost exclusively to model-free RL approaches. In this work, we present MOReL, an algorithmic framework for model-based offline RL. This framework consists of two steps: (a) learning a pessimistic MDP using the offline dataset; (b) learning a near-optimal policy in this pessimistic MDP. The design of the pessimistic MDP is such that for any policy, the performance in the real environment is approximately lower-bounded by the performance in the pessimistic MDP. This enables the pessimistic MDP to serve as a good surrogate for purposes of policy evaluation and learning. Theoretically, we show that MOReL is minimax optimal (up to log factors) for offline RL. Empirically, MOReL matches or exceeds state-of-the-art results on widely used offline RL benchmarks. Overall, the modular design of MOReL enables translating advances in its components (e.g., in model learning, planning, etc.) to improvements in offline RL.
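The pessimistic-MDP construction can be illustrated with a short sketch: an ensemble of learned dynamics models defines the transitions, and wherever the ensemble members disagree beyond a threshold the rollout is diverted to an absorbing HALT state with a large negative reward. The class name, threshold, penalty, and toy linear models below are illustrative assumptions, not the paper's implementation.

import numpy as np

class PessimisticMDP:
    def __init__(self, models, reward_fn, disagreement_threshold=0.1, halt_penalty=-100.0):
        self.models = models                      # list of callables (s, a) -> s'
        self.reward_fn = reward_fn
        self.threshold = disagreement_threshold
        self.halt_penalty = halt_penalty

    def step(self, s, a):
        preds = np.stack([m(s, a) for m in self.models])
        disagreement = np.max(np.linalg.norm(preds - preds.mean(0), axis=-1))
        if disagreement > self.threshold:         # unknown region -> absorbing HALT state
            return None, self.halt_penalty, True
        return preds.mean(0), self.reward_fn(s, a), False

# toy usage with two slightly different learned linear models
models = [lambda s, a: 0.90 * s + a, lambda s, a: 0.92 * s + a]
mdp = PessimisticMDP(models, reward_fn=lambda s, a: -float(np.sum(s ** 2)))
print(mdp.step(np.ones(3), np.zeros(3)))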
Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings
https://papers.nips.cc/paper_files/paper/2020/hash/f81dee42585b3814de199b2e88757f5c-Abstract.html
Christopher Morris, Gaurav Rattan, Petra Mutzel
https://papers.nips.cc/paper_files/paper/2020/hash/f81dee42585b3814de199b2e88757f5c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f81dee42585b3814de199b2e88757f5c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11555-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f81dee42585b3814de199b2e88757f5c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f81dee42585b3814de199b2e88757f5c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f81dee42585b3814de199b2e88757f5c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f81dee42585b3814de199b2e88757f5c-Supplemental.pdf
Graph kernels based on the $1$-dimensional Weisfeiler-Leman algorithm and corresponding neural architectures recently emerged as powerful tools for (supervised) learning with graphs. However, due to the purely local nature of the algorithms, they might miss essential patterns in the given data and can only handle binary relations. The $k$-dimensional Weisfeiler-Leman algorithm addresses this by considering $k$-tuples, defined over the set of vertices, and defines a suitable notion of adjacency between these vertex tuples. Hence, it accounts for the higher-order interactions between vertices. However, it does not scale and may suffer from overfitting when used in a machine learning setting. Hence, it remains an important open problem to design WL-based graph learning methods that are simultaneously expressive, scalable, and non-overfitting. Here, we propose local variants and corresponding neural architectures, which consider a subset of the original neighborhood, making them more scalable, and less prone to overfitting. The expressive power of (one of) our algorithms is strictly higher than the original algorithm, in terms of ability to distinguish non-isomorphic graphs. Our experimental study confirms that the local algorithms, both kernel and neural architectures, lead to vastly reduced computation times, and prevent overfitting. The kernel version establishes a new state-of-the-art for graph classification on a wide range of benchmark datasets, while the neural version shows promising performance on large-scale molecular regression tasks.
Adversarial Crowdsourcing Through Robust Rank-One Matrix Completion
https://papers.nips.cc/paper_files/paper/2020/hash/f86890095c957e9b949d11d15f0d0cd5-Abstract.html
Qianqian Ma, Alex Olshevsky
https://papers.nips.cc/paper_files/paper/2020/hash/f86890095c957e9b949d11d15f0d0cd5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f86890095c957e9b949d11d15f0d0cd5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11556-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f86890095c957e9b949d11d15f0d0cd5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f86890095c957e9b949d11d15f0d0cd5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f86890095c957e9b949d11d15f0d0cd5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f86890095c957e9b949d11d15f0d0cd5-Supplemental.pdf
These results are then applied to the problem of classification from crowdsourced data under the assumption that, while the majority of the workers are governed by the standard single-coin Dawid-Skene model (i.e., they output the correct answer with a certain probability), some of the workers can deviate arbitrarily from this model. In particular, the ``adversarial'' workers could even make decisions designed to make the algorithm output an incorrect answer. Extensive experimental results show that our algorithm for this problem, based on rank-one matrix completion with perturbations, outperforms all other state-of-the-art methods in such an adversarial scenario.
Learning Semantic-aware Normalization for Generative Adversarial Networks
https://papers.nips.cc/paper_files/paper/2020/hash/f885a14eaf260d7d9f93c750e1174228-Abstract.html
Heliang Zheng, Jianlong Fu, Yanhong Zeng, Jiebo Luo, Zheng-Jun Zha
https://papers.nips.cc/paper_files/paper/2020/hash/f885a14eaf260d7d9f93c750e1174228-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f885a14eaf260d7d9f93c750e1174228-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11557-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f885a14eaf260d7d9f93c750e1174228-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f885a14eaf260d7d9f93c750e1174228-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f885a14eaf260d7d9f93c750e1174228-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f885a14eaf260d7d9f93c750e1174228-Supplemental.pdf
The recent advances in image generation have been achieved by style-based image generators. Such approaches learn to disentangle latent factors at different image scales and encode latent factors as “style” to control image synthesis. However, existing approaches cannot further disentangle fine-grained semantics from each other, which are often conveyed by feature channels. In this paper, we propose a novel image synthesis approach by learning Semantic-aware relative importance for feature channels in Generative Adversarial Networks (SariGAN). Such a model disentangles latent factors according to the semantics of feature channels by channel-/group-wise fusion of latent codes and feature channels. In particular, we learn to cluster feature channels by semantics and propose an adaptive group-wise normalization (AdaGN) to independently control the styles of different channel groups. For example, we can adjust the statistics of channel groups for a human face to control the opening and closing of the mouth, while keeping other facial features unchanged. We propose to use adversarial training, a channel grouping loss, and a mutual information loss for joint optimization, which not only enables high-fidelity image synthesis but also leads to superior interpretability. Extensive experiments show that our approach outperforms the state-of-the-art style-based approaches in both unconditional image generation and conditional image inpainting tasks.
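A minimal sketch of an adaptive group-wise normalization layer is given below, assuming PyTorch: channels are normalized in groups and each group receives a scale and shift predicted from a latent style code. The fixed contiguous grouping and the single linear style head are simplifying assumptions of this sketch; in SariGAN the grouping is learned from channel semantics.

import torch

class AdaGroupNorm(torch.nn.Module):
    def __init__(self, num_channels, num_groups, latent_dim):
        super().__init__()
        self.norm = torch.nn.GroupNorm(num_groups, num_channels, affine=False)
        self.num_groups = num_groups
        # per-group scale and shift predicted from the latent "style" code
        self.to_scale_shift = torch.nn.Linear(latent_dim, 2 * num_groups)

    def forward(self, x, z):
        b, c, _, _ = x.shape
        x = self.norm(x)
        scale, shift = self.to_scale_shift(z).chunk(2, dim=1)              # (b, groups) each
        scale = scale.repeat_interleave(c // self.num_groups, dim=1).reshape(b, c, 1, 1)
        shift = shift.repeat_interleave(c // self.num_groups, dim=1).reshape(b, c, 1, 1)
        return (1 + scale) * x + shift

layer = AdaGroupNorm(num_channels=16, num_groups=4, latent_dim=8)
out = layer(torch.randn(2, 16, 32, 32), torch.randn(2, 8))
print(out.shape)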
Differentiable Causal Discovery from Interventional Data
https://papers.nips.cc/paper_files/paper/2020/hash/f8b7aa3a0d349d9562b424160ad18612-Abstract.html
Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, Alexandre Drouin
https://papers.nips.cc/paper_files/paper/2020/hash/f8b7aa3a0d349d9562b424160ad18612-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f8b7aa3a0d349d9562b424160ad18612-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11558-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f8b7aa3a0d349d9562b424160ad18612-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f8b7aa3a0d349d9562b424160ad18612-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f8b7aa3a0d349d9562b424160ad18612-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f8b7aa3a0d349d9562b424160ad18612-Supplemental.pdf
Learning a causal directed acyclic graph from data is a challenging task that involves solving a combinatorial problem for which the solution is not always identifiable. A new line of work reformulates this problem as a continuous constrained optimization one, which is solved via the augmented Lagrangian method. However, most methods based on this idea do not make use of interventional data, which can significantly alleviate identifiability issues. This work constitutes a new step in this direction by proposing a theoretically-grounded method based on neural networks that can leverage interventional data. We illustrate the flexibility of the continuous-constrained framework by taking advantage of expressive neural architectures such as normalizing flows. We show that our approach compares favorably to the state of the art in a variety of settings, including perfect and imperfect interventions for which the targeted nodes may even be unknown.
One-sample Guided Object Representation Disassembling
https://papers.nips.cc/paper_files/paper/2020/hash/f8e59f4b2fe7c5705bf878bbd494ccdf-Abstract.html
Zunlei Feng, Yongming He, Xinchao Wang, Xin Gao, Jie Lei, Cheng Jin, Mingli Song
https://papers.nips.cc/paper_files/paper/2020/hash/f8e59f4b2fe7c5705bf878bbd494ccdf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f8e59f4b2fe7c5705bf878bbd494ccdf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11559-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f8e59f4b2fe7c5705bf878bbd494ccdf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f8e59f4b2fe7c5705bf878bbd494ccdf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f8e59f4b2fe7c5705bf878bbd494ccdf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f8e59f4b2fe7c5705bf878bbd494ccdf-Supplemental.zip
The ability to disassemble the features of objects and background is crucial for many machine learning tasks, including image classification, image editing, visual concept learning, and so on. However, existing (semi-)supervised methods all need a large number of annotated samples, while unsupervised methods cannot handle real-world images with complicated backgrounds. In this paper, we introduce the One-sample Guided Object Representation Disassembling (One-GORD) method, which requires only one annotated sample for each object category to learn disassembled object representations from unannotated images. For the single annotated sample, we first adopt data augmentation strategies to generate synthetic samples, which guide the disassembling of object features and background features. For the unannotated images, two self-supervised mechanisms, dual-swapping and fuzzy classification, are introduced to disassemble object features from the background with the guidance of the annotated sample. Moreover, we devise two metrics to evaluate the disassembling performance from the perspective of the representation and the image, respectively. Experiments demonstrate that One-GORD achieves competitive disassembling performance and can handle natural scenes with complicated backgrounds.
Extrapolation Towards Imaginary 0-Nearest Neighbour and Its Improved Convergence Rate
https://papers.nips.cc/paper_files/paper/2020/hash/f9028faec74be6ec9b852b0a542e2f39-Abstract.html
Akifumi Okuno, Hidetoshi Shimodaira
https://papers.nips.cc/paper_files/paper/2020/hash/f9028faec74be6ec9b852b0a542e2f39-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f9028faec74be6ec9b852b0a542e2f39-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11560-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f9028faec74be6ec9b852b0a542e2f39-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f9028faec74be6ec9b852b0a542e2f39-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f9028faec74be6ec9b852b0a542e2f39-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f9028faec74be6ec9b852b0a542e2f39-Supplemental.pdf
$k$-nearest neighbour ($k$-NN) is one of the simplest and most widely used methods for supervised classification; it predicts a query's label by taking a weighted ratio of the observed labels of the $k$ objects nearest to the query. The weights and the parameter $k \in \mathbb{N}$ regulate its bias-variance trade-off, and this trade-off implicitly affects the convergence rate of the excess risk for the $k$-NN classifier; several existing studies considered selecting the optimal $k$ and weights to obtain a faster convergence rate. Whereas $k$-NN with non-negative weights has been developed widely, it has also been proved that negative weights are essential for eradicating the bias terms and attaining the optimal convergence rate. In this paper, we propose a novel multiscale $k$-NN (MS-$k$-NN) that extrapolates unweighted $k$-NN estimators from several $k \ge 1$ values to $k=0$, thus giving an imaginary 0-NN estimator. Our method implicitly computes optimal real-valued weights that are adaptive to the query and its neighbour points. We theoretically prove that MS-$k$-NN attains the improved rate, which coincides with the existing optimal rate under some conditions.
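A small numerical sketch of the extrapolation idea follows: unweighted k-NN estimates are computed at several values of k and linearly extrapolated, as a function of the squared distance to the k-th neighbour, down to distance zero, which plays the role of the imaginary 0-NN estimate. The choice of k values, the regression feature, and the toy data below are illustrative assumptions rather than the paper's prescribed construction.

import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 2
X = rng.uniform(-1, 1, size=(n, d))
eta = 1 / (1 + np.exp(-3 * X[:, 0]))                 # true P(Y=1 | x)
y = (rng.uniform(size=n) < eta).astype(float)

query = np.zeros(d)
dist = np.linalg.norm(X - query, axis=1)
order = np.argsort(dist)

ks = np.array([16, 32, 64, 128])
est = np.array([y[order[:k]].mean() for k in ks])    # unweighted k-NN estimates
r2 = np.array([dist[order[k - 1]] ** 2 for k in ks]) # squared radius of the k-th neighbour

# Linear extrapolation of the estimates in r^2 down to r^2 = 0 ("0-NN").
A = np.vstack([np.ones_like(r2), r2]).T
coef, *_ = np.linalg.lstsq(A, est, rcond=None)
print(f"extrapolated estimate at k=0: {coef[0]:.3f}  (true eta(query) = 0.5)")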
Robust Persistence Diagrams using Reproducing Kernels
https://papers.nips.cc/paper_files/paper/2020/hash/f99499791ad90c9c0ba9852622d0d15f-Abstract.html
Siddharth Vishwanath, Kenji Fukumizu, Satoshi Kuriki, Bharath K. Sriperumbudur
https://papers.nips.cc/paper_files/paper/2020/hash/f99499791ad90c9c0ba9852622d0d15f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f99499791ad90c9c0ba9852622d0d15f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11561-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f99499791ad90c9c0ba9852622d0d15f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f99499791ad90c9c0ba9852622d0d15f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f99499791ad90c9c0ba9852622d0d15f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f99499791ad90c9c0ba9852622d0d15f-Supplemental.pdf
Persistent homology has become an important tool for extracting geometric and topological features from data, whose multi-scale features are summarized in a persistence diagram. From a statistical perspective, however, persistence diagrams are very sensitive to perturbations in the input space. In this work, we develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using reproducing kernels. Using an analogue of the influence function on the space of persistence diagrams, we establish the proposed framework to be less sensitive to outliers. The robust persistence diagrams are shown to be consistent estimators in the bottleneck distance, with the convergence rate controlled by the smoothness of the kernel — this, in turn, allows us to construct uniform confidence bands in the space of persistence diagrams. Finally, we demonstrate the superiority of the proposed approach on benchmark datasets.
Contextual Games: Multi-Agent Learning with Side Information
https://papers.nips.cc/paper_files/paper/2020/hash/f9afa97535cf7c8789a1c50a2cd83787-Abstract.html
Pier Giuseppe Sessa, Ilija Bogunovic, Andreas Krause, Maryam Kamgarpour
https://papers.nips.cc/paper_files/paper/2020/hash/f9afa97535cf7c8789a1c50a2cd83787-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f9afa97535cf7c8789a1c50a2cd83787-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11562-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f9afa97535cf7c8789a1c50a2cd83787-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f9afa97535cf7c8789a1c50a2cd83787-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f9afa97535cf7c8789a1c50a2cd83787-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f9afa97535cf7c8789a1c50a2cd83787-Supplemental.pdf
We formulate the novel class of contextual games, a type of repeated games driven by contextual information at each round. By means of kernel-based regularity assumptions, we model the correlation between different contexts and game outcomes and propose a novel online (meta) algorithm that exploits such correlations to minimize the contextual regret of individual players. We define game-theoretic notions of contextual Coarse Correlated Equilibria (c-CCE) and optimal contextual welfare for this new class of games and show that c-CCEs and optimal welfare can be approached whenever players' contextual regrets vanish. Finally, we empirically validate our results in a traffic routing experiment, where our algorithm leads to better performance and higher welfare compared to baselines that do not exploit the available contextual information or the correlations present in the game.
Goal-directed Generation of Discrete Structures with Conditional Generative Models
https://papers.nips.cc/paper_files/paper/2020/hash/f9b9f0fef2274a6b7009b5d52f44a3b6-Abstract.html
Amina Mollaysa, Brooks Paige, Alexandros Kalousis
https://papers.nips.cc/paper_files/paper/2020/hash/f9b9f0fef2274a6b7009b5d52f44a3b6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f9b9f0fef2274a6b7009b5d52f44a3b6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11563-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f9b9f0fef2274a6b7009b5d52f44a3b6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f9b9f0fef2274a6b7009b5d52f44a3b6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f9b9f0fef2274a6b7009b5d52f44a3b6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f9b9f0fef2274a6b7009b5d52f44a3b6-Supplemental.zip
Despite recent advances, goal-directed generation of structured discrete data remains challenging. For problems such as program synthesis (generating source code) and materials design (generating molecules), finding examples which satisfy desired constraints or exhibit desired properties is difficult. In practice, expensive heuristic search or reinforcement learning algorithms are often employed. In this paper, we investigate the use of conditional generative models which directly attack this inverse problem, by modeling the distribution of discrete structures given properties of interest. Unfortunately, the maximum likelihood training of such models often fails with the samples from the generative model inadequately respecting the input properties. To address this, we introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward. We avoid high-variance score-function estimators that would otherwise be required by sampling from an approximation to the normalized rewards, allowing simple Monte Carlo estimation of model gradients. We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value. In both cases, we find improvements over maximum likelihood estimation and other baselines.
Beyond Lazy Training for Over-parameterized Tensor Decomposition
https://papers.nips.cc/paper_files/paper/2020/hash/f9d3a954de63277730a1c66d8b38dee3-Abstract.html
Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge
https://papers.nips.cc/paper_files/paper/2020/hash/f9d3a954de63277730a1c66d8b38dee3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f9d3a954de63277730a1c66d8b38dee3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11564-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f9d3a954de63277730a1c66d8b38dee3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f9d3a954de63277730a1c66d8b38dee3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f9d3a954de63277730a1c66d8b38dee3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f9d3a954de63277730a1c66d8b38dee3-Supplemental.pdf
Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad locally optimal solutions. In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(R^d)^{\otimes l}$ of rank $r$ (where $r\ll d$), can variants of gradient descent find a rank-$m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate tensor decomposition when $m = O^*(r^{2.5l}\log d)$. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
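The following toy sketch sets up the over-parameterized problem for a third-order symmetric tensor and runs plain gradient descent from a small random initialization. The normalization of the target, the step size, and the number of iterations are arbitrary illustrative choices, not the schedule or the guarantees analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)
d, r, m = 10, 2, 8
U_true = rng.normal(size=(r, d))
T = sum(np.einsum('i,j,k->ijk', u, u, u) for u in U_true)   # rank-r symmetric target
T /= np.linalg.norm(T)                                      # normalize so a constant step size behaves reasonably

U = 0.1 * rng.normal(size=(m, d))                           # over-parameterized: m components for a rank-r target
lr = 0.05
for _ in range(3000):
    T_hat = sum(np.einsum('i,j,k->ijk', u, u, u) for u in U)
    R = T_hat - T
    # gradient of 0.5 * ||T_hat - T||_F^2 with respect to each symmetric component u
    grad = np.stack([3 * np.einsum('ijk,i,j->k', R, u, u) for u in U])
    U -= lr * grad

T_hat = sum(np.einsum('i,j,k->ijk', u, u, u) for u in U)
print("relative error:", np.linalg.norm(T_hat - T) / np.linalg.norm(T))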
Denoised Smoothing: A Provable Defense for Pretrained Classifiers
https://papers.nips.cc/paper_files/paper/2020/hash/f9fd2624beefbc7808e4e405d73f57ab-Abstract.html
Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter
https://papers.nips.cc/paper_files/paper/2020/hash/f9fd2624beefbc7808e4e405d73f57ab-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/f9fd2624beefbc7808e4e405d73f57ab-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11565-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/f9fd2624beefbc7808e4e405d73f57ab-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/f9fd2624beefbc7808e4e405d73f57ab-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/f9fd2624beefbc7808e4e405d73f57ab-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/f9fd2624beefbc7808e4e405d73f57ab-Supplemental.pdf
We present a method for provably defending any pretrained image classifier against $\ell_p$ adversarial attacks. This method, for instance, allows public vision API providers and users to seamlessly convert pretrained non-robust classification services into provably robust ones. By prepending a custom-trained denoiser to any off-the-shelf image classifier and using randomized smoothing, we effectively create a new classifier that is guaranteed to be $\ell_p$-robust to adversarial examples, without modifying the pretrained classifier. Our approach applies to both the white-box and the black-box settings of the pretrained classifier. We refer to this defense as denoised smoothing, and we demonstrate its effectiveness through extensive experimentation on ImageNet and CIFAR-10. Finally, we use our approach to provably defend the Azure, Google, AWS, and ClarifAI image classification APIs. Our code replicating all the experiments in the paper can be found at: https://github.com/microsoft/denoised-smoothing.
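The prediction rule is simple enough to sketch directly: add Gaussian noise to the input many times, pass each noisy copy through the denoiser and then the pretrained classifier, and return the majority vote. The denoiser and classifier below are trivial stand-ins (the paper trains a real denoiser and additionally computes a certified radius, which is omitted in this sketch).

import numpy as np

rng = np.random.default_rng(0)

def classifier(x):                 # stand-in for a pretrained, non-robust classifier
    return int(x.mean() > 0.5)

def denoiser(x):                   # stand-in for the custom-trained denoiser
    return np.clip(x, 0.0, 1.0)

def smoothed_predict(x, sigma=0.25, n_samples=1000):
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.normal(size=x.shape)
        votes[classifier(denoiser(noisy))] += 1
    return int(np.argmax(votes)), votes

x = np.full(32 * 32, 0.6)          # a toy "image"
pred, votes = smoothed_predict(x)
print(pred, votes)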
Minibatch Stochastic Approximate Proximal Point Methods
https://papers.nips.cc/paper_files/paper/2020/hash/fa2246fa0fdf0d3e270c86767b77ba1b-Abstract.html
Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi
https://papers.nips.cc/paper_files/paper/2020/hash/fa2246fa0fdf0d3e270c86767b77ba1b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fa2246fa0fdf0d3e270c86767b77ba1b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11566-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fa2246fa0fdf0d3e270c86767b77ba1b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fa2246fa0fdf0d3e270c86767b77ba1b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fa2246fa0fdf0d3e270c86767b77ba1b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fa2246fa0fdf0d3e270c86767b77ba1b-Supplemental.pdf
We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch setting. To do this, we propose two minibatched algorithms for which we prove a non-asymptotic upper bound on the rate of convergence, revealing a linear speedup in minibatch size. In contrast to standard stochastic gradient methods, these methods may have linear speedup in the minibatch setting even for non-smooth functions. Our algorithms maintain the desirable traits characteristic of the aProx family, such as robustness to initial step size choice. Additionally, we show improved convergence rates for "interpolation" problems, which (for example) gives a new parallelization strategy for alternating projections. We corroborate our theoretical results with extensive empirical testing, which demonstrates the gains provided by accurate modeling and minibatching.
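One member of the aProx family, the truncated model, admits a particularly short minibatch sketch: for a nonnegative loss the update is x <- x - min(alpha, f_S(x) / ||g_S(x)||^2) g_S(x), which adapts the effective step size and stays well-behaved even with a deliberately huge alpha. The least-squares interpolation problem and constant alpha below are illustrative assumptions, not the paper's algorithms or step-size schedules.

import numpy as np

rng = np.random.default_rng(0)
n, d, batch = 500, 10, 32
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                                     # interpolation problem: the optimal loss is zero

x = np.zeros(d)
alpha = 100.0                                      # deliberately huge step size
for _ in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    r = A[idx] @ x - b[idx]
    f = 0.5 * np.mean(r ** 2)                      # minibatch loss
    g = A[idx].T @ r / batch                       # minibatch gradient
    step = min(alpha, f / (np.dot(g, g) + 1e-12))  # truncated-model step size
    x -= step * g
print("distance to solution:", np.linalg.norm(x - x_star))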
Attribute Prototype Network for Zero-Shot Learning
https://papers.nips.cc/paper_files/paper/2020/hash/fa2431bf9d65058fe34e9713e32d60e6-Abstract.html
Wenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, Zeynep Akata
https://papers.nips.cc/paper_files/paper/2020/hash/fa2431bf9d65058fe34e9713e32d60e6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fa2431bf9d65058fe34e9713e32d60e6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11567-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fa2431bf9d65058fe34e9713e32d60e6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fa2431bf9d65058fe34e9713e32d60e6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fa2431bf9d65058fe34e9713e32d60e6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fa2431bf9d65058fe34e9713e32d60e6-Supplemental.pdf
From the beginning of zero-shot learning research, visual attributes have been shown to play an important role. In order to better transfer attribute-based knowledge from known to unknown classes, we argue that an image representation with integrated attribute localization ability would be beneficial for zero-shot learning. To this end, we propose a novel zero-shot representation learning framework that jointly learns discriminative global and local features using only class-level attributes. While a visual-semantic embedding layer learns global features, local features are learned through an attribute prototype network that simultaneously regresses and decorrelates attributes from intermediate features. We show that our locality augmented image representations achieve a new state-of-the-art on three zero-shot learning benchmarks. As an additional benefit, our model points to the visual evidence of the attributes in an image, e.g. for the CUB dataset, confirming the improved attribute localization ability of our image representation.
CrossTransformers: spatially-aware few-shot transfer
https://papers.nips.cc/paper_files/paper/2020/hash/fa28c6cdf8dd6f41a657c3d7caa5c709-Abstract.html
Carl Doersch, Ankush Gupta, Andrew Zisserman
https://papers.nips.cc/paper_files/paper/2020/hash/fa28c6cdf8dd6f41a657c3d7caa5c709-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fa28c6cdf8dd6f41a657c3d7caa5c709-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11568-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fa28c6cdf8dd6f41a657c3d7caa5c709-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fa28c6cdf8dd6f41a657c3d7caa5c709-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fa28c6cdf8dd6f41a657c3d7caa5c709-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fa28c6cdf8dd6f41a657c3d7caa5c709-Supplemental.zip
Given new tasks with very little data---such as new classes in a classification problem or a domain shift in the input---performance of modern vision systems degrades remarkably quickly. In this work, we illustrate how the neural network representations which underpin modern vision systems are subject to supervision collapse, whereby they lose any information that is not necessary for performing the training task, including information that may be necessary for transfer to new tasks or domains. We then propose two methods to mitigate this problem. First, we employ self-supervised learning to encourage general-purpose features that transfer better. Second, we propose a novel Transformer based neural network architecture called CrossTransformers, which can take a small number of labeled images and an unlabeled query, find coarse spatial correspondence between the query and the labeled images, and then infer class membership by computing distances between spatially-corresponding features. The result is a classifier that is more robust to task and domain shift, which we demonstrate via state-of-the-art performance on Meta-Dataset, a recent dataset for evaluating transfer from ImageNet to many other vision datasets.
Learning Latent Space Energy-Based Prior Model
https://papers.nips.cc/paper_files/paper/2020/hash/fa3060edb66e6ff4507886f9912e1ab9-Abstract.html
Bo Pang, Tian Han, Erik Nijkamp, Song-Chun Zhu, Ying Nian Wu
https://papers.nips.cc/paper_files/paper/2020/hash/fa3060edb66e6ff4507886f9912e1ab9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fa3060edb66e6ff4507886f9912e1ab9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11569-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fa3060edb66e6ff4507886f9912e1ab9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fa3060edb66e6ff4507886f9912e1ab9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fa3060edb66e6ff4507886f9912e1ab9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fa3060edb66e6ff4507886f9912e1ab9-Supplemental.pdf
We propose an energy-based model (EBM) in the latent space of a generator model, so that the EBM serves as a prior model that stands on top of the top-down network of the generator model. Both the latent-space EBM and the top-down network can be learned jointly by maximum likelihood, which involves short-run MCMC sampling from both the prior and posterior distributions of the latent vector. Due to the low dimensionality of the latent space and the expressiveness of the top-down network, a simple EBM in latent space can capture regularities in the data effectively, and MCMC sampling in latent space is efficient and mixes well. We show that the learned model exhibits strong performance in terms of image and text generation and anomaly detection. The one-page code can be found in the supplementary materials.
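A minimal PyTorch sketch of short-run Langevin sampling from a latent-space energy-based prior is shown below. The small MLP energy head, the exponential tilting of a standard Gaussian, the step size, and the number of steps are illustrative assumptions, and the paired top-down generator and the posterior sampling used during learning are omitted.

import torch

torch.manual_seed(0)
latent_dim = 16
f = torch.nn.Sequential(torch.nn.Linear(latent_dim, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 1))        # negative energy f(z)

def short_run_langevin(n, steps=20, step_size=0.1):
    # Sample from the unnormalized prior p(z) proportional to exp(f(z)) N(z; 0, I).
    z = torch.randn(n, latent_dim)
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        log_p = f(z).sum() - 0.5 * (z ** 2).sum()       # log of the unnormalized prior
        grad = torch.autograd.grad(log_p, z)[0]
        z = z + 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(z)
    return z.detach()

z_samples = short_run_langevin(8)
print(z_samples.shape)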
SEVIR : A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology
https://papers.nips.cc/paper_files/paper/2020/hash/fa78a16157fed00d7a80515818432169-Abstract.html
Mark Veillette, Siddharth Samsi, Chris Mattioli
https://papers.nips.cc/paper_files/paper/2020/hash/fa78a16157fed00d7a80515818432169-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fa78a16157fed00d7a80515818432169-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11571-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fa78a16157fed00d7a80515818432169-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fa78a16157fed00d7a80515818432169-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fa78a16157fed00d7a80515818432169-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fa78a16157fed00d7a80515818432169-Supplemental.pdf
Modern deep learning approaches have shown promising results in meteorological applications such as precipitation nowcasting, synthetic radar generation, front detection, and several others. In order to effectively train and validate these complex algorithms, large and diverse datasets containing high-resolution imagery are required. Petabytes of weather data, such as from the Geostationary Operational Environmental Satellite (GOES) system and the Next-Generation Radar (NEXRAD) system, are available to the public; however, the size and complexity of these datasets are a hindrance to developing and training deep models. To help address this problem, we introduce the Storm EVent ImagRy (SEVIR) dataset: a single, rich dataset that combines spatially and temporally aligned data from multiple sensors, along with baseline implementations of deep learning models and evaluation metrics, to accelerate new algorithmic innovations. SEVIR is an annotated, curated, and spatio-temporally aligned dataset containing over 10,000 weather events, each consisting of 384 km x 384 km image sequences spanning 4 hours. Images in SEVIR were sampled and aligned across five different data types: three channels (C02, C09, C13) from the GOES-16 Advanced Baseline Imager, NEXRAD vertically integrated liquid mosaics, and GOES-16 Geostationary Lightning Mapper (GLM) flashes. Many events in SEVIR were selected and matched to the NOAA Storm Events database so that additional descriptive information, such as storm impacts and storm descriptions, can be linked to the rich imagery provided by the sensors. We describe the data collection methodology and illustrate the applications of this dataset with two examples of deep learning in meteorology: precipitation nowcasting and synthetic weather radar generation. In addition, we describe a set of metrics that can be used to evaluate the outputs of these models. The SEVIR dataset and baseline implementations of selected applications are available for download.
Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation
https://papers.nips.cc/paper_files/paper/2020/hash/fae0b27c451c728867a567e8c1bb4e53-Abstract.html
Bowen Li, Xiaojuan Qi, Philip Torr, Thomas Lukasiewicz
https://papers.nips.cc/paper_files/paper/2020/hash/fae0b27c451c728867a567e8c1bb4e53-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fae0b27c451c728867a567e8c1bb4e53-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11572-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fae0b27c451c728867a567e8c1bb4e53-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fae0b27c451c728867a567e8c1bb4e53-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fae0b27c451c728867a567e8c1bb4e53-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fae0b27c451c728867a567e8c1bb4e53-Supplemental.pdf
We propose a novel lightweight generative adversarial network for efficient image manipulation using natural language descriptions. To achieve this, a new word-level discriminator is proposed, which provides the generator with fine-grained training feedback at the word level, to facilitate training a lightweight generator that has a small number of parameters but can still correctly focus on specific visual attributes of an image and edit them without affecting other contents that are not described in the text. Furthermore, thanks to the explicit training signal related to each word, the discriminator can also be simplified to have a lightweight structure. Compared with the state of the art, our method has a much smaller number of parameters but still achieves competitive manipulation performance. Extensive experimental results demonstrate that our method can better disentangle different visual attributes and correctly map them to corresponding semantic words, thus achieving more accurate image modification using natural language descriptions.
High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/faff959d885ec0ecf70741a846c34d1d-Abstract.html
Qing Feng , Ben Letham, Hongzi Mao, Eytan Bakshy
https://papers.nips.cc/paper_files/paper/2020/hash/faff959d885ec0ecf70741a846c34d1d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/faff959d885ec0ecf70741a846c34d1d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11573-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/faff959d885ec0ecf70741a846c34d1d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/faff959d885ec0ecf70741a846c34d1d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/faff959d885ec0ecf70741a846c34d1d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/faff959d885ec0ecf70741a846c34d1d-Supplemental.pdf
Contextual policies are used in many settings to customize system parameters and actions to the specifics of a particular setting. In some real-world settings, such as randomized controlled trials or A/B tests, it may not be possible to measure policy outcomes at the level of context—we observe only aggregate rewards across a distribution of contexts. This makes policy optimization much more difficult because we must solve a high-dimensional optimization problem over the entire space of contextual policies, for which existing optimization methods are not suitable. We develop effective models that leverage the structure of the search space to enable contextual policy optimization directly from the aggregate rewards using Bayesian optimization. We use a collection of simulation studies to characterize the performance and robustness of the models, and show that our approach of inferring a low-dimensional context embedding performs best. Finally, we show successful contextual policy optimization in a real-world video bitrate policy problem.
Model Fusion via Optimal Transport
https://papers.nips.cc/paper_files/paper/2020/hash/fb2697869f56484404c8ceee2985b01d-Abstract.html
Sidak Pal Singh, Martin Jaggi
https://papers.nips.cc/paper_files/paper/2020/hash/fb2697869f56484404c8ceee2985b01d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb2697869f56484404c8ceee2985b01d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11574-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb2697869f56484404c8ceee2985b01d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb2697869f56484404c8ceee2985b01d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb2697869f56484404c8ceee2985b01d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb2697869f56484404c8ceee2985b01d-Supplemental.pdf
The code is available at the following link: https://github.com/sidak/otfusion.
On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
https://papers.nips.cc/paper_files/paper/2020/hash/fb2e203234df6dee15934e448ee88971-Abstract.html
Kaiqing Zhang, Bin Hu, Tamer Basar
https://papers.nips.cc/paper_files/paper/2020/hash/fb2e203234df6dee15934e448ee88971-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb2e203234df6dee15934e448ee88971-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11575-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb2e203234df6dee15934e448ee88971-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb2e203234df6dee15934e448ee88971-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb2e203234df6dee15934e448ee88971-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb2e203234df6dee15934e448ee88971-Supplemental.zip
Reinforcement learning (RL) algorithms can fail to generalize due to the gap between the simulation and the real world. One standard remedy is to use robust adversarial RL (RARL) that accounts for this gap during the policy training, by modeling the gap as an adversary against the training agent. In this work, we reexamine the effectiveness of RARL under a fundamental robust control setting: the linear quadratic (LQ) case. We first observe that the popular RARL scheme that greedily alternates agents’ updates can easily destabilize the system. Motivated by this, we propose several other policy-based RARL algorithms whose convergence behaviors are then studied both empirically and theoretically. We find: i) the conventional RARL framework (Pinto et al., 2017) can learn a destabilizing policy if the initial policy does not enjoy the robust stability property against the adversary; and ii) with robustly stabilizing initializations, our proposed double-loop RARL algorithm provably converges to the global optimal cost while maintaining robust stability on-the-fly. We also examine the stability and convergence issues of other variants of policy-based RARL algorithms, and then discuss several ways to learn robustly stabilizing initializations. From a robust control perspective, we aim to provide some new and critical angles about RARL, by identifying and addressing the stability issues in this fundamental LQ setting in continuous control. Our results make an initial attempt toward better theoretical understandings of policy-based RARL, the core approach in Pinto et al., 2017.
Learning Individually Inferred Communication for Multi-Agent Cooperation
https://papers.nips.cc/paper_files/paper/2020/hash/fb2fcd534b0ff3bbed73cc51df620323-Abstract.html
Ziluo Ding, Tiejun Huang, Zongqing Lu
https://papers.nips.cc/paper_files/paper/2020/hash/fb2fcd534b0ff3bbed73cc51df620323-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb2fcd534b0ff3bbed73cc51df620323-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11576-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb2fcd534b0ff3bbed73cc51df620323-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb2fcd534b0ff3bbed73cc51df620323-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb2fcd534b0ff3bbed73cc51df620323-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb2fcd534b0ff3bbed73cc51df620323-Supplemental.pdf
Communication lays the foundation for human cooperation. It is also crucial for multi-agent cooperation. However, existing work focuses on broadcast communication, which is not only impractical but also leads to information redundancy that could even impair the learning process. To tackle these difficulties, we propose Individually Inferred Communication (I2C), a simple yet effective model to enable agents to learn a prior for agent-agent communication. The prior knowledge is learned via causal inference and realized by a feed-forward neural network that maps the agent's local observation to a belief about who to communicate with. The influence of one agent on another is inferred via the joint action-value function in multi-agent reinforcement learning and quantified to label the necessity of agent-agent communication. Furthermore, the agent policy is regularized to better exploit communicated messages. Empirically, we show that I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios, comparing to existing methods.
Set2Graph: Learning Graphs From Sets
https://papers.nips.cc/paper_files/paper/2020/hash/fb4ab556bc42d6f0ee0f9e24ec4d1af0-Abstract.html
Hadar Serviansky, Nimrod Segol, Jonathan Shlomi, Kyle Cranmer, Eilam Gross, Haggai Maron, Yaron Lipman
https://papers.nips.cc/paper_files/paper/2020/hash/fb4ab556bc42d6f0ee0f9e24ec4d1af0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb4ab556bc42d6f0ee0f9e24ec4d1af0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11577-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb4ab556bc42d6f0ee0f9e24ec4d1af0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb4ab556bc42d6f0ee0f9e24ec4d1af0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb4ab556bc42d6f0ee0f9e24ec4d1af0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb4ab556bc42d6f0ee0f9e24ec4d1af0-Supplemental.pdf
This paper advocates a family of neural network models for learning Set2Graph functions that is both practical and of maximal expressive power (universal), i.e., it can approximate arbitrary continuous Set2Graph functions over compact sets. Testing these models on different machine learning tasks, mainly an application to particle physics, we find them favorable compared to existing baselines.
Graph Random Neural Networks for Semi-Supervised Learning on Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/fb4c835feb0a65cc39739320d7a51c02-Abstract.html
Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, Jie Tang
https://papers.nips.cc/paper_files/paper/2020/hash/fb4c835feb0a65cc39739320d7a51c02-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb4c835feb0a65cc39739320d7a51c02-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11578-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb4c835feb0a65cc39739320d7a51c02-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb4c835feb0a65cc39739320d7a51c02-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb4c835feb0a65cc39739320d7a51c02-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb4c835feb0a65cc39739320d7a51c02-Supplemental.pdf
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework—GRAPH RANDOM NEURAL NETWORKS (GRAND)—to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation. Then we leverage consistency regularization to optimize the prediction consistency of unlabeled nodes across different data augmentations. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand.
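The random propagation step is easy to sketch: randomly drop whole node feature vectors (DropNode), propagate with the symmetrically normalized adjacency for several orders, and average; two such augmentations would then be tied together by a consistency loss on the model's predictions. The tiny cycle graph, drop probability, and propagation order below are illustrative choices for this sketch.

import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # a toy 6-node cycle graph
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
X = rng.normal(size=(n, d))

A_hat = A + np.eye(n)                                      # add self-loops
deg = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))               # D^{-1/2} (A + I) D^{-1/2}

def random_propagation(X, K=3, drop_prob=0.5):
    mask = (rng.uniform(size=(X.shape[0], 1)) > drop_prob) / (1 - drop_prob)
    H = X * mask                                           # DropNode: drop whole rows, rescale the rest
    out, P = H.copy(), H.copy()
    for _ in range(K):
        P = A_norm @ P
        out += P
    return out / (K + 1)                                   # average of propagation orders 0..K

aug1, aug2 = random_propagation(X), random_propagation(X)  # two stochastic augmentations
print(np.linalg.norm(aug1 - aug2))                         # predictions on these are tied by a consistency loss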
Gradient Boosted Normalizing Flows
https://papers.nips.cc/paper_files/paper/2020/hash/fb5d9e209ebda9ab6556a31639190622-Abstract.html
Robert Giaquinto, Arindam Banerjee
https://papers.nips.cc/paper_files/paper/2020/hash/fb5d9e209ebda9ab6556a31639190622-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb5d9e209ebda9ab6556a31639190622-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11579-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb5d9e209ebda9ab6556a31639190622-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb5d9e209ebda9ab6556a31639190622-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb5d9e209ebda9ab6556a31639190622-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb5d9e209ebda9ab6556a31639190622-Supplemental.pdf
By chaining a sequence of differentiable invertible transformations, normalizing flows (NF) provide an expressive method of posterior approximation, exact density evaluation, and sampling. The trend in normalizing flow literature has been to devise deeper, more complex transformations to achieve greater flexibility. We propose an alternative: Gradient Boosted Normalizing Flows (GBNF) model a density by successively adding new NF components with gradient boosting. Under the boosting framework, each new NF component optimizes a weighted likelihood objective, resulting in new components that are fit to the suitable residuals of the previously trained components. The GBNF formulation results in a mixture model structure, whose flexibility increases as more components are added. Moreover, GBNFs offer a wider, as opposed to strictly deeper, approach that improves existing NFs at the cost of additional training---not more complex transformations. We demonstrate the effectiveness of this technique for density estimation and, by coupling GBNF with a variational autoencoder, generative modeling of images. Our results show that GBNFs outperform their non-boosted analog, and, in some cases, produce better results with smaller, simpler flows.
Open Graph Benchmark: Datasets for Machine Learning on Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec
https://papers.nips.cc/paper_files/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb60d411a5c5b72b2e7d3527cfc84fd0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11580-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb60d411a5c5b72b2e7d3527cfc84fd0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb60d411a5c5b72b2e7d3527cfc84fd0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb60d411a5c5b72b2e7d3527cfc84fd0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb60d411a5c5b72b2e7d3527cfc84fd0-Supplemental.pdf
We present the Open Graph Benchmark (OGB), a diverse set of challenging and realistic benchmark datasets to facilitate scalable, robust, and reproducible graph machine learning (ML) research. OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains, ranging from social and information networks to biological networks, molecular graphs, source code ASTs, and knowledge graphs. For each dataset, we provide a unified evaluation protocol using meaningful application-specific data splits and evaluation metrics. In addition to building the datasets, we also perform extensive benchmark experiments for each dataset. Our experiments suggest that OGB datasets present significant challenges of scalability to large-scale graphs and out-of-distribution generalization under realistic data splits, indicating fruitful opportunities for future research. Finally, OGB provides an automated end-to-end graph ML pipeline that simplifies and standardizes the process of graph data loading, experimental setup, and model evaluation. OGB will be regularly updated and welcomes inputs from the community. OGB datasets as well as data loaders, evaluation scripts, baseline code, and leaderboards are publicly available at https://ogb.stanford.edu .
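For reference, loading one of the node-property-prediction datasets looks roughly like the sketch below, assuming the ogb package is installed; the class and method names follow the OGB documentation at the time of writing and may differ across versions, so treat the exact API as an assumption rather than a guarantee.

# Assumes: pip install ogb. Class and method names are taken from the OGB docs
# and may change between versions.
from ogb.nodeproppred import NodePropPredDataset, Evaluator

dataset = NodePropPredDataset(name="ogbn-arxiv")
split_idx = dataset.get_idx_split()            # standardized train/valid/test node indices
graph, labels = dataset[0]                     # graph dict (edge_index, node_feat, ...) and node labels

evaluator = Evaluator(name="ogbn-arxiv")       # dataset-specific metric (accuracy for ogbn-arxiv)
# result = evaluator.eval({"y_true": labels[split_idx["test"]],
#                          "y_pred": predictions_from_your_model})
print(graph["num_nodes"], labels.shape)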
Towards Understanding Hierarchical Learning: Benefits of Neural Representations
https://papers.nips.cc/paper_files/paper/2020/hash/fb647ca6672b0930e9d00dc384d8b16f-Abstract.html
Minshuo Chen, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, Richard Socher
https://papers.nips.cc/paper_files/paper/2020/hash/fb647ca6672b0930e9d00dc384d8b16f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11581-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-Supplemental.pdf
Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of the intermediate representations is not explained by recent theories that relate them to ``shallow learners'' such as kernels. In this work, we demonstrate that intermediate \emph{neural representations} add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that neural representation can achieve improved sample complexities compared with the raw input: For learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimensions, neural representation requires only $\widetilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\widetilde{O}(d^{p-1})$. We contrast our result with a lower bound showing that neural representations do not improve over the raw input (in the infinite width limit), when the trainable network is instead a neural tangent kernel. Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
Texture Interpolation for Probing Visual Perception
https://papers.nips.cc/paper_files/paper/2020/hash/fba9d88164f3e2d9109ee770223212a0-Abstract.html
Jonathan Vacher, Aida Davila, Adam Kohn, Ruben Coen-Cagli
https://papers.nips.cc/paper_files/paper/2020/hash/fba9d88164f3e2d9109ee770223212a0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fba9d88164f3e2d9109ee770223212a0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11582-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fba9d88164f3e2d9109ee770223212a0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fba9d88164f3e2d9109ee770223212a0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fba9d88164f3e2d9109ee770223212a0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fba9d88164f3e2d9109ee770223212a0-Supplemental.pdf
Texture synthesis models are important tools for understanding visual processing. In particular, statistical approaches based on neurally relevant features have been instrumental in understanding aspects of visual perception and of neural coding. New deep learning-based approaches further improve the quality of synthetic textures. Yet, it is still unclear why deep texture synthesis performs so well, and applications of this new framework to probe visual perception are scarce. Here, we show that distributions of deep convolutional neural network (CNN) activations of a texture are well described by elliptical distributions and therefore, following optimal transport theory, constraining their mean and covariance is sufficient to generate new texture samples. Then, we propose using the natural geodesics (i.e., the shortest paths between two points) arising from the optimal transport metric to interpolate between arbitrary textures. Compared to other CNN-based approaches, our interpolation method appears to match more closely the geometry of texture perception, and our mathematical framework is better suited to study its statistical nature. We apply our method by measuring the perceptual scale associated with the interpolation parameter in human observers, and the neural sensitivity of different areas of visual cortex in macaque monkeys.
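For reference, the geodesic interpolation the abstract appeals to has a well-known closed form in the Gaussian special case of elliptical distributions. The sketch below states that standard formula (with means $m_0, m_1$ and covariances $\Sigma_0, \Sigma_1$ standing in for the paper's CNN-feature statistics); it is a textbook result, not quoted from the paper.

```latex
% Wasserstein-2 geodesic between N(m_0, \Sigma_0) and N(m_1, \Sigma_1),
% the standard Gaussian instance of optimal-transport interpolation.
\[
  m_t = (1-t)\, m_0 + t\, m_1, \qquad
  \Sigma_t = \big((1-t)I + tA\big)\, \Sigma_0\, \big((1-t)I + tA\big),
\]
\[
  \text{where } A = \Sigma_0^{-1/2}\big(\Sigma_0^{1/2}\Sigma_1\Sigma_0^{1/2}\big)^{1/2}\Sigma_0^{-1/2},
  \qquad t \in [0,1].
\]
```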
Hierarchical Neural Architecture Search for Deep Stereo Matching
https://papers.nips.cc/paper_files/paper/2020/hash/fc146be0b230d7e0a92e66a6114b840d-Abstract.html
Xuelian Cheng, Yiran Zhong, Mehrtash Harandi, Yuchao Dai, Xiaojun Chang, Hongdong Li, Tom Drummond, Zongyuan Ge
https://papers.nips.cc/paper_files/paper/2020/hash/fc146be0b230d7e0a92e66a6114b840d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc146be0b230d7e0a92e66a6114b840d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11583-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc146be0b230d7e0a92e66a6114b840d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc146be0b230d7e0a92e66a6114b840d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc146be0b230d7e0a92e66a6114b840d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc146be0b230d7e0a92e66a6114b840d-Supplemental.pdf
To reduce the human effort in neural network design, Neural Architecture Search (NAS) has been applied with remarkable success to various high-level vision tasks such as classification and semantic segmentation. The underlying idea of NAS is straightforward: by allowing the network to choose among a set of operations (e.g., convolutions with different filter sizes), one can find an architecture that is better adapted to the problem at hand. However, so far the success of NAS has not been enjoyed by low-level geometric vision tasks such as stereo matching. This is partly because state-of-the-art deep stereo matching networks, designed by humans, are already enormous in size. Directly applying NAS to such massive structures is computationally prohibitive with currently available mainstream computing resources. In this paper, we propose the first \emph{end-to-end} hierarchical NAS framework for deep stereo matching by incorporating task-specific human knowledge into the neural architecture search framework. Specifically, following the gold-standard pipeline for deep stereo matching (i.e., feature extraction, feature volume construction, and dense matching), we optimize the architectures of the entire pipeline jointly. Extensive experiments show that our searched network outperforms all state-of-the-art deep stereo matching architectures and ranks first in accuracy on the KITTI stereo 2012, 2015, and Middlebury benchmarks, as well as first on the SceneFlow dataset, with substantial improvements in network size and inference speed. Code available at https://github.com/XuelianCheng/LEAStereo.
MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models
https://papers.nips.cc/paper_files/paper/2020/hash/fc152e73692bc3c934d248f639d9e963-Abstract.html
Sourav Biswas, Jerry Liu, Kelvin Wong, Shenlong Wang, Raquel Urtasun
https://papers.nips.cc/paper_files/paper/2020/hash/fc152e73692bc3c934d248f639d9e963-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc152e73692bc3c934d248f639d9e963-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11584-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc152e73692bc3c934d248f639d9e963-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc152e73692bc3c934d248f639d9e963-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc152e73692bc3c934d248f639d9e963-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc152e73692bc3c934d248f639d9e963-Supplemental.zip
We present a novel compression algorithm for reducing the storage of LiDAR sensory data streams. Our model exploits spatio-temporal relationships across multiple LiDAR sweeps to reduce the bitrate of both geometry and intensity values. Towards this goal, we propose a novel conditional entropy model that models the probabilities of the octree symbols by considering both coarse level geometry and previous sweeps’ geometric and intensity information. We then exploit the learned probability to encode the full data stream into a compact one. Our experiments demonstrate that our method significantly reduces the joint geometry and intensity bitrate over prior state-of-the-art LiDAR compression methods, with a reduction of 7–17% and 15–35% on the UrbanCity and SemanticKITTI datasets respectively.
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy
https://papers.nips.cc/paper_files/paper/2020/hash/fc2022c89b61c76bbef978f1370660bf-Abstract.html
Edward Moroshko, Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Nati Srebro, Daniel Soudry
https://papers.nips.cc/paper_files/paper/2020/hash/fc2022c89b61c76bbef978f1370660bf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc2022c89b61c76bbef978f1370660bf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11585-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc2022c89b61c76bbef978f1370660bf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc2022c89b61c76bbef978f1370660bf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc2022c89b61c76bbef978f1370660bf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc2022c89b61c76bbef978f1370660bf-Supplemental.pdf
We provide a detailed asymptotic study of gradient flow trajectories and their implicit optimization bias when minimizing the exponential loss over "diagonal linear networks". This is the simplest model displaying a transition between "kernel" and non-kernel ("rich" or "active") regimes. We show how the transition is controlled by the relationship between the initialization scale and how accurately we minimize the training loss. Our results indicate that some limit behaviors of gradient descent only kick in at ridiculous training accuracies (well beyond $10^{-100}$). Moreover, the implicit bias at reasonable initialization scales and training accuracies is more complex and not captured by these limits.
Focus of Attention Improves Information Transfer in Visual Features
https://papers.nips.cc/paper_files/paper/2020/hash/fc2dc7d20994a777cfd5e6de734fe254-Abstract.html
Matteo Tiezzi, Stefano Melacci, Alessandro Betti, Marco Maggini, Marco Gori
https://papers.nips.cc/paper_files/paper/2020/hash/fc2dc7d20994a777cfd5e6de734fe254-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc2dc7d20994a777cfd5e6de734fe254-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11586-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc2dc7d20994a777cfd5e6de734fe254-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc2dc7d20994a777cfd5e6de734fe254-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc2dc7d20994a777cfd5e6de734fe254-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc2dc7d20994a777cfd5e6de734fe254-Supplemental.zip
Unsupervised learning from continuous visual streams is a challenging problem that cannot be naturally and efficiently managed in the classic batch-mode setting of computation. The information stream must be carefully processed according to an appropriate spatio-temporal distribution of the visual data, whereas most learning approaches commonly assume a uniform probability density. In this paper we focus on unsupervised learning for transferring visual information in a truly online setting by using a computational model that is inspired by the principle of least action in physics. The maximization of the mutual information is carried out by a temporal process which yields online estimation of the entropy terms. The model, which is based on second-order differential equations, maximizes the information transfer from the input to a discrete space of symbols related to the visual features of the input, whose computation is supported by hidden neurons. In order to better structure the input probability distribution, we use a human-like focus of attention model that, coherently with the information maximization model, is also based on second-order differential equations. We provide experimental results to support the theory by showing that the spatio-temporal filtering induced by the focus of attention allows the system to globally transfer more information from the input stream over the focused areas and, in some contexts, over whole frames, compared to the unfiltered case, which yields uniform probability distributions.
Auditing Differentially Private Machine Learning: How Private is Private SGD?
https://papers.nips.cc/paper_files/paper/2020/hash/fc4ddc15f9f4b4b06ef7844d6bb53abf-Abstract.html
Matthew Jagielski, Jonathan Ullman, Alina Oprea
https://papers.nips.cc/paper_files/paper/2020/hash/fc4ddc15f9f4b4b06ef7844d6bb53abf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc4ddc15f9f4b4b06ef7844d6bb53abf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11587-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc4ddc15f9f4b4b06ef7844d6bb53abf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc4ddc15f9f4b4b06ef7844d6bb53abf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc4ddc15f9f4b4b06ef7844d6bb53abf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc4ddc15f9f4b4b06ef7844d6bb53abf-Supplemental.zip
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against data poisoning, our use of it as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms that we believe has the potential to complement and influence analytical work on differential privacy.
A Dynamical Central Limit Theorem for Shallow Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/fc5b3186f1cf0daece964f78259b7ba0-Abstract.html
Zhengdao Chen, Grant Rotskoff, Joan Bruna, Eric Vanden-Eijnden
https://papers.nips.cc/paper_files/paper/2020/hash/fc5b3186f1cf0daece964f78259b7ba0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc5b3186f1cf0daece964f78259b7ba0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11588-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc5b3186f1cf0daece964f78259b7ba0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc5b3186f1cf0daece964f78259b7ba0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc5b3186f1cf0daece964f78259b7ba0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc5b3186f1cf0daece964f78259b7ba0-Supplemental.pdf
Recent theoretical work has characterized the dynamics and convergence properties for wide shallow neural networks trained via gradient descent; the asymptotic regime in which the number of parameters tends towards infinity has been dubbed the "mean-field" limit. At initialization, the randomly sampled parameters lead to a deviation from the mean-field limit that is dictated by the classical central limit theorem (CLT). However, the dynamics of training introduces correlations among the parameters raising the question of how the fluctuations evolve during training. Here, we analyze the mean-field dynamics as a Wasserstein gradient flow and prove that the deviations from the mean-field evolution scaled by the width, in the width-asymptotic limit, remain bounded throughout training. This observation has implications for both the approximation rate and the generalization: the upper bound we obtain is controlled by a Monte-Carlo type resampling error, which importantly does not depend on dimension. We also relate the bound on the fluctuations to the total variation norm of the measure to which the dynamics converges, which in turn controls the generalization error.
Measuring Systematic Generalization in Neural Proof Generation with Transformers
https://papers.nips.cc/paper_files/paper/2020/hash/fc84ad56f9f547eb89c72b9bac209312-Abstract.html
Nicolas Gontier, Koustuv Sinha, Siva Reddy, Chris Pal
https://papers.nips.cc/paper_files/paper/2020/hash/fc84ad56f9f547eb89c72b9bac209312-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11589-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-Supplemental.pdf
We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in the form of natural language. We investigate their systematic generalization abilities on a logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate natural language proofs. We test the generated proofs for logical consistency, along with the accuracy of the final inference. We observe length-generalization issues when evaluated on longer-than-trained sequences. However, we observe TLMs improve their generalization performance after being exposed to longer, exhaustive proofs. In addition, we discover that TLMs are able to generalize better using backward-chaining proofs compared to their forward-chaining counterparts, while they find it easier to generate forward chaining proofs. We observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs. This suggests that Transformers have efficient internal reasoning strategies that are harder to interpret. These results highlight the systematic generalization behavior of TLMs in the context of logical reasoning, and we believe this work motivates deeper inspection of their underlying reasoning strategies.
Big Self-Supervised Models are Strong Semi-Supervised Learners
https://papers.nips.cc/paper_files/paper/2020/hash/fcbc95ccdd551da181207c0c1400c655-Abstract.html
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey E. Hinton
https://papers.nips.cc/paper_files/paper/2020/hash/fcbc95ccdd551da181207c0c1400c655-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fcbc95ccdd551da181207c0c1400c655-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11590-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fcbc95ccdd551da181207c0c1400c655-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fcbc95ccdd551da181207c0c1400c655-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fcbc95ccdd551da181207c0c1400c655-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fcbc95ccdd551da181207c0c1400c655-Supplemental.pdf
One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels ($\le$13 labeled images per class) using ResNet-50, a 10X improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.
Learning from Label Proportions: A Mutual Contamination Framework
https://papers.nips.cc/paper_files/paper/2020/hash/fcde14913c766cf307c75059e0e89af5-Abstract.html
Clayton Scott, Jianxin Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/fcde14913c766cf307c75059e0e89af5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fcde14913c766cf307c75059e0e89af5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11591-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fcde14913c766cf307c75059e0e89af5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fcde14913c766cf307c75059e0e89af5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fcde14913c766cf307c75059e0e89af5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fcde14913c766cf307c75059e0e89af5-Supplemental.pdf
Learning from label proportions (LLP) is a weakly supervised setting for classification in which unlabeled training instances are grouped into bags, and each bag is annotated with the proportion of each class occurring in that bag. Prior work on LLP has yet to establish a consistent learning procedure, nor does there exist a theoretically justified, general purpose training criterion. In this work we address these two issues by posing LLP in terms of mutual contamination models (MCMs), which have recently been applied successfully to study various other weak supervision settings. In the process, we establish several novel technical results for MCMs, including unbiased losses and generalization error bounds under non-iid sampling plans. We also point out the limitations of a common experimental setting for LLP, and propose a new one based on our MCM framework.
Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/fcf55a303b71b84d326fb1d06e332a26-Abstract.html
Geoff Pleiss, Martin Jankowiak, David Eriksson, Anil Damle, Jacob Gardner
https://papers.nips.cc/paper_files/paper/2020/hash/fcf55a303b71b84d326fb1d06e332a26-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fcf55a303b71b84d326fb1d06e332a26-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11592-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fcf55a303b71b84d326fb1d06e332a26-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fcf55a303b71b84d326fb1d06e332a26-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fcf55a303b71b84d326fb1d06e332a26-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fcf55a303b71b84d326fb1d06e332a26-Supplemental.pdf
Matrix square roots and their inverses arise frequently in machine learning, e.g., when sampling from high-dimensional Gaussians N(0,K) or “whitening” a vector b against covariance matrix K. While existing methods typically require O(N^3) computation, we introduce a highly-efficient quadratic-time algorithm for computing K^{1/2}b, K^{-1/2}b, and their derivatives through matrix-vector multiplication (MVMs). Our method combines Krylov subspace methods with a rational approximation and typically achieves 4 decimal places of accuracy with fewer than 100 MVMs. Moreover, the backward pass requires little additional computation. We demonstrate our method's applicability on matrices as large as 50,000 by 50,000 - well beyond traditional methods - with little approximation error. Applying this increased scalability to variational Gaussian processes, Bayesian optimization, and Gibbs sampling results in more powerful models with higher accuracy. In particular, we perform variational GP inference with up to 10,000 inducing points and perform Gibbs sampling on a 25,000-dimensional problem.
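The abstract describes computing $K^{1/2}b$ and $K^{-1/2}b$ using only matrix-vector multiplications. A minimal sketch of the general MVM-only idea, using a plain Lanczos approximation of the matrix function rather than the paper's rational-approximation scheme (so this is an illustrative stand-in, not the paper's algorithm), could look like this:

```python
# Approximate K^{1/2} b with m Lanczos steps: build an orthonormal Krylov basis Q
# and tridiagonal T, then f(K) b ~= ||b|| * Q f(T) e_1.  Only matvecs with K are needed.
import numpy as np

def lanczos_sqrt_mvm(matvec, b, m=50):
    n = b.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m)
    q, q_prev = b / np.linalg.norm(b), np.zeros(n)
    for j in range(m):
        Q[:, j] = q
        w = matvec(q)
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
        w = w - Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:
            m = j + 1
            break
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[: m - 1], 1) + np.diag(beta[: m - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    f_T_e1 = evecs @ (np.sqrt(np.maximum(evals, 0)) * evecs[0])  # f(T) e_1
    return np.linalg.norm(b) * (Q[:, :m] @ f_T_e1)

# toy check against a dense eigendecomposition reference
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
K = A @ A.T + 200 * np.eye(200)          # well-conditioned SPD matrix
b = rng.standard_normal(200)
approx = lanczos_sqrt_mvm(lambda v: K @ v, b, m=60)
w, V = np.linalg.eigh(K)
exact = V @ (np.sqrt(w) * (V.T @ b))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))   # small relative error
```

Here the small eigendecomposition of $T$ is cheap; the dominant cost is the $m$ matrix-vector products with $K$, which is the access pattern the abstract emphasizes.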
Self-Adaptively Learning to Demoiré from Focused and Defocused Image Pairs
https://papers.nips.cc/paper_files/paper/2020/hash/fd348179ec677c5560d4cd9c3ffb6cd9-Abstract.html
Lin Liu, Shanxin Yuan, Jianzhuang Liu, Liping Bao, Gregory Slabaugh, Qi Tian
https://papers.nips.cc/paper_files/paper/2020/hash/fd348179ec677c5560d4cd9c3ffb6cd9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd348179ec677c5560d4cd9c3ffb6cd9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11593-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd348179ec677c5560d4cd9c3ffb6cd9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd348179ec677c5560d4cd9c3ffb6cd9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd348179ec677c5560d4cd9c3ffb6cd9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd348179ec677c5560d4cd9c3ffb6cd9-Supplemental.pdf
Moiré artifacts are common in digital photography, resulting from the interference between high-frequency scene content and the color filter array of the camera. Existing deep learning-based demoiréing methods trained on large scale datasets are limited in handling various complex moiré patterns, and mainly focus on demoiréing of photos taken of digital displays. Moreover, obtaining moiré-free ground-truth in natural scenes is difficult but needed for training. In this paper, we propose a self-adaptive learning method for demoiréing a high-frequency image, with the help of an additional defocused moiré-free blur image. Given an image degraded with moiré artifacts and a moiré-free blur image, our network predicts a moiré-free clean image and a blur kernel with a self-adaptive strategy that does not require an explicit training stage, instead performing test-time adaptation. Our model has two sub-networks and works iteratively. During each iteration, one sub-network takes the moiré image as input, removing moiré patterns and restoring image details, and the other sub-network estimates the blur kernel from the blur image. The two sub-networks are jointly optimized. Extensive experiments demonstrate that our method outperforms state-of-the-art methods and can produce high-quality demoiréd results. It can generalize well to the task of removing moiré artifacts caused by display screens. In addition, we build a new moiré dataset, including images with screen and texture moiré artifacts. As far as we know, this is the first dataset with real texture moiré patterns.
Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/fd4f21f2556dad0ea8b7a5c04eabebda-Abstract.html
Nathan Kallus, Angela Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/fd4f21f2556dad0ea8b7a5c04eabebda-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd4f21f2556dad0ea8b7a5c04eabebda-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11594-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd4f21f2556dad0ea8b7a5c04eabebda-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd4f21f2556dad0ea8b7a5c04eabebda-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd4f21f2556dad0ea8b7a5c04eabebda-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd4f21f2556dad0ea8b7a5c04eabebda-Supplemental.zip
Off-policy evaluation of sequential decision policies from observational data is necessary in applications of batch reinforcement learning such as education and healthcare. In such settings, however, unobserved variables confound observed actions, rendering exact evaluation of new policies impossible, i.e., unidentifiable. We develop a robust approach that estimates sharp bounds on the (unidentifiable) value of a given policy in an infinite-horizon problem given data from another policy with unobserved confounding, subject to a sensitivity model. We consider stationary unobserved confounding and compute bounds by optimizing over the set of all stationary state-occupancy ratios that agree with a new partially identified estimating equation and the sensitivity model. We prove convergence to the sharp bounds as we collect more confounded data. Although checking set membership is a linear program, the support function is given by a difficult nonconvex optimization problem. We develop approximations based on nonconvex projected gradient descent and demonstrate the resulting bounds empirically.
Model Class Reliance for Random Forests
https://papers.nips.cc/paper_files/paper/2020/hash/fd512441a1a791770a6fa573d688bff5-Abstract.html
Gavin Smith, Roberto Mansilla, James Goulding
https://papers.nips.cc/paper_files/paper/2020/hash/fd512441a1a791770a6fa573d688bff5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd512441a1a791770a6fa573d688bff5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11595-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd512441a1a791770a6fa573d688bff5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd512441a1a791770a6fa573d688bff5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd512441a1a791770a6fa573d688bff5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd512441a1a791770a6fa573d688bff5-Supplemental.pdf
Variable Importance (VI) has traditionally been cast as the process of estimating each variable's contribution to a predictive model's overall performance. Analysis of a single model instance, however, guarantees no insight into a variable's relevance to the underlying generative processes. Recent research has sought to address this concern via analysis of Rashomon sets - sets of alternative model instances that exhibit equivalent predictive performance to some reference model, but which take different functional forms. Measures such as Model Class Reliance (MCR) have been proposed, that are computed against Rashomon sets, in order to ascertain how much a variable must be relied on to make robust predictions, or whether alternatives exist. If the MCR range is tight, we have no choice but to use a variable; if the range is wide, then there exist competing, perhaps fairer, models that provide alternative explanations of the phenomena being examined. Applications are wide, from enabling construction of 'fairer' models in areas such as recidivism, health analytics and ethical marketing. Tractable estimation of MCR for non-linear models is currently restricted to Kernel Regression under squared loss [Fisher et al., 2019]. In this paper we introduce a new technique that extends computation of Model Class Reliance (MCR) to Random Forest classifiers and regressors. The proposed approach addresses a number of open research questions, and in contrast to prior Kernel SVM MCR estimation, runs in linearithmic rather than polynomial time. Taking a fundamentally different approach from previous work, we provide a solution for this important model class, identifying situations where irrelevant covariates do not improve predictions.
Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
https://papers.nips.cc/paper_files/paper/2020/hash/fd5ac6ce504b74460b93610f39e481f7-Abstract.html
Arun Suggala, Praneeth Netrapalli
https://papers.nips.cc/paper_files/paper/2020/hash/fd5ac6ce504b74460b93610f39e481f7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd5ac6ce504b74460b93610f39e481f7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11596-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd5ac6ce504b74460b93610f39e481f7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd5ac6ce504b74460b93610f39e481f7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd5ac6ce504b74460b93610f39e481f7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd5ac6ce504b74460b93610f39e481f7-Supplemental.pdf
We consider the problem of online learning and its application to solving minimax games. For the online learning problem, Follow the Perturbed Leader (FTPL) is a widely studied algorithm which enjoys the optimal $O(T^{1/2})$ \emph{worst case} regret guarantee for both convex and nonconvex losses. In this work, we show that when the sequence of loss functions is \emph{predictable}, a simple modification of FTPL which incorporates optimism can achieve better regret guarantees, while retaining the optimal worst-case regret guarantee for unpredictable sequences. A key challenge in obtaining these tighter regret bounds is the stochasticity and optimism in the algorithm, which requires different analysis techniques than those commonly used in the analysis of FTPL. The key ingredient we utilize in our analysis is the dual view of perturbation as regularization. While our algorithm has several applications, we consider the specific application of minimax games. For solving smooth convex-concave games, our algorithm only requires access to a linear optimization oracle. For Lipschitz and smooth nonconvex-nonconcave games, our algorithm requires access to an optimization oracle which computes the perturbed best response. In both these settings, our algorithm solves the game up to an accuracy of $O(T^{-1/2})$ using $T$ calls to the optimization oracle. An important feature of our algorithm is that it is highly parallelizable and requires only $O(T^{1/2})$ iterations, with each iteration making $O(T^{1/2})$ parallel calls to the optimization oracle.
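As context for the FTPL template the abstract builds on, here is a minimal sketch of vanilla Follow the Perturbed Leader in the experts setting (losses in $[0,1]$, exponential perturbations). The optimistic variant and the minimax-game oracle version described in the paper are not implemented here, and the perturbation scale is a standard heuristic rather than the paper's choice.

```python
# Vanilla FTPL for prediction with expert advice: each round, play the expert
# minimizing cumulative loss minus a fresh exponential perturbation.
import numpy as np

def ftpl_experts(loss_matrix, eta=None, seed=0):
    """loss_matrix: (T, d) array of per-round losses in [0, 1] for d experts."""
    rng = np.random.default_rng(seed)
    T, d = loss_matrix.shape
    if eta is None:
        eta = np.sqrt(T)                     # heuristic scale; O(sqrt(T)) regret up to log factors
    cum = np.zeros(d)                        # cumulative observed loss per expert
    total_loss = 0.0
    for t in range(T):
        noise = rng.exponential(scale=eta, size=d)   # fresh perturbation every round
        choice = int(np.argmin(cum - noise))         # follow the perturbed leader
        total_loss += loss_matrix[t, choice]
        cum += loss_matrix[t]
    best_fixed = loss_matrix.sum(axis=0).min()
    return total_loss - best_fixed           # regret against the best fixed expert

rng = np.random.default_rng(1)
losses = rng.random((2000, 2)) * np.array([1.0, 0.9])   # expert 1 is slightly better on average
print(ftpl_experts(losses))                              # regret grows sublinearly in T
```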
Agnostic $Q$-learning with Function Approximation in Deterministic Systems: Near-Optimal Bounds on Approximation Error and Sample Complexity
https://papers.nips.cc/paper_files/paper/2020/hash/fd5c905bcd8c3348ad1b35d7231ee2b1-Abstract.html
Simon S. Du, Jason D. Lee, Gaurav Mahajan, Ruosong Wang
https://papers.nips.cc/paper_files/paper/2020/hash/fd5c905bcd8c3348ad1b35d7231ee2b1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd5c905bcd8c3348ad1b35d7231ee2b1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11597-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd5c905bcd8c3348ad1b35d7231ee2b1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd5c905bcd8c3348ad1b35d7231ee2b1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd5c905bcd8c3348ad1b35d7231ee2b1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd5c905bcd8c3348ad1b35d7231ee2b1-Supplemental.pdf
The current paper studies the problem of agnostic $Q$-learning with function approximation in deterministic systems where the optimal $Q$-function is approximable by a function in the class $\mathcal{F}$ with approximation error $\delta \ge 0$. We propose a novel recursion-based algorithm and show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O(\dim_E)$ trajectories, where $\rho$ is the gap between the optimal $Q$-value of the best actions and that of the second-best actions and $\dim_E$ is the Eluder dimension of $\mathcal{F}$. Our result has two implications: (1) In conjunction with the lower bound in [Du et al., 2020], our upper bound suggests that the condition $\delta = \widetilde{\Theta}\left(\rho/\sqrt{\dim_E}\right)$ is necessary and sufficient for algorithms with polynomial sample complexity. (2) In conjunction with the obvious lower bound in the tabular case, our upper bound suggests that the sample complexity $\widetilde{\Theta}\left(\dim_E\right)$ is tight in the agnostic setting. Therefore, we help address the open problem on agnostic $Q$-learning proposed in [Wen and Van Roy, 2013]. We further extend our algorithm to the stochastic reward setting and obtain similar results.
Learning to Adapt to Evolving Domains
https://papers.nips.cc/paper_files/paper/2020/hash/fd69dbe29f156a7ef876a40a94f65599-Abstract.html
Hong Liu, Mingsheng Long, Jianmin Wang, Yu Wang
https://papers.nips.cc/paper_files/paper/2020/hash/fd69dbe29f156a7ef876a40a94f65599-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd69dbe29f156a7ef876a40a94f65599-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11598-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd69dbe29f156a7ef876a40a94f65599-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd69dbe29f156a7ef876a40a94f65599-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd69dbe29f156a7ef876a40a94f65599-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd69dbe29f156a7ef876a40a94f65599-Supplemental.pdf
Domain adaptation aims at knowledge transfer from a labeled source domain to an unlabeled target domain. Current domain adaptation methods have made substantial advances in adapting discrete domains. However, this can be unrealistic in real-world applications, where target data usually arrive online and continually evolve in small batches, posing challenges to the classic domain adaptation paradigm: (1) Mainstream domain adaptation methods are tailored to stationary target domains, and can fail in non-stationary environments. (2) Since the target data arrive online, the agent should also maintain competence on previous target domains, i.e., to adapt without forgetting. To tackle these challenges, we propose a meta-adaptation framework which enables the learner to adapt to continually evolving target domains without catastrophic forgetting. Our framework comprises two components: a meta-objective of learning representations to adapt to evolving domains, enabling meta-learning for unsupervised domain adaptation; and a meta-adapter for learning to adapt without forgetting, preserving knowledge from previous target data. Experiments validate the effectiveness of our method on evolving target domains.
Synthesizing Tasks for Block-based Programming
https://papers.nips.cc/paper_files/paper/2020/hash/fd9dd764a6f1d73f4340d570804eacc4-Abstract.html
Umair Ahmed, Maria Christakis, Aleksandr Efremov, Nigel Fernandez, Ahana Ghosh, Abhik Roychoudhury, Adish Singla
https://papers.nips.cc/paper_files/paper/2020/hash/fd9dd764a6f1d73f4340d570804eacc4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fd9dd764a6f1d73f4340d570804eacc4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11599-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fd9dd764a6f1d73f4340d570804eacc4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fd9dd764a6f1d73f4340d570804eacc4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fd9dd764a6f1d73f4340d570804eacc4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fd9dd764a6f1d73f4340d570804eacc4-Supplemental.pdf
Block-based visual programming environments play a critical role in introducing computing concepts to K-12 students. One of the key pedagogical challenges in these environments is in designing new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. In this paper, we formalize the problem of synthesizing visual programming tasks. In particular, given a reference visual task $T^{in}$ and its solution code $C^{in}$, we propose a novel methodology to automatically generate a set $\{(T^{out}, C^{out})\}$ of new tasks along with solution codes such that tasks $T^{in}$ and $T^{out}$ are conceptually similar but visually dissimilar. Our methodology is based on the realization that the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, directly mutating reference task $T^{in}$ to generate new tasks is futile. Our task synthesis algorithm operates by first mutating code $C^{in}$ to obtain a set of codes $\{C^{out}\}$. Then, the algorithm performs symbolic execution over a code $C^{out}$ to obtain a visual task $T^{out}$; this step uses the Monte Carlo Tree Search (MCTS) procedure to guide the search in the symbolic tree. We demonstrate the effectiveness of our algorithm through an extensive empirical evaluation and user study on reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com.
Scalable Belief Propagation via Relaxed Scheduling
https://papers.nips.cc/paper_files/paper/2020/hash/fdb2c3bab9d0701c4a050a4d8d782c7f-Abstract.html
Vitalii Aksenov, Dan Alistarh, Janne H. Korhonen
https://papers.nips.cc/paper_files/paper/2020/hash/fdb2c3bab9d0701c4a050a4d8d782c7f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdb2c3bab9d0701c4a050a4d8d782c7f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11600-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdb2c3bab9d0701c4a050a4d8d782c7f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdb2c3bab9d0701c4a050a4d8d782c7f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdb2c3bab9d0701c4a050a4d8d782c7f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdb2c3bab9d0701c4a050a4d8d782c7f-Supplemental.pdf
The ability to leverage large-scale hardware parallelism has been one of the key enablers of the accelerated recent progress in machine learning. Consequently, there has been considerable effort invested into developing efficient parallel variants of classic machine learning algorithms. However, despite the wealth of knowledge on parallelization, some classic machine learning algorithms often prove hard to parallelize efficiently while maintaining convergence. In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, in particular on the fundamental belief propagation algorithm. We address the challenge of efficiently parallelizing this classic paradigm by showing how to leverage scalable relaxed schedulers, which reduce parallelization overheads, in this context. We investigate the overheads of relaxation analytically, and present an extensive empirical study, showing that our approach outperforms previous parallel belief propagation implementations both in terms of scalability and in terms of wall-clock convergence time, on a range of practical applications.
Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/fdbe012e2e11314b96402b32c0df26b7-Abstract.html
Lemeng Wu, Bo Liu, Peter Stone, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/fdbe012e2e11314b96402b32c0df26b7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdbe012e2e11314b96402b32c0df26b7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11601-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdbe012e2e11314b96402b32c0df26b7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdbe012e2e11314b96402b32c0df26b7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdbe012e2e11314b96402b32c0df26b7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdbe012e2e11314b96402b32c0df26b7-Supplemental.zip
We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures. Our method works in a steepest descent fashion, which iteratively finds the best network within a functional neighborhood of the original network that includes a diverse set of candidate network structures. By using Taylor approximation, the optimal network structure in the neighborhood can be found with a greedy selection procedure. We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures that avoid catastrophic forgetting in continual learning. Empirically, firefly descent achieves promising results on both neural architecture search and continual learning. In particular, on a challenging continual image classification task, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret
https://papers.nips.cc/paper_files/paper/2020/hash/fdc42b6b0ee16a2f866281508ef56730-Abstract.html
Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie
https://papers.nips.cc/paper_files/paper/2020/hash/fdc42b6b0ee16a2f866281508ef56730-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdc42b6b0ee16a2f866281508ef56730-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11602-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdc42b6b0ee16a2f866281508ef56730-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdc42b6b0ee16a2f866281508ef56730-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdc42b6b0ee16a2f866281508ef56730-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdc42b6b0ee16a2f866281508ef56730-Supplemental.pdf
We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{3} S^{2} A T}\big)$ regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{4} S A T}\big)$ regret, where $\lambda(u) = (e^{3u}-1)/u$ for $u>0$. In the above, $\beta$ is the risk parameter of the exponential utility function, $S$ the number of states, $A$ the number of actions, $T$ the total number of timesteps, and $H$ the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on $|\beta|$ and $H$ is unavoidable for any algorithm with an $\tilde{O}(\sqrt{T})$ regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in $|\beta|$ and $H$, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
Learning to Decode: Reinforcement Learning for Decoding of Sparse Graph-Based Channel Codes
https://papers.nips.cc/paper_files/paper/2020/hash/fdd5b16fc8134339089ef25b3cf0e588-Abstract.html
Salman Habib, Allison Beemer, Joerg Kliewer
https://papers.nips.cc/paper_files/paper/2020/hash/fdd5b16fc8134339089ef25b3cf0e588-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdd5b16fc8134339089ef25b3cf0e588-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11603-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdd5b16fc8134339089ef25b3cf0e588-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdd5b16fc8134339089ef25b3cf0e588-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdd5b16fc8134339089ef25b3cf0e588-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdd5b16fc8134339089ef25b3cf0e588-Supplemental.pdf
We show in this work that reinforcement learning can be successfully applied to decoding short to moderate length sparse graph-based channel codes. Specifically, we focus on low-density parity check (LDPC) codes, which for example have been standardized in the context of 5G cellular communication systems due to their excellent error correcting performance. These codes are typically decoded via belief propagation iterative decoding on the corresponding bipartite (Tanner) graph of the code via flooding, i.e., all check and variable nodes in the Tanner graph are updated at once. In contrast, in this paper we utilize a sequential update policy which selects the optimum check node (CN) scheduling in order to improve decoding performance. In particular, we model the CN update process as a multi-armed bandit process with dependent arms and employ a Q-learning scheme for optimizing the CN scheduling policy. In order to reduce the learning complexity, we propose a novel graph-induced CN clustering approach to partition the state space in such a way that dependencies between clusters are minimized. Our results show that compared to other decoding approaches from the literature, the proposed reinforcement learning scheme not only significantly improves the decoding performance, but also reduces the decoding complexity dramatically once the scheduling policy is learned.
Faster DBSCAN via subsampled similarity queries
https://papers.nips.cc/paper_files/paper/2020/hash/fdf1bc5669e8ff5ba45d02fded729feb-Abstract.html
Heinrich Jiang, Jennifer Jang, Jakub Lacki
https://papers.nips.cc/paper_files/paper/2020/hash/fdf1bc5669e8ff5ba45d02fded729feb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdf1bc5669e8ff5ba45d02fded729feb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11604-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdf1bc5669e8ff5ba45d02fded729feb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdf1bc5669e8ff5ba45d02fded729feb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdf1bc5669e8ff5ba45d02fded729feb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdf1bc5669e8ff5ba45d02fded729feb-Supplemental.pdf
DBSCAN is a popular density-based clustering algorithm. It computes the $\epsilon$-neighborhood graph of a dataset and uses the connected components of the high-degree nodes to decide the clusters. However, the full neighborhood graph may be too costly to compute with a worst-case complexity of $O(n^2)$. In this paper, we propose a simple variant called SNG-DBSCAN, which clusters based on a subsampled $\epsilon$-neighborhood graph, only requires access to similarity queries for pairs of points and in particular avoids any complex data structures which need the embeddings of the data points themselves. The runtime of the procedure is $O(sn^2)$, where $s$ is the sampling rate. We show under some natural theoretical assumptions that $s \approx \log n/n$ is sufficient for statistical cluster recovery guarantees leading to an $O(n\log n)$ complexity. We provide an extensive experimental analysis showing that on large datasets, one can subsample as little as $0.1\%$ of the neighborhood graph, leading to as much as over 200x speedup and 250x reduction in RAM consumption compared to scikit-learn's implementation of DBSCAN, while still maintaining competitive clustering performance.
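A minimal sketch of the subsampled-neighborhood-graph idea described above, with illustrative parameter names (`eps`, `s`, `min_deg`) and thresholds that are assumptions rather than the paper's exact choices:

```python
# Sketch of SNG-DBSCAN-style clustering: sample a fraction of point pairs, keep
# edges with distance <= eps, call points with enough sampled neighbors "core",
# and return connected components of the core points as clusters.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def sng_dbscan(X, eps, s=0.05, min_deg=5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = int(s * n * (n - 1) / 2)                 # number of sampled point pairs
    i = rng.integers(0, n, size=m)
    j = rng.integers(0, n, size=m)
    keep = i != j
    i, j = i[keep], j[keep]
    dist = np.linalg.norm(X[i] - X[j], axis=1)   # the only geometry access: similarity queries
    near = dist <= eps
    i, j = i[near], j[near]
    adj = coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n)).tocsr()
    adj = adj + adj.T
    deg = np.asarray(adj.sum(axis=1)).ravel()
    core_idx = np.where(deg >= min_deg)[0]       # "core" points: enough sampled neighbors
    sub = adj[core_idx][:, core_idx]
    _, comp = connected_components(sub, directed=False)
    labels = -np.ones(n, dtype=int)              # -1 marks non-core points (treated as noise)
    labels[core_idx] = comp
    return labels

# toy example: two well-separated blobs should come out as two clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (500, 2)), rng.normal(5, 0.3, (500, 2))])
labels = sng_dbscan(X, eps=0.5, s=0.05)
print(int(labels.max()) + 1, "clusters")
```

Only the pairwise distance computation touches the data, matching the similarity-query-only access pattern the abstract emphasizes.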
De-Anonymizing Text by Fingerprinting Language Generation
https://papers.nips.cc/paper_files/paper/2020/hash/fdf2aade29d18910051a6c76ae661860-Abstract.html
Zhen Sun, Roei Schuster, Vitaly Shmatikov
https://papers.nips.cc/paper_files/paper/2020/hash/fdf2aade29d18910051a6c76ae661860-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdf2aade29d18910051a6c76ae661860-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11605-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdf2aade29d18910051a6c76ae661860-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdf2aade29d18910051a6c76ae661860-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdf2aade29d18910051a6c76ae661860-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdf2aade29d18910051a6c76ae661860-Supplemental.pdf
Components of machine learning systems are not (yet) perceived as security hotspots. Secure coding practices, such as ensuring that no execution paths depend on confidential inputs, have not yet been adopted by ML developers. We initiate the study of code security of ML systems by investigating how nucleus sampling---a popular approach for generating text, used for applications such as auto-completion---unwittingly leaks texts typed by users. Our main result is that the series of nucleus sizes for many natural English word sequences is a unique fingerprint. We then show how an attacker can infer typed text by measuring these fingerprints via a suitable side channel (e.g., cache access times), explain how this attack could help de-anonymize anonymous texts, and discuss defenses.
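To make the fingerprint concrete: under nucleus (top-$p$) sampling, the nucleus size at each step is the number of most-probable tokens whose cumulative probability first reaches $p$. A small sketch of computing that series, using a toy stand-in for the model's next-token distributions (the paper measures these sizes for a real language model via a side channel):

```python
# Compute the series of nucleus sizes for a sequence of next-token distributions.
import numpy as np

def nucleus_size(probs, p=0.9):
    """Number of top tokens needed to cover probability mass p."""
    order = np.argsort(probs)[::-1]
    csum = np.cumsum(probs[order])
    return int(np.searchsorted(csum, p) + 1)

def fingerprint(next_token_dists, p=0.9):
    """Series of nucleus sizes along a sequence of next-token distributions."""
    return [nucleus_size(d, p) for d in next_token_dists]

# toy example: three steps with increasingly flat distributions
rng = np.random.default_rng(0)
dists = [rng.dirichlet(np.full(50, a)) for a in (0.1, 0.5, 2.0)]
print(fingerprint(dists))   # flatter distributions -> larger nucleus sizes
```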
Multiparameter Persistence Image for Topological Machine Learning
https://papers.nips.cc/paper_files/paper/2020/hash/fdff71fcab656abfbefaabecab1a7f6d-Abstract.html
Mathieu Carrière, Andrew Blumberg
https://papers.nips.cc/paper_files/paper/2020/hash/fdff71fcab656abfbefaabecab1a7f6d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fdff71fcab656abfbefaabecab1a7f6d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11606-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fdff71fcab656abfbefaabecab1a7f6d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fdff71fcab656abfbefaabecab1a7f6d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fdff71fcab656abfbefaabecab1a7f6d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fdff71fcab656abfbefaabecab1a7f6d-Supplemental.pdf
We introduce a new descriptor for multiparameter persistence, which we call the Multiparameter Persistence Image, that is suitable for machine learning and statistical frameworks, is robust to perturbations in the data, has finer resolution than existing descriptors based on slicing, and can be efficiently computed on data sets of realistic size. Moreover, we demonstrate its efficacy by comparing its performance to other multiparameter descriptors on several classification tasks.
PLANS: Neuro-Symbolic Program Learning from Videos
https://papers.nips.cc/paper_files/paper/2020/hash/fe131d7f5a6b38b23cc967316c13dae2-Abstract.html
Raphaël Dang-Nhu
https://papers.nips.cc/paper_files/paper/2020/hash/fe131d7f5a6b38b23cc967316c13dae2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe131d7f5a6b38b23cc967316c13dae2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11607-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe131d7f5a6b38b23cc967316c13dae2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe131d7f5a6b38b23cc967316c13dae2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe131d7f5a6b38b23cc967316c13dae2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe131d7f5a6b38b23cc967316c13dae2-Supplemental.pdf
Recent years have seen the rise of statistical program learning based on neural models as an alternative to traditional rule-based systems for programming by example. Rule-based approaches offer correctness guarantees in an unsupervised way as they inherently capture logical rules, while neural models are more realistically scalable to raw, high-dimensional input, and provide resistance to noisy I/O specifications. We introduce PLANS (Program LeArning from Neurally inferred Specifications), a hybrid model for program synthesis from visual observations that gets the best of both worlds, relying on (i) a neural architecture trained to extract abstract, high-level information from each raw individual input, and (ii) a rule-based system using the extracted information as I/O specifications to synthesize a program capturing the different observations. In order to address the key challenge of making PLANS resistant to noise in the network's output, we introduce a dynamic filtering algorithm for I/O specifications based on selective classification techniques. We obtain state-of-the-art performance at program synthesis from diverse demonstration videos in the Karel and ViZDoom environments, while requiring no ground-truth program for training.
Matrix Inference and Estimation in Multi-Layer Models
https://papers.nips.cc/paper_files/paper/2020/hash/fe2b421b8b5f0e7c355ace66a9fe0206-Abstract.html
Parthe Pandit, Mojtaba Sahraee Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
https://papers.nips.cc/paper_files/paper/2020/hash/fe2b421b8b5f0e7c355ace66a9fe0206-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe2b421b8b5f0e7c355ace66a9fe0206-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11608-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe2b421b8b5f0e7c355ace66a9fe0206-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe2b421b8b5f0e7c355ace66a9fe0206-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe2b421b8b5f0e7c355ace66a9fe0206-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe2b421b8b5f0e7c355ace66a9fe0206-Supplemental.pdf
We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network from an observation of the output. The hidden variables in each layer are represented as matrices with statistical interactions along both rows and columns. This problem applies to matrix imputation, signal recovery via deep generative prior models, multi-task and mixed regression, and learning certain classes of two-layer neural networks. We extend a recently developed algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), to this matrix-valued inference problem. It is shown that the performance of the proposed Multi-Layer Matrix VAMP (ML-Mat-VAMP) algorithm can be exactly predicted in a certain random large-system limit, where the dimensions $N\times d$ of the unknown quantities grow as $N\rightarrow\infty$ with $d$ fixed. In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and the number of training samples grow to infinity but the number of hidden nodes stays fixed. The analysis enables a precise prediction of the parameter and test error of the learning procedure.
MeshSDF: Differentiable Iso-Surface Extraction
https://papers.nips.cc/paper_files/paper/2020/hash/fe40fb944ee700392ed51bfe84dd4e3d-Abstract.html
Edoardo Remelli, Artem Lukoianov, Stephan Richter, Benoit Guillard, Timur Bagautdinov, Pierre Baque, Pascal Fua
https://papers.nips.cc/paper_files/paper/2020/hash/fe40fb944ee700392ed51bfe84dd4e3d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe40fb944ee700392ed51bfe84dd4e3d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11609-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe40fb944ee700392ed51bfe84dd4e3d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe40fb944ee700392ed51bfe84dd4e3d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe40fb944ee700392ed51bfe84dd4e3d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe40fb944ee700392ed51bfe84dd4e3d-Supplemental.pdf
We use two different applications to validate our theoretical insight: Single-View Reconstruction via Differentiable Rendering and Physically-Driven Shape Optimization. In both cases our differentiable parameterization gives us an edge over state-of-the-art algorithms.
Variational Interaction Information Maximization for Cross-domain Disentanglement
https://papers.nips.cc/paper_files/paper/2020/hash/fe663a72b27bdc613873fbbb512f6f67-Abstract.html
HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
https://papers.nips.cc/paper_files/paper/2020/hash/fe663a72b27bdc613873fbbb512f6f67-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe663a72b27bdc613873fbbb512f6f67-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11610-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe663a72b27bdc613873fbbb512f6f67-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe663a72b27bdc613873fbbb512f6f67-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe663a72b27bdc613873fbbb512f6f67-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe663a72b27bdc613873fbbb512f6f67-Supplemental.pdf
Cross-domain disentanglement is the problem of learning representations partitioned into domain-invariant and domain-specific representations, which is a key to successful domain transfer or measuring semantic distance between two domains. Grounded in information theory, we cast the simultaneous learning of domain-invariant and domain-specific representations as a joint objective of multiple information constraints, which does not require adversarial training or gradient reversal layers. We derive a tractable bound of the objective and propose a generative model named Interaction Information Auto-Encoder (IIAE). Our approach reveals insights into the desirable representation for cross-domain disentanglement and its connection to the Variational Auto-Encoder (VAE). We demonstrate the validity of our model in the image-to-image translation and cross-domain retrieval tasks. We further show that our model achieves state-of-the-art performance in the zero-shot sketch-based image retrieval task, even without external knowledge.
Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
https://papers.nips.cc/paper_files/paper/2020/hash/fe73f687e5bc5280214e0486b273a5f9-Abstract.html
Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin Yang
https://papers.nips.cc/paper_files/paper/2020/hash/fe73f687e5bc5280214e0486b273a5f9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe73f687e5bc5280214e0486b273a5f9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11611-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe73f687e5bc5280214e0486b273a5f9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe73f687e5bc5280214e0486b273a5f9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe73f687e5bc5280214e0486b273a5f9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe73f687e5bc5280214e0486b273a5f9-Supplemental.zip
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems (Tang et al., 2017; Bellemare et al., 2016), we investigate when this paradigm is provably efficient. We study episodic Markov decision processes with rich observations generated from a small number of latent states. We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm. Theoretically, we prove that as long as the unsupervised learning algorithm enjoys a polynomial sample complexity guarantee, we can find a near-optimal policy with sample complexity polynomial in the number of latent states, which is significantly smaller than the number of observations. Empirically, we instantiate our framework on a class of hard exploration problems to demonstrate the practicality of our theory.
Faithful Embeddings for Knowledge Base Queries
https://papers.nips.cc/paper_files/paper/2020/hash/fe74074593f21197b7b7be3c08678616-Abstract.html
Haitian Sun, Andrew Arnold, Tania Bedrax Weiss, Fernando Pereira, William W. Cohen
https://papers.nips.cc/paper_files/paper/2020/hash/fe74074593f21197b7b7be3c08678616-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe74074593f21197b7b7be3c08678616-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11612-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe74074593f21197b7b7be3c08678616-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe74074593f21197b7b7be3c08678616-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe74074593f21197b7b7be3c08678616-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe74074593f21197b7b7be3c08678616-Supplemental.pdf
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer. However, in practice KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers. \emph{Query embedding} (QE) techniques have recently been proposed, in which KB entities and KB queries are represented jointly in an embedding space, supporting relaxation and generalization in KB inference. However, experiments in this paper show that QE systems may disagree with deductive reasoning on answers that do not require generalization or relaxation. We address this problem with a novel QE method that is more faithful to deductive reasoning, and show that this leads to better performance on complex queries to incomplete KBs. Finally, we show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state of the art.
Wasserstein Distances for Stereo Disparity Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/fe7ecc4de28b2c83c016b5c6c2acd826-Abstract.html
Divyansh Garg, Yan Wang, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger, Wei-Lun Chao
https://papers.nips.cc/paper_files/paper/2020/hash/fe7ecc4de28b2c83c016b5c6c2acd826-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe7ecc4de28b2c83c016b5c6c2acd826-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11613-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe7ecc4de28b2c83c016b5c6c2acd826-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe7ecc4de28b2c83c016b5c6c2acd826-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe7ecc4de28b2c83c016b5c6c2acd826-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe7ecc4de28b2c83c016b5c6c2acd826-Supplemental.pdf
Existing approaches to depth or disparity estimation output a distribution over a set of pre-defined discrete values. This leads to inaccurate results when the true depth or disparity does not match any of these values. The fact that this distribution is usually learned indirectly through a regression loss causes further problems in ambiguous regions around object boundaries. We address these issues using a new neural network architecture that is capable of outputting arbitrary depth values, and a new loss function that is derived from the Wasserstein distance between the true and the predicted distributions. We validate our approach on a variety of tasks, including stereo disparity and depth estimation, and downstream 3D object detection. Our approach drastically reduces the error in ambiguous regions, especially around object boundaries that greatly affect the localization of objects in 3D, achieving state-of-the-art results in 3D object detection for autonomous driving.
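As an aside on the loss described above: when the ground truth is a single disparity value, the 1-Wasserstein distance between a predicted categorical distribution over candidate disparities and the point mass at that value reduces to the expected absolute error under the prediction. The NumPy sketch below illustrates this closed form; it is an illustration of the general identity, not necessarily the exact loss used in the paper.

```python
import numpy as np

def w1_to_ground_truth(probs, disparity_values, true_disparity):
    """1-Wasserstein distance between a predicted categorical distribution over
    candidate disparities and a point mass at the true disparity:
    W1(P, delta_{d*}) = E_{d ~ P} |d - d*|."""
    probs = np.asarray(probs, dtype=float)
    disparity_values = np.asarray(disparity_values, dtype=float)
    return float(np.sum(probs * np.abs(disparity_values - true_disparity)))
```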
Multi-agent Trajectory Prediction with Fuzzy Query Attention
https://papers.nips.cc/paper_files/paper/2020/hash/fe87435d12ef7642af67d9bc82a8b3cd-Abstract.html
Nitin Kamra, Hao Zhu, Dweep Kumarbhai Trivedi, Ming Zhang, Yan Liu
https://papers.nips.cc/paper_files/paper/2020/hash/fe87435d12ef7642af67d9bc82a8b3cd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fe87435d12ef7642af67d9bc82a8b3cd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11614-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fe87435d12ef7642af67d9bc82a8b3cd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fe87435d12ef7642af67d9bc82a8b3cd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fe87435d12ef7642af67d9bc82a8b3cd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fe87435d12ef7642af67d9bc82a8b3cd-Supplemental.pdf
Trajectory prediction for scenes with multiple agents and entities is a challenging problem in numerous domains such as traffic prediction, pedestrian tracking and path planning. We present a general architecture to address this challenge which models the crucial inductive biases of motion, namely, inertia, relative motion, intents and interactions. Specifically, we propose a relational model to flexibly model interactions between agents in diverse environments. Since it is well-known that human decision making is fuzzy by nature, at the core of our model lies a novel attention mechanism which models interactions by making continuous-valued (fuzzy) decisions and learning the corresponding responses. Our architecture demonstrates significant performance gains over existing state-of-the-art predictive models in diverse domains such as human crowd trajectories, US freeway traffic, NBA sports data and physics datasets. We also present ablations and augmentations to understand the decision-making process and the source of gains in our model.
Multilabel Classification by Hierarchical Partitioning and Data-dependent Grouping
https://papers.nips.cc/paper_files/paper/2020/hash/fea16e782bc1b1240e4b3c797012e289-Abstract.html
Shashanka Ubaru, Sanjeeb Dash, Arya Mazumdar, Oktay Gunluk
https://papers.nips.cc/paper_files/paper/2020/hash/fea16e782bc1b1240e4b3c797012e289-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fea16e782bc1b1240e4b3c797012e289-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11615-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fea16e782bc1b1240e4b3c797012e289-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fea16e782bc1b1240e4b3c797012e289-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fea16e782bc1b1240e4b3c797012e289-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fea16e782bc1b1240e4b3c797012e289-Supplemental.pdf
In modern multilabel classification problems, each data instance belongs to a small number of classes among a large set of classes. In other words, these problems involve learning very sparse binary label vectors. Moreover, in large-scale problems, the labels typically have a certain (unknown) hierarchy. In this paper we exploit the sparsity of label vectors and the hierarchical structure to embed them in a low-dimensional space using label groupings. Consequently, we solve the classification problem in a much lower-dimensional space and then obtain labels in the original space using an appropriately defined lifting. Our method builds on the work of (Ubaru & Mazumdar, 2017), where the idea of group testing was also explored for multilabel classification. We first present a novel data-dependent grouping approach, where we use a group construction based on a low-rank Nonnegative Matrix Factorization (NMF) of the label matrix of training instances. The construction also allows us, using recent results, to develop a fast prediction algorithm that has a \emph{logarithmic runtime in the number of labels}. We then present a hierarchical partitioning approach that exploits the label hierarchy in large-scale problems to divide the large label space into smaller sub-problems, which can then be solved independently via the grouping approach. Numerical results on many benchmark datasets illustrate that, compared to other popular methods, our proposed methods achieve comparable accuracy with significantly lower computational costs.
An Analysis of SVD for Deep Rotation Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/fec3392b0dc073244d38eba1feb8e6b7-Abstract.html
Jake Levinson, Carlos Esteves, Kefan Chen, Noah Snavely, Angjoo Kanazawa, Afshin Rostamizadeh, Ameesh Makadia
https://papers.nips.cc/paper_files/paper/2020/hash/fec3392b0dc073244d38eba1feb8e6b7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fec3392b0dc073244d38eba1feb8e6b7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11616-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fec3392b0dc073244d38eba1feb8e6b7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fec3392b0dc073244d38eba1feb8e6b7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fec3392b0dc073244d38eba1feb8e6b7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fec3392b0dc073244d38eba1feb8e6b7-Supplemental.pdf
Symmetric orthogonalization via SVD, and closely related procedures, are well-known techniques for projecting matrices onto O(n) or SO(n). These tools have long been used for applications in computer vision, for example optimal 3D alignment problems solved by orthogonal Procrustes, rotation averaging, or Essential matrix decomposition. Despite its utility in different settings, SVD orthogonalization as a procedure for producing rotation matrices is typically overlooked in deep learning models, where the preferences tend toward classic representations like unit quaternions, Euler angles, and axis-angle, or more recently introduced methods. Despite the importance of 3D rotations in computer vision and robotics, a single universally effective representation is still missing. Here, we explore the viability of SVD orthogonalization for 3D rotations in neural networks. We present a theoretical analysis of SVD as used for projection onto the rotation group. Our extensive quantitative analysis shows that simply replacing existing representations with the SVD orthogonalization procedure obtains state-of-the-art performance in many deep learning applications covering both supervised and unsupervised training.
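For reference, the projection onto SO(3) discussed above has a standard closed form: given a 3x3 network output M with SVD M = U S V^T, the closest rotation in Frobenius norm is U diag(1, 1, det(UV^T)) V^T. A minimal NumPy sketch of this forward computation follows; in the paper the projection is used as a differentiable layer, which is not reproduced here.

```python
import numpy as np

def svd_orthogonalize(m):
    """Project a 3x3 matrix onto SO(3): with m = U S V^T, the closest rotation in
    Frobenius norm is U diag(1, 1, det(U V^T)) V^T (special orthogonal Procrustes)."""
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))  # flip the last singular direction if needed
    return u @ np.diag([1.0, 1.0, d]) @ vt
```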
Can the Brain Do Backpropagation? --- Exact Implementation of Backpropagation in Predictive Coding Networks
https://papers.nips.cc/paper_files/paper/2020/hash/fec87a37cdeec1c6ecf8181c0aa2d3bf-Abstract.html
Yuhang Song, Thomas Lukasiewicz, Zhenghua Xu, Rafal Bogacz
https://papers.nips.cc/paper_files/paper/2020/hash/fec87a37cdeec1c6ecf8181c0aa2d3bf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fec87a37cdeec1c6ecf8181c0aa2d3bf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11617-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fec87a37cdeec1c6ecf8181c0aa2d3bf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fec87a37cdeec1c6ecf8181c0aa2d3bf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fec87a37cdeec1c6ecf8181c0aa2d3bf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fec87a37cdeec1c6ecf8181c0aa2d3bf-Supplemental.pdf
Backpropagation (BP) has been the most successful algorithm used to train artificial neural networks. However, there are several gaps between BP and learning in biologically plausible neuronal networks of the brain (learning in the brain, or simply BL, for short), in particular: (1) it has been unclear to date whether BP can be implemented exactly via BL, (2) there is a lack of local plasticity in BP, i.e., weight updates require information that is not locally available, while BL utilizes only locally available information, and (3)~there is a lack of autonomy in BP, i.e., some external control over the neural network is required (e.g., switching between prediction and learning stages requires changes to dynamics and synaptic plasticity rules), while BL works fully autonomously. Bridging such gaps, i.e., understanding how BP can be approximated by BL, has been of major interest in both neuroscience and machine learning. Despite tremendous efforts, however, no previous model has bridged the gaps to the degree of demonstrating an equivalence to BP; instead, only approximations to BP have been shown. Here, we present for the first time a framework within BL that bridges the above crucial gaps. We propose a BL model that (1) produces \emph{exactly the same} updates of the neural weights as~BP, while (2)~employing local plasticity, i.e., all neurons perform only local computations, done simultaneously. We then modify it to an alternative BL model that (3) also works fully autonomously. Overall, our work provides important evidence for the debate on the long-disputed question of whether the brain can perform~BP.
Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
https://papers.nips.cc/paper_files/paper/2020/hash/fedc604da8b0f9af74b6cfc0fab2163c-Abstract.html
Kristopher Jensen, Ta-Chu Kao, Marco Tripodi, Guillaume Hennequin
https://papers.nips.cc/paper_files/paper/2020/hash/fedc604da8b0f9af74b6cfc0fab2163c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fedc604da8b0f9af74b6cfc0fab2163c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11618-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fedc604da8b0f9af74b6cfc0fab2163c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fedc604da8b0f9af74b6cfc0fab2163c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fedc604da8b0f9af74b6cfc0fab2163c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fedc604da8b0f9af74b6cfc0fab2163c-Supplemental.pdf
A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables such as head direction, spatial location, upcoming movements, or mental spatial transformations. Often, these latent variables are internal constructs not directly accessible to the experimenter. Here, we propose a new probabilistic latent variable model to simultaneously identify the latent state and the way each neuron contributes to its representation in an unsupervised way. In contrast to previous models which assume Euclidean latent spaces, we embrace the fact that latent states often belong to symmetric manifolds such as spheres, tori, or rotation groups of various dimensions. We therefore propose the manifold Gaussian process latent variable model (mGPLVM), where neural responses arise from (i) a shared latent variable living on a specific manifold, and (ii) a set of non-parametric tuning curves determining how each neuron contributes to the representation. Cross-validated comparisons of models with different topologies can be used to distinguish between candidate manifolds, and variational inference enables quantification of uncertainty. We demonstrate the validity of the approach on several synthetic datasets, as well as on calcium recordings from the ellipsoid body of Drosophila melanogaster and extracellular recordings from the mouse anterodorsal thalamic nucleus. These circuits are both known to encode head direction, and mGPLVM correctly recovers the ring topology expected from neural populations representing a single angular variable.
Distributed Distillation for On-Device Learning
https://papers.nips.cc/paper_files/paper/2020/hash/fef6f971605336724b5e6c0c12dc2534-Abstract.html
Ilai Bistritz, Ariana Mann, Nicholas Bambos
https://papers.nips.cc/paper_files/paper/2020/hash/fef6f971605336724b5e6c0c12dc2534-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/fef6f971605336724b5e6c0c12dc2534-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11619-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/fef6f971605336724b5e6c0c12dc2534-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/fef6f971605336724b5e6c0c12dc2534-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/fef6f971605336724b5e6c0c12dc2534-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/fef6f971605336724b5e6c0c12dc2534-Supplemental.pdf
On-device learning promises collaborative training of machine learning models across edge devices without the sharing of user data. In state-of-the-art on-device learning algorithms, devices communicate their model weights over a decentralized communication network. Transmitting model weights incurs huge communication overhead and means that only devices with identical model architectures can be included. To overcome these limitations, we introduce a distributed distillation algorithm in which devices communicate and learn from soft-decision (softmax) outputs, which are inherently architecture-agnostic and scale only with the number of classes. The communicated soft-decisions are each model's outputs on a public, unlabeled reference dataset, which serves as a common vocabulary between devices. We prove that our algorithm converges with probability 1 to a stationary point at which all devices in the communication network distill the entire network's knowledge on the reference data, regardless of their local connections. Our analysis assumes smooth loss functions, which can be non-convex. Simulations support our theoretical findings and show that even a naive implementation of our algorithm significantly reduces the communication overhead while achieving overall performance comparable to the state of the art, depending on the regime. By requiring little communication overhead and allowing for cross-architecture training, we remove two main obstacles to scaling on-device learning.
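To make the communication pattern described above concrete, here is a schematic NumPy sketch of one consensus step in which each device mixes its own softmax outputs on the public reference set with the average of its neighbors' outputs. The `adjacency` matrix and the mixing `weight` are illustrative assumptions; this is not the paper's exact update rule.

```python
import numpy as np

def consensus_soft_decisions(soft_decisions, adjacency, weight=0.5):
    """One schematic consensus step over a communication graph.
    soft_decisions: (num_devices, num_reference_points, num_classes) softmax outputs
    adjacency:      (num_devices, num_devices) 0/1 connectivity matrix (assumed)."""
    num_devices = soft_decisions.shape[0]
    mixed = np.empty_like(soft_decisions)
    for i in range(num_devices):
        neighbors = np.nonzero(adjacency[i])[0]
        neighbor_avg = (soft_decisions[neighbors].mean(axis=0)
                        if len(neighbors) > 0 else soft_decisions[i])
        mixed[i] = (1.0 - weight) * soft_decisions[i] + weight * neighbor_avg
    return mixed
```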
COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/ff0abbcc0227c9124a804b084d161a2d-Abstract.html
Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox
https://papers.nips.cc/paper_files/paper/2020/hash/ff0abbcc0227c9124a804b084d161a2d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/ff0abbcc0227c9124a804b084d161a2d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11620-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/ff0abbcc0227c9124a804b084d161a2d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/ff0abbcc0227c9124a804b084d161a2d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/ff0abbcc0227c9124a804b084d161a2d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/ff0abbcc0227c9124a804b084d161a2d-Supplemental.zip
Many real-world video-text tasks involve different levels of granularity, such as frames and words, clips and sentences, or videos and paragraphs, each with distinct semantics. In this paper, we propose a Cooperative hierarchical Transformer (COOT) to leverage this hierarchy information and model the interactions between different levels of granularity and different modalities. The method consists of three major components: an attention-aware feature aggregation layer, which leverages the local temporal context (intra-level, e.g., within a clip), a contextual transformer to learn the interactions between low-level and high-level semantics (inter-level, e.g., clip-video, sentence-paragraph), and a cross-modal cycle-consistency loss to connect video and text. The resulting method compares favorably to the state of the art on several benchmarks while having few parameters.
Passport-aware Normalization for Deep Model Protection
https://papers.nips.cc/paper_files/paper/2020/hash/ff1418e8cc993fe8abcfe3ce2003e5c5-Abstract.html
Jie Zhang, Dongdong Chen, Jing Liao, Weiming Zhang, Gang Hua, Nenghai Yu
https://papers.nips.cc/paper_files/paper/2020/hash/ff1418e8cc993fe8abcfe3ce2003e5c5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/ff1418e8cc993fe8abcfe3ce2003e5c5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11621-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/ff1418e8cc993fe8abcfe3ce2003e5c5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/ff1418e8cc993fe8abcfe3ce2003e5c5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/ff1418e8cc993fe8abcfe3ce2003e5c5-Review.html
null
Despite tremendous success in many application scenarios, deep learning faces serious intellectual property (IP) infringement threats. Considering the cost of designing and training a good model, infringement significantly harms the interests of the original model owner. Recently, many impressive works have emerged for deep model IP protection. However, they are either vulnerable to ambiguity attacks, or require changes to the target network structure by replacing its original normalization layers, which causes significant performance drops. To this end, we propose a new passport-aware normalization formulation, which is generally applicable to most existing normalization layers and only needs to add another passport-aware branch for IP protection. This new branch is jointly trained with the target model but discarded in the inference stage. Therefore, it causes no structural change in the target model. Only when the model IP is suspected of being stolen is the private passport-aware branch added back for ownership verification. Through extensive experiments, we verify its effectiveness in both image and 3D point recognition models. It is demonstrated to be robust not only to common attack techniques like fine-tuning and model compression, but also to ambiguity attacks. By further combining it with trigger-set based methods, both black-box and white-box verification can be achieved for enhanced security of deep learning models deployed in real systems.
Sampling-Decomposable Generative Adversarial Recommender
https://papers.nips.cc/paper_files/paper/2020/hash/ff42b03a06a1bed4e936f0e04958e168-Abstract.html
Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
https://papers.nips.cc/paper_files/paper/2020/hash/ff42b03a06a1bed4e936f0e04958e168-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/ff42b03a06a1bed4e936f0e04958e168-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11622-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/ff42b03a06a1bed4e936f0e04958e168-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/ff42b03a06a1bed4e936f0e04958e168-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/ff42b03a06a1bed4e936f0e04958e168-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/ff42b03a06a1bed4e936f0e04958e168-Supplemental.pdf
Recommendation techniques are important approaches for alleviating information overload. Being often trained on implicit user feedback, many recommenders suffer from the sparsity challenge due to the lack of explicitly negative samples. GAN-style recommenders (e.g., IRGAN) address the challenge by learning a generator and a discriminator adversarially, such that the generator produces increasingly difficult samples for the discriminator to accelerate optimizing the discrimination objective. However, producing samples from the generator is very time-consuming, and our empirical study shows that the discriminator performs poorly in top-k item recommendation. To this end, a theoretical analysis is made for the GAN-style algorithms, showing that a generator of limited capacity diverges from the optimal generator. This may explain the limited performance of the discriminator. Based on these findings, we propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR). In this framework, the divergence between the generator and the optimum is compensated for by self-normalized importance sampling; the efficiency of sample generation is improved with a sampling-decomposable generator, such that each sample can be generated in O(1) with the Vose-Alias method. Interestingly, due to the decomposability of sampling, the generator can be optimized with closed-form solutions in an alternating manner, unlike the policy-gradient updates used in GAN-style algorithms. We extensively evaluate the proposed algorithm on five real-world recommendation datasets. The results show that SD-GAR outperforms IRGAN by 12.4% and the SOTA recommender by 10% on average. Moreover, discriminator training can be 20x faster on the dataset with more than 120K items.
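For background on the O(1) sampling mentioned above, the Vose-Alias method preprocesses a discrete distribution into probability and alias tables in O(K) time, after which each draw costs O(1). A minimal NumPy sketch follows; it shows only the generic alias method, not SD-GAR's decomposable generator.

```python
import numpy as np

def build_alias_table(probs):
    """Vose's alias method: O(K) construction of tables that support O(1) sampling."""
    k = len(probs)
    scaled = np.asarray(probs, dtype=float) * k
    prob_table = np.zeros(k)
    alias_table = np.zeros(k, dtype=int)
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob_table[s] = scaled[s]
        alias_table[s] = l
        scaled[l] = scaled[l] + scaled[s] - 1.0   # move leftover mass onto column s
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:                # remaining columns are full
        prob_table[leftover] = 1.0
    return prob_table, alias_table

def alias_sample(prob_table, alias_table, rng=np.random):
    """Draw one index in O(1): pick a column uniformly, then keep it or take its alias."""
    i = rng.randint(len(prob_table))
    return i if rng.random() < prob_table[i] else alias_table[i]
```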
Limits to Depth Efficiencies of Self-Attention
https://papers.nips.cc/paper_files/paper/2020/hash/ff4dfdf5904e920ce52b48c1cef97829-Abstract.html
Yoav Levine, Noam Wies, Or Sharir, Hofit Bata, Amnon Shashua
https://papers.nips.cc/paper_files/paper/2020/hash/ff4dfdf5904e920ce52b48c1cef97829-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11623-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-Supplemental.pdf
Self-attention architectures, which are rapidly pushing the frontier in natural language processing, demonstrate a surprising depth-inefficient behavior: Empirical signals indicate that increasing the internal representation (network width) is just as useful as increasing the number of self-attention layers (network depth). In this paper, we theoretically study the interplay between depth and width in self-attention. We shed light on the root of the above phenomenon, and establish two distinct parameter regimes of depth efficiency and inefficiency in self-attention. We invalidate the seemingly plausible hypothesis that widening is as effective as deepening for self-attention, and show that in fact stacking self-attention layers is so effective that it quickly saturates the capacity of the network width. Specifically, we pinpoint a ``depth threshold'' that is logarithmic in the network width: for networks of depth below the threshold, we establish a double-exponential depth-efficiency of the self-attention operation, while for depths above the threshold we show that depth-inefficiency kicks in. Our predictions accord with existing empirical ablations, and we further demonstrate the two depth-(in)efficiency regimes experimentally for common network depths of 6, 12, and 24. By identifying network width as a limiting factor, our analysis indicates that solutions for dramatically increasing the width can facilitate the next leap in self-attention expressivity.