Dataset Viewer
Auto-converted to Parquet
Columns: abs (string, 45–62 chars), Download PDF (string, 50–84 chars), OpenReview (string, 42 chars), title (string, 10–168 chars), url (string, 45–62 chars), authors (string, 9–704 chars), detail_url (string, 45–62 chars), tags (string, 1 class), abstract (string, 415–5.03k chars)
https://proceedings.mlr.press/v202/aamand23a.html
https://proceedings.mlr.press/v202/aamand23a/aamand23a.pdf
https://openreview.net/forum?id=BVomXLJQoH
Data Structures for Density Estimation
https://proceedings.mlr.press/v202/aamand23a.html
Anders Aamand, Alexandr Andoni, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal
https://proceedings.mlr.press/v202/aamand23a.html
ICML 2023
We study statistical/computational tradeoffs for the following density estimation problem: given $k$ distributions $v_1, \ldots, v_k$ over a discrete domain of size $n$, and sampling access to a distribution $p$, identify $v_i$ that is "close" to $p$. Our main result is the first data structure that, given a sublinear (in $n$) number of samples from $p$, identifies $v_i$ in time sublinear in $k$. We also give an improved version of the algorithm of Acharya et al. (2018) that reports $v_i$ in time linear in $k$. The experimental evaluation of the latter algorithm shows that it achieves a significant reduction in the number of operations needed to achieve a given accuracy compared to prior work.
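As a point of reference for the problem setup (not the paper's sublinear-time data structure), here is a minimal sketch of the naive linear-in-$k$ scan: form the empirical distribution of the samples from $p$ and return the candidate closest in total variation. All names and sizes below are illustrative.

```python
import numpy as np

def closest_candidate(candidates, samples, n):
    """candidates: (k, n) array of distributions over {0,...,n-1}; samples: draws from p."""
    emp = np.bincount(samples, minlength=n) / len(samples)  # empirical distribution of p
    tv = 0.5 * np.abs(candidates - emp).sum(axis=1)         # TV distance to each v_i
    return int(np.argmin(tv))

rng = np.random.default_rng(0)
n, k = 100, 50
V = rng.dirichlet(np.ones(n), size=k)          # k candidate distributions
samples = rng.choice(n, size=2000, p=V[7])     # sample from p = V[7]
print(closest_candidate(V, samples, n))        # expected to recover index 7
```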
https://proceedings.mlr.press/v202/abbas23a.html
https://proceedings.mlr.press/v202/abbas23a/abbas23a.pdf
https://openreview.net/forum?id=IK5SlumdGu
ClusterFuG: Clustering Fully connected Graphs by Multicut
https://proceedings.mlr.press/v202/abbas23a.html
Ahmed Abbas, Paul Swoboda
https://proceedings.mlr.press/v202/abbas23a.html
ICML 2023
We propose a graph clustering formulation based on multicut (a.k.a. weighted correlation clustering) on the complete graph. Our formulation does not need specification of the graph topology as in the original sparse formulation of multicut, making our approach simpler and potentially better performing. In contrast to unweighted correlation clustering we allow for a more expressive weighted cost structure. In dense multicut, the clustering objective is given in a factorized form as inner products of node feature vectors. This allows for an efficient formulation and inference in contrast to multicut/weighted correlation clustering, which has at least quadratic representation and computation complexity when working on the complete graph. We show how to rewrite classical greedy algorithms for multicut in our dense setting and how to modify them for greater efficiency and solution quality. In particular, our algorithms scale to graphs with tens of thousands of nodes. Empirical evidence on instance segmentation on Cityscapes and clustering of ImageNet datasets shows the merits of our approach.
https://proceedings.mlr.press/v202/abbe23a.html
https://proceedings.mlr.press/v202/abbe23a/abbe23a.pdf
https://openreview.net/forum?id=3dqwXb1te4
Generalization on the Unseen, Logic Reasoning and Degree Curriculum
https://proceedings.mlr.press/v202/abbe23a.html
Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Kevin Rizk
https://proceedings.mlr.press/v202/abbe23a.html
ICML 2023
This paper considers the learning of logical (Boolean) functions with focus on the generalization on the unseen (GOTU) setting, a strong case of out-of-distribution generalization. This is motivated by the fact that the rich combinatorial nature of data in certain reasoning tasks (e.g., arithmetic/logic) makes representative data sampling challenging, and learning successfully under GOTU gives a first vignette of an ’extrapolating’ or ’reasoning’ learner. We then study how different network architectures trained by (S)GD perform under GOTU and provide both theoretical and experimental evidence that for a class of network models including instances of Transformers, random features models, and diagonal linear networks, a min-degree-interpolator is learned on the unseen. We also provide evidence that other instances with larger learning rates or mean-field networks reach leaky min-degree solutions. These findings lead to two implications: (1) we provide an explanation to the length generalization problem (e.g., Anil et al. 2022); (2) we introduce a curriculum learning algorithm called Degree-Curriculum that learns monomials more efficiently by incrementing supports.
https://proceedings.mlr.press/v202/abedsoltan23a.html
https://proceedings.mlr.press/v202/abedsoltan23a/abedsoltan23a.pdf
https://openreview.net/forum?id=fCyg20LQsL
Toward Large Kernel Models
https://proceedings.mlr.press/v202/abedsoltan23a.html
Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit
https://proceedings.mlr.press/v202/abedsoltan23a.html
ICML 2023
Recent studies indicate that kernel machines can often perform similarly or better than deep neural networks (DNNs) on small datasets. The interest in kernel machines has been additionally bolstered by the discovery of their equivalence to wide neural networks in certain regimes. However, a key feature of DNNs is their ability to scale the model size and training data size independently, whereas in traditional kernel machines model size is tied to data size. Because of this coupling, scaling kernel machines to large data has been computationally challenging. In this paper, we provide a way forward for constructing large-scale general kernel models, which are a generalization of kernel machines that decouples the model and data, allowing training on large datasets. Specifically, we introduce EigenPro 3.0, an algorithm based on projected dual preconditioned SGD and show scaling to model and data sizes which have not been possible with existing kernel methods. We provide a PyTorch based implementation which can take advantage of multiple GPUs.
https://proceedings.mlr.press/v202/abels23a.html
https://proceedings.mlr.press/v202/abels23a/abels23a.pdf
https://openreview.net/forum?id=Fd7NCsKLPF
Expertise Trees Resolve Knowledge Limitations in Collective Decision-Making
https://proceedings.mlr.press/v202/abels23a.html
Axel Abels, Tom Lenaerts, Vito Trianni, Ann Nowe
https://proceedings.mlr.press/v202/abels23a.html
ICML 2023
Experts advising decision-makers are likely to display expertise which varies as a function of the problem instance. In practice, this may lead to sub-optimal or discriminatory decisions against minority cases. In this work, we model such changes in depth and breadth of knowledge as a partitioning of the problem space into regions of differing expertise. We provide here new algorithms that explicitly consider and adapt to the relationship between problem instances and experts’ knowledge. We first propose and highlight the drawbacks of a naive approach based on nearest neighbor queries. To address these drawbacks we then introduce a novel algorithm — expertise trees — that constructs decision trees enabling the learner to select appropriate models. We provide theoretical insights and empirically validate the improved performance of our novel approach on a range of problems for which existing methods proved to be inadequate.
https://proceedings.mlr.press/v202/acharki23a.html
https://proceedings.mlr.press/v202/acharki23a/acharki23a.pdf
https://openreview.net/forum?id=lJaAPdXgxL
Comparison of meta-learners for estimating multi-valued treatment heterogeneous effects
https://proceedings.mlr.press/v202/acharki23a.html
Naoufal Acharki, Ramiro Lugo, Antoine Bertoncello, Josselin Garnier
https://proceedings.mlr.press/v202/acharki23a.html
ICML 2023
Conditional Average Treatment Effects (CATE) estimation is one of the main challenges in causal inference with observational data. In addition to Machine Learning-based models, nonparametric estimators called meta-learners have been developed to estimate the CATE with the main advantage of not restraining the estimation to a specific supervised learning method. This task becomes, however, more complicated when the treatment is not binary, as some limitations of the naive extensions emerge. This paper looks into meta-learners for estimating the heterogeneous effects of multi-valued treatments. We consider different meta-learners, and we carry out a theoretical analysis of their error upper bounds as functions of important parameters such as the number of treatment levels, showing that the naive extensions do not always provide satisfactory results. We introduce and discuss meta-learners that perform well as the number of treatments increases. We empirically confirm the strengths and weaknesses of those methods with synthetic and semi-synthetic datasets.
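For context, a minimal sketch of one standard meta-learner extended to multi-valued treatments (a T-learner, fitting one outcome model per treatment level). This illustrates the family of estimators under study, not necessarily the estimator the paper recommends.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner(X, t, y, n_levels):
    """Fit one outcome model per treatment level t in {0,...,n_levels-1}."""
    models = [GradientBoostingRegressor().fit(X[t == level], y[t == level])
              for level in range(n_levels)]
    def cate(X_new, level):
        # Heterogeneous effect of treatment `level` relative to control (level 0).
        return models[level].predict(X_new) - models[0].predict(X_new)
    return cate
```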
https://proceedings.mlr.press/v202/adams23a.html
https://proceedings.mlr.press/v202/adams23a/adams23a.pdf
https://openreview.net/forum?id=wHPDEyYEps
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming
https://proceedings.mlr.press/v202/adams23a.html
Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti
https://proceedings.mlr.press/v202/adams23a.html
ICML 2023
In this paper, we introduce BNN-DP, an efficient algorithmic framework for analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a compact set of input points $T\subset \mathbb{R}^n$, BNN-DP computes lower and upper bounds on the BNN’s predictions for all the points in $T$. The framework is based on an interpretation of BNNs as stochastic dynamical systems, which enables the use of Dynamic Programming (DP) algorithms to bound the prediction range along the layers of the network. Specifically, the method uses bound propagation techniques and convex relaxations to derive a backward recursion procedure to over-approximate the prediction range of the BNN with piecewise affine functions. The algorithm is general and can handle both regression and classification tasks. On a set of experiments on various regression and classification tasks and BNN architectures, we show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.
https://proceedings.mlr.press/v202/agarwala23a.html
https://proceedings.mlr.press/v202/agarwala23a/agarwala23a.pdf
https://openreview.net/forum?id=5YAP9Ntq3L
SAM operates far from home: eigenvalue regularization as a dynamical phenomenon
https://proceedings.mlr.press/v202/agarwala23a.html
Atish Agarwala, Yann Dauphin
https://proceedings.mlr.press/v202/agarwala23a.html
ICML 2023
The Sharpness Aware Minimization (SAM) optimization algorithm has been shown to control large eigenvalues of the loss Hessian and provide generalization benefits in a variety of settings. The original motivation for SAM was a modified loss function which penalized sharp minima; subsequent analyses have also focused on the behavior near minima. However, our work reveals that SAM provides a strong regularization of the eigenvalues throughout the learning trajectory. We show that in a simplified setting, SAM dynamically induces a stabilization related to the edge of stability (EOS) phenomenon observed in large learning rate gradient descent. Our theory predicts the largest eigenvalue as a function of the learning rate and SAM radius parameters. Finally, we show that practical models can also exhibit this EOS stabilization, and that understanding SAM must account for these dynamics far away from any minima.
https://proceedings.mlr.press/v202/agarwala23b.html
https://proceedings.mlr.press/v202/agarwala23b/agarwala23b.pdf
https://openreview.net/forum?id=mP79L3pOke
Second-order regression models exhibit progressive sharpening to the edge of stability
https://proceedings.mlr.press/v202/agarwala23b.html
Atish Agarwala, Fabian Pedregosa, Jeffrey Pennington
https://proceedings.mlr.press/v202/agarwala23b.html
ICML 2023
Recent studies of gradient descent with large step sizes have shown that there is often a regime with an initial increase in the largest eigenvalue of the loss Hessian (progressive sharpening), followed by a stabilization of the eigenvalue near the maximum value which allows convergence (edge of stability). These phenomena are intrinsically non-linear and do not happen for models in the constant Neural Tangent Kernel (NTK) regime, for which the predictive function is approximately linear in the parameters. As such, we consider the next simplest class of predictive models, namely those that are quadratic in the parameters, which we call second-order regression models. For quadratic objectives in two dimensions, we prove that this second-order regression model exhibits progressive sharpening of the NTK eigenvalue towards a value that differs slightly from the edge of stability, which we explicitly compute. In higher dimensions, the model generically shows similar behavior, even without the specific structure of a neural network, suggesting that progressive sharpening and edge-of-stability behavior aren’t unique features of neural networks, and could be a more general property of discrete learning algorithms in high-dimensional non-linear models.
https://proceedings.mlr.press/v202/agazzi23a.html
https://proceedings.mlr.press/v202/agazzi23a/agazzi23a.pdf
https://openreview.net/forum?id=szQzz2H8er
Global optimality of Elman-type RNNs in the mean-field regime
https://proceedings.mlr.press/v202/agazzi23a.html
Andrea Agazzi, Jianfeng Lu, Sayan Mukherjee
https://proceedings.mlr.press/v202/agazzi23a.html
ICML 2023
We analyze Elman-type recurrent neural networks (RNNs) and their training in the mean-field regime. Specifically, we show convergence of gradient descent training dynamics of the RNN to the corresponding mean-field formulation in the large width limit. We also show that the fixed points of the limiting infinite-width dynamics are globally optimal, under some assumptions on the initialization of the weights. Our results establish optimality for feature-learning with wide RNNs in the mean-field regime.
https://proceedings.mlr.press/v202/aggarwal23a.html
https://proceedings.mlr.press/v202/aggarwal23a/aggarwal23a.pdf
https://openreview.net/forum?id=kwb6T6LP7f
SemSup-XC: Semantic Supervision for Zero and Few-shot Extreme Classification
https://proceedings.mlr.press/v202/aggarwal23a.html
Pranjal Aggarwal, Ameet Deshpande, Karthik R Narasimhan
https://proceedings.mlr.press/v202/aggarwal23a.html
ICML 2023
Extreme classification (XC) involves predicting over large numbers of classes (thousands to millions), with real-world applications like news article classification and e-commerce product tagging. The zero-shot version of this task requires generalization to novel classes without additional supervision. In this paper, we develop SemSup-XC, a model that achieves state-of-the-art zero-shot and few-shot performance on three XC datasets derived from legal, e-commerce, and Wikipedia data. To develop SemSup-XC, we use automatically collected semantic class descriptions to represent classes and facilitate generalization through a novel hybrid matching module that matches input instances to class descriptions using a combination of semantic and lexical similarity. Trained with contrastive learning, SemSup-XC significantly outperforms baselines and establishes state-of-the-art performance on all three datasets considered, gaining up to 12 precision points on zero-shot and more than 10 precision points on one-shot tests, with similar gains for recall@10. Our ablation studies highlight the relative importance of our hybrid matching module and automatically collected class descriptions.
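A rough sketch of the hybrid semantic-plus-lexical matching idea, with illustrative placeholders (the mixing weight `alpha`, the pre-computed embeddings, and the token sets are assumptions, not the paper's trained module):

```python
import numpy as np

def hybrid_scores(input_vec, input_tokens, class_vecs, class_token_sets, alpha=0.7):
    """Score each class by combining embedding similarity with lexical overlap."""
    # Semantic part: cosine similarity between the input embedding and class-description embeddings.
    sem = class_vecs @ input_vec / (
        np.linalg.norm(class_vecs, axis=1) * np.linalg.norm(input_vec) + 1e-9)
    # Lexical part: Jaccard overlap between input tokens and class-description tokens.
    lex = np.array([len(input_tokens & s) / max(len(input_tokens | s), 1)
                    for s in class_token_sets])
    return alpha * sem + (1 - alpha) * lex  # rank classes by the combined score
```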
https://proceedings.mlr.press/v202/aghabozorgi23a.html
https://proceedings.mlr.press/v202/aghabozorgi23a/aghabozorgi23a.pdf
https://openreview.net/forum?id=CNq0JvrDfw
Adaptive IMLE for Few-shot Pretraining-free Generative Modelling
https://proceedings.mlr.press/v202/aghabozorgi23a.html
Mehran Aghabozorgi, Shichong Peng, Ke Li
https://proceedings.mlr.press/v202/aghabozorgi23a.html
ICML 2023
Despite their success on large datasets, GANs have been difficult to apply in the few-shot setting, where only a limited number of training examples are provided. Due to mode collapse, GANs tend to ignore some training examples, causing overfitting to a subset of the training dataset, which is small in the first place. A recent method called Implicit Maximum Likelihood Estimation (IMLE) is an alternative to GANs that tries to address this issue. It uses the same kind of generators as GANs but trains them with a different objective that encourages mode coverage. However, the theoretical guarantees of IMLE hold under the restrictive condition that the optimal likelihood at all data points is the same. In this paper, we present a more generalized formulation of IMLE which includes the original formulation as a special case, and we prove that the theoretical guarantees hold under weaker conditions. Using this generalized formulation, we further derive a new algorithm, which we dub Adaptive IMLE, which can adapt to the varying difficulty of different training examples. We demonstrate on multiple few-shot image synthesis datasets that our method significantly outperforms existing methods. Our code is available at https://github.com/mehranagh20/AdaIMLE.
https://proceedings.mlr.press/v202/aghajanyan23a.html
https://proceedings.mlr.press/v202/aghajanyan23a/aghajanyan23a.pdf
https://openreview.net/forum?id=2n7dHVhwJf
Scaling Laws for Generative Mixed-Modal Language Models
https://proceedings.mlr.press/v202/aghajanyan23a.html
Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer
https://proceedings.mlr.press/v202/aghajanyan23a.html
ICML 2023
Generative language models define distributions over sequences of tokens that can represent essentially any combination of data modalities (e.g., any permutation of image tokens from VQ-VAEs, speech tokens from HuBERT, BPE tokens for language or code, and so on). To better understand the scaling properties of such mixed-modal models, we conducted over 250 experiments using seven different modalities and model sizes ranging from 8 million to 30 billion, trained on 5-100 billion tokens. We report new mixed-modal scaling laws that unify the contributions of individual modalities and the interactions between them. Specifically, we explicitly model the optimal synergy and competition due to data and model size as an additive term to previous uni-modal scaling laws. We also find four empirical phenomena observed during the training, such as emergent coordinate-ascent style training that naturally alternates between modalities, guidelines for selecting critical hyper-parameters, and connections between mixed-modal competition and training stability. Finally, we test our scaling law by training a 30B speech-text model, which significantly outperforms the corresponding unimodal models. Overall, our research provides valuable insights into the design and training of mixed-modal generative models, an important new class of unified models that have unique distributional properties.
https://proceedings.mlr.press/v202/aghbalou23a.html
https://proceedings.mlr.press/v202/aghbalou23a/aghbalou23a.pdf
https://openreview.net/forum?id=Dg5H4Qd0dZ
Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability
https://proceedings.mlr.press/v202/aghbalou23a.html
Anass Aghbalou, Guillaume Staerman
https://proceedings.mlr.press/v202/aghbalou23a.html
ICML 2023
Hypothesis transfer learning (HTL) contrasts with domain adaptation by allowing knowledge from a previous task, named the source, to be leveraged in a new one, the target, without requiring access to the source data. Indeed, HTL relies only on a hypothesis learnt from such source data, relieving the hurdle of expensive data storage and providing great practical benefits. Hence, HTL is highly beneficial for real-world applications relying on big data. The analysis of such a method from a theoretical perspective faces multiple challenges, particularly in classification tasks. This paper deals with this problem by studying the learning theory of HTL through algorithmic stability, an attractive theoretical framework for the analysis of machine learning algorithms. In particular, we are interested in the statistical behavior of the regularized empirical risk minimizers in the case of binary classification. Our stability analysis provides learning guarantees under mild assumptions. Consequently, we derive several complexity-free generalization bounds for essential statistical quantities like the training error, the excess risk and cross-validation estimates. These refined bounds allow us to understand the benefits of transfer learning and to compare the behavior of standard losses in different scenarios, leading to valuable insights for practitioners.
https://proceedings.mlr.press/v202/aglietti23a.html
https://proceedings.mlr.press/v202/aglietti23a/aglietti23a.pdf
https://openreview.net/forum?id=60bhXDeTos
Constrained Causal Bayesian Optimization
https://proceedings.mlr.press/v202/aglietti23a.html
Virginia Aglietti, Alan Malek, Ira Ktena, Silvia Chiappa
https://proceedings.mlr.press/v202/aglietti23a.html
ICML 2023
We propose constrained causal Bayesian optimization (cCBO), an approach for finding interventions in a known causal graph that optimize a target variable under some constraints. cCBO first reduces the search space by exploiting the graph structure and, if available, an observational dataset; and then solves the restricted optimization problem by modelling target and constraint quantities using Gaussian processes and by sequentially selecting interventions via a constrained expected improvement acquisition function. We propose different surrogate models that make it possible to integrate observational and interventional data while capturing correlations among effects with increasing levels of sophistication. We evaluate cCBO on artificial and real-world causal graphs, showing a successful trade-off between fast convergence and the percentage of feasible interventions.
https://proceedings.mlr.press/v202/agoritsas23a.html
https://proceedings.mlr.press/v202/agoritsas23a/agoritsas23a.pdf
https://openreview.net/forum?id=DF9aUqGzsV
Explaining the effects of non-convergent MCMC in the training of Energy-Based Models
https://proceedings.mlr.press/v202/agoritsas23a.html
Elisabeth Agoritsas, Giovanni Catania, Aurélien Decelle, Beatriz Seoane
https://proceedings.mlr.press/v202/agoritsas23a.html
ICML 2023
In this paper, we quantify the impact of using non-convergent Markov chains to train Energy-Based models (EBMs). In particular, we show analytically that EBMs trained with non-persistent short runs to estimate the gradient can perfectly reproduce a set of empirical statistics of the data, not at the level of the equilibrium measure, but through a precise dynamical process. Our results provide a first-principles explanation for the observations of recent works proposing the strategy of using short runs starting from random initial conditions as an efficient way to generate high-quality samples in EBMs, and lay the groundwork for using EBMs as diffusion models. After explaining this effect in generic EBMs, we analyze two solvable models in which the effect of the non-convergent sampling in the trained parameters can be described in detail. Finally, we test these predictions numerically on a ConvNet EBM and a Boltzmann machine.
https://proceedings.mlr.press/v202/aher23a.html
https://proceedings.mlr.press/v202/aher23a/aher23a.pdf
https://openreview.net/forum?id=eYlLlvzngu
Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
https://proceedings.mlr.press/v202/aher23a.html
Gati V Aher, Rosa I. Arriaga, Adam Tauman Kalai
https://proceedings.mlr.press/v202/aher23a.html
ICML 2023
We introduce a new type of test, called a Turing Experiment (TE), for evaluating to what extent a given language model, such as GPT models, can simulate different aspects of human behavior. A TE can also reveal consistent distortions in a language model’s simulation of a specific human behavior. Unlike the Turing Test, which involves simulating a single arbitrary individual, a TE requires simulating a representative sample of participants in human subject research. We carry out TEs that attempt to replicate well-established findings from prior studies. We design a methodology for simulating TEs and illustrate its use to compare how well different language models are able to reproduce classic economic, psycholinguistic, and social psychology experiments: Ultimatum Game, Garden Path Sentences, Milgram Shock Experiment, and Wisdom of Crowds. In the first three TEs, the existing findings were replicated using recent models, while the last TE reveals a “hyper-accuracy distortion” present in some language models (including ChatGPT and GPT-4), which could affect downstream applications in education and the arts.
https://proceedings.mlr.press/v202/ahuja23a.html
https://proceedings.mlr.press/v202/ahuja23a/ahuja23a.pdf
https://openreview.net/forum?id=YiWzhu9pl6
Interventional Causal Representation Learning
https://proceedings.mlr.press/v202/ahuja23a.html
Kartik Ahuja, Divyat Mahajan, Yixin Wang, Yoshua Bengio
https://proceedings.mlr.press/v202/ahuja23a.html
ICML 2023
Causal representation learning seeks to extract high-level latent factors from low-level sensory data. Most existing methods rely on observational data and structural assumptions (e.g., conditional independence) to identify the latent factors. However, interventional data is prevalent across applications. Can interventional data facilitate causal representation learning? We explore this question in this paper. The key observation is that interventional data often carries geometric signatures of the latent factors’ support (i.e. what values each latent can possibly take). For example, when the latent factors are causally connected, interventions can break the dependency between the intervened latents’ support and their ancestors’. Leveraging this fact, we prove that the latent causal factors can be identified up to permutation and scaling given data from perfect do interventions. Moreover, we can achieve block affine identification, namely the estimated latent factors are only entangled with a few other latents if we have access to data from imperfect interventions. These results highlight the unique power of interventional data in causal representation learning; they can enable provable identification of latent factors without any assumptions about their distributions or dependency structure.
https://proceedings.mlr.press/v202/ailer23a.html
https://proceedings.mlr.press/v202/ailer23a/ailer23a.pdf
https://openreview.net/forum?id=dT7uMuZJjf
Sequential Underspecified Instrument Selection for Cause-Effect Estimation
https://proceedings.mlr.press/v202/ailer23a.html
Elisabeth Ailer, Jason Hartford, Niki Kilbertus
https://proceedings.mlr.press/v202/ailer23a.html
ICML 2023
Instrumental variable (IV) methods are used to estimate causal effects in settings with unobserved confounding, where we cannot directly experiment on the treatment variable. Instruments are variables which only affect the outcome indirectly via the treatment variable(s). Most IV applications focus on low-dimensional treatments and crucially require at least as many instruments as treatments. This assumption is restrictive: in the natural sciences we often seek to infer causal effects of high-dimensional treatments (e.g., the effect of gene expressions or microbiota on health and disease), but can only run few experiments with a limited number of instruments (e.g., drugs or antibiotics). In such under-specified problems, the full treatment effect is not identifiable in a single experiment even in the linear case. We show that one can still reliably recover the projection of the treatment effect onto the instrumented subspace and develop techniques to consistently combine such partial estimates from different sets of instruments. We then leverage our combined estimators in an algorithm that iteratively proposes the most informative instruments at each round of experimentation to maximize the overall information about the full causal effect.
https://proceedings.mlr.press/v202/aitchison23a.html
https://proceedings.mlr.press/v202/aitchison23a/aitchison23a.pdf
https://openreview.net/forum?id=xRDHjO0YBo
Atari-5: Distilling the Arcade Learning Environment down to Five Games
https://proceedings.mlr.press/v202/aitchison23a.html
Matthew Aitchison, Penny Sweetser, Marcus Hutter
https://proceedings.mlr.press/v202/aitchison23a.html
ICML 2023
The Arcade Learning Environment (ALE) has become an essential benchmark for assessing the performance of reinforcement learning algorithms. However, the computational cost of generating results on the entire 57-game dataset limits ALE’s use and makes the reproducibility of many results infeasible. We propose a novel solution to this problem in the form of a principled methodology for selecting small but representative subsets of environments within a benchmark suite. We applied our method to identify a subset of five ALE games, which we call Atari-5, that produces 57-game median score estimates within 10% of their true values. Extending the subset to 10 games recovers 80% of the variance of log-scores for all games within the 57-game set. We show this level of compression is possible due to a high degree of correlation between many of the games in ALE.
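The regression idea behind such a subset can be illustrated with a few lines of least squares on synthetic data (the actual Atari-5 games and coefficients come from the paper's selection procedure; everything below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_algorithms, n_subset = 60, 5
subset_logs = rng.normal(size=(n_algorithms, n_subset))     # log-scores on the 5 subset games
# Synthetic stand-in for each algorithm's 57-game summary score.
target = subset_logs @ rng.normal(size=n_subset) + 0.1 * rng.normal(size=n_algorithms)

X = np.hstack([subset_logs, np.ones((n_algorithms, 1))])    # add an intercept column
w, *_ = np.linalg.lstsq(X, target, rcond=None)              # fit the summary from the subset
print("mean absolute error:", np.abs(X @ w - target).mean())
```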
https://proceedings.mlr.press/v202/akhtar23a.html
https://proceedings.mlr.press/v202/akhtar23a/akhtar23a.pdf
https://openreview.net/forum?id=cHZBCZmfSo
Towards credible visual model interpretation with path attribution
https://proceedings.mlr.press/v202/akhtar23a.html
Naveed Akhtar, Mohammad A. A. K. Jalwana
https://proceedings.mlr.press/v202/akhtar23a.html
ICML 2023
With its inspirational roots in game theory, the path attribution framework stands out among post-hoc model interpretation techniques due to its axiomatic nature. However, recent developments show that despite being axiomatic, path attribution methods can compute counter-intuitive feature attributions. Moreover, for deep visual models, the methods may not conform to the original game-theoretic intuitions that are the basis of their axiomatic nature. To address these issues, we perform a systematic investigation of the path attribution framework. We first pinpoint the conditions in which the counter-intuitive attributions of deep visual models can be avoided under this framework. Then, we identify a mechanism of integrating the attributions over the paths such that they computationally conform to the original insights of game theory. These insights are eventually combined into a method, which provides intuitive and reliable feature attributions. We also establish the findings empirically by evaluating the method on multiple datasets, models and evaluation metrics. Extensive experiments show a consistent quantitative and qualitative gain in the results over the baselines.
https://proceedings.mlr.press/v202/alacaoglu23a.html
https://proceedings.mlr.press/v202/alacaoglu23a/alacaoglu23a.pdf
https://openreview.net/forum?id=UZmfIzyTvW
Convergence of First-Order Methods for Constrained Nonconvex Optimization with Dependent Data
https://proceedings.mlr.press/v202/alacaoglu23a.html
Ahmet Alacaoglu, Hanbaek Lyu
https://proceedings.mlr.press/v202/alacaoglu23a.html
ICML 2023
We focus on analyzing the classical stochastic projected gradient methods under a general dependent data sampling scheme for constrained smooth nonconvex optimization. We show the worst-case rate of convergence $\tilde{O}(t^{-1/4})$ and complexity $\tilde{O}(\varepsilon^{-4})$ for achieving an $\varepsilon$-near stationary point in terms of the norm of the gradient of Moreau envelope and gradient mapping. While classical convergence guarantee requires i.i.d. data sampling from the target distribution, we only require a mild mixing condition of the conditional distribution, which holds for a wide class of Markov chain sampling algorithms. This improves the existing complexity for the constrained smooth nonconvex optimization with dependent data from $\tilde{O}(\varepsilon^{-8})$ to $\tilde{O}(\varepsilon^{-4})$ with a significantly simpler analysis. We illustrate the generality of our approach by deriving convergence results with dependent data for stochastic proximal gradient methods, adaptive stochastic gradient algorithm AdaGrad and stochastic gradient algorithm with heavy ball momentum. As an application, we obtain first online nonnegative matrix factorization algorithms for dependent data based on stochastic projected gradient methods with adaptive step sizes and optimal rate of convergence.
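For intuition, a generic stochastic projected gradient sketch with a sequential (Markov-chain-like) pass over the data instead of i.i.d. sampling; the step sizes and projection set are illustrative, and none of the paper's analysis is reproduced here.

```python
import numpy as np

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def projected_sgd(grad_fn, data, x0, steps=1000, eta0=0.1):
    x = x0.copy()
    idx = 0
    for t in range(1, steps + 1):
        idx = (idx + 1) % len(data)          # dependent sampling: sequential pass over the data
        g = grad_fn(x, data[idx])            # stochastic gradient at the current sample
        x = project_ball(x - (eta0 / np.sqrt(t)) * g)   # projected step, decaying step size
    return x
```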
https://proceedings.mlr.press/v202/alam23a.html
https://proceedings.mlr.press/v202/alam23a/alam23a.pdf
https://openreview.net/forum?id=CTZHb6PrHF
Recasting Self-Attention with Holographic Reduced Representations
https://proceedings.mlr.press/v202/alam23a.html
Mohammad Mahmudul Alam, Edward Raff, Stella Biderman, Tim Oates, James Holt
https://proceedings.mlr.press/v202/alam23a.html
ICML 2023
In recent years, self-attention has become the dominant paradigm for sequence modeling in a variety of domains. However, in domains with very long sequence lengths the $\mathcal{O}(T^2)$ memory and $\mathcal{O}(T^2 H)$ compute costs can make using transformers infeasible. Motivated by problems in malware detection, where sequence lengths of $T \geq 100,000$ are a roadblock to deep learning, we re-cast self-attention using the neuro-symbolic approach of Holographic Reduced Representations (HRR). In doing so we perform the same high-level strategy of the standard self-attention: a set of queries matching against a set of keys, and returning a weighted response of the values for each key. Implemented as a “Hrrformer” we obtain several benefits including $\mathcal{O}(T H \log H)$ time complexity, $\mathcal{O}(T H)$ space complexity, and convergence in $10\times$ fewer epochs. Nevertheless, the Hrrformer achieves near state-of-the-art accuracy on LRA benchmarks and we are able to learn with just a single layer. Combined, these benefits make our Hrrformer the first viable Transformer for such long malware classification sequences and up to $280\times$ faster to train on the Long Range Arena benchmark.
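The HRR primitive the abstract refers to is easy to state in code: binding is circular convolution and retrieval is circular correlation, both computable with FFTs. This is only the underlying neuro-symbolic operation, not the Hrrformer layer itself.

```python
import numpy as np

def bind(a, b):
    # Circular convolution: the HRR binding operator.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    # Circular correlation: retrieves (a noisy copy of) the vector bound to `a`.
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(a))))

d = 1024
rng = np.random.default_rng(0)
key, value = rng.normal(0, 1 / np.sqrt(d), size=(2, d))   # HRR vectors ~ N(0, 1/d)
recovered = unbind(bind(key, value), key)
print(np.dot(recovered, value) /
      (np.linalg.norm(recovered) * np.linalg.norm(value)))  # high cosine similarity with `value`
```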
https://proceedings.mlr.press/v202/alghamdi23a.html
https://proceedings.mlr.press/v202/alghamdi23a/alghamdi23a.pdf
https://openreview.net/forum?id=IK7UWsjhUp
The Saddle-Point Method in Differential Privacy
https://proceedings.mlr.press/v202/alghamdi23a.html
Wael Alghamdi, Juan Felipe Gomez, Shahab Asoodeh, Flavio Calmon, Oliver Kosut, Lalitha Sankar
https://proceedings.mlr.press/v202/alghamdi23a.html
ICML 2023
We characterize the differential privacy guarantees of privacy mechanisms in the large-composition regime, i.e., when a privacy mechanism is sequentially applied a large number of times to sensitive data. Via exponentially tilting the privacy loss random variable, we derive a new formula for the privacy curve expressing it as a contour integral over an integration path that runs parallel to the imaginary axis with a free real-axis intercept. Then, using the method of steepest descent from mathematical physics, we demonstrate that the choice of saddle-point as the real-axis intercept yields closed-form accurate approximations of the desired contour integral. This procedure—dubbed the saddle-point accountant (SPA)—yields a constant-time accurate approximation of the privacy curve. Theoretically, our results can be viewed as a refinement of both Gaussian Differential Privacy and the moments accountant method found in Rényi Differential Privacy. In practice, we demonstrate through numerical experiments that the SPA provides a precise approximation of privacy guarantees competitive with purely numerical-based methods (such as FFT-based accountants), while enjoying closed-form mathematical expressions.
https://proceedings.mlr.press/v202/ali-mehmeti-gopel23a.html
https://proceedings.mlr.press/v202/ali-mehmeti-gopel23a/ali-mehmeti-gopel23a.pdf
https://openreview.net/forum?id=tAa6ivLs6D
Nonlinear Advantage: Trained Networks Might Not Be As Complex as You Think
https://proceedings.mlr.press/v202/ali-mehmeti-gopel23a.html
Christian H.X. Ali Mehmeti-Göpel, Jan Disselhoff
https://proceedings.mlr.press/v202/ali-mehmeti-gopel23a.html
ICML 2023
We perform an empirical study of the behaviour of deep networks when fully linearizing some of their feature channels through a sparsity prior on the overall number of nonlinear units in the network. In experiments on image classification and machine translation tasks, we investigate how much we can simplify the network function towards linearity before performance collapses. First, we observe a significant performance gap when reducing nonlinearity in the network function early on as opposed to late in training, in line with recent observations on the time-evolution of the data-dependent NTK. Second, we find that after training, we are able to linearize a significant number of nonlinear units while maintaining a high performance, indicating that much of a network’s expressivity remains unused but helps gradient descent in early stages of training. To characterize the depth of the resulting partially linearized network, we introduce a measure called average path length, representing the average number of active nonlinearities encountered along a path in the network graph. Under sparsity pressure, we find that the remaining nonlinear units organize into distinct structures, forming core-networks of near constant effective depth and width, which in turn depend on task difficulty.
https://proceedings.mlr.press/v202/allingham23a.html
https://proceedings.mlr.press/v202/allingham23a/allingham23a.pdf
https://openreview.net/forum?id=6MU5xdrO7t
A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models
https://proceedings.mlr.press/v202/allingham23a.html
James Urquhart Allingham, Jie Ren, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, Balaji Lakshminarayanan
https://proceedings.mlr.press/v202/allingham23a.html
ICML 2023
Contrastively trained text-image models have the remarkable ability to perform zero-shot classification, that is, classifying previously unseen images into categories that the model has never been explicitly trained to identify. However, these zero-shot classifiers need prompt engineering to achieve high accuracy. Prompt engineering typically requires hand-crafting a set of prompts for individual downstream tasks. In this work, we aim to automate this prompt engineering and improve zero-shot accuracy through prompt ensembling. In particular, we ask “Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?”. We demonstrate that this is possible. In doing so, we identify several pathologies in a naive prompt scoring method where the score can be easily overconfident due to biases in pre-training and test data, and we propose a novel prompt scoring method that corrects for the biases. Using our proposed scoring method to create a weighted average prompt ensemble, our method overall outperforms the equal-average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its variants, and 11 fine-grained classification benchmarks, while being fully automatic, optimization-free, and not requiring access to labeled validation data.
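A minimal sketch of prompt ensembling for zero-shot classification; `embed` is a stand-in for the text encoder of a contrastive text-image model, and the equal average below is the baseline that the paper's learned, bias-corrected weights would replace.

```python
import numpy as np

def normalize(v):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-9)

def class_embeddings(embed, class_names, templates, weights=None):
    """Build one ensembled embedding per class from a pool of prompt templates."""
    weights = np.ones(len(templates)) / len(templates) if weights is None else weights
    embs = []
    for name in class_names:
        prompts = [t.format(name) for t in templates]          # e.g. "a photo of a {}."
        e = normalize(np.stack([embed(p) for p in prompts]))   # (num_prompts, dim)
        embs.append(normalize(weights @ e))                    # weighted ensemble, re-normalized
    return np.stack(embs)                                      # (num_classes, dim)

def zero_shot_predict(image_emb, class_embs):
    return int(np.argmax(class_embs @ normalize(image_emb)))
```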
https://proceedings.mlr.press/v202/allouah23a.html
https://proceedings.mlr.press/v202/allouah23a/allouah23a.pdf
https://openreview.net/forum?id=5WxdnjlCv7
On the Privacy-Robustness-Utility Trilemma in Distributed Learning
https://proceedings.mlr.press/v202/allouah23a.html
Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
https://proceedings.mlr.press/v202/allouah23a.html
ICML 2023
The ubiquity of distributed machine learning (ML) in sensitive public domain applications calls for algorithms that protect data privacy, while being robust to faults and adversarial behaviors. Although privacy and robustness have been extensively studied independently in distributed ML, their synthesis remains poorly understood. We present the first tight analysis of the error incurred by any algorithm ensuring robustness against a fraction of adversarial machines, as well as differential privacy (DP) for honest machines’ data against any other curious entity. Our analysis exhibits a fundamental trade-off between privacy, robustness, and utility. To prove our lower bound, we consider the case of mean estimation, subject to distributed DP and robustness constraints, and devise reductions to centralized estimation of one-way marginals. We prove our matching upper bound by presenting a new distributed ML algorithm using a high-dimensional robust aggregation rule. The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.
https://proceedings.mlr.press/v202/alparslan23a.html
https://proceedings.mlr.press/v202/alparslan23a/alparslan23a.pdf
https://openreview.net/forum?id=O3adXl7uBw
Differentially Private Distributed Bayesian Linear Regression with MCMC
https://proceedings.mlr.press/v202/alparslan23a.html
Baris Alparslan, Sinan Yıldırım, Ilker Birbil
https://proceedings.mlr.press/v202/alparslan23a.html
ICML 2023
We propose a novel Bayesian inference framework for distributed differentially private linear regression. We consider a distributed setting where multiple parties hold parts of the data and share certain summary statistics of their portions in privacy-preserving noise. We develop a novel generative statistical model for privately shared statistics, which exploits a useful distributional relation between the summary statistics of linear regression. We propose Bayesian estimation of the regression coefficients, mainly using Markov chain Monte Carlo algorithms, while we also provide a fast version that performs approximate Bayesian estimation in one iteration. The proposed methods have computational advantages over their competitors. We provide numerical results on both real and simulated data, which demonstrate that the proposed algorithms provide well-rounded estimation and prediction.
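A heavily simplified sketch of the "share noisy summary statistics" setting (Gaussian noise added to X'X and X'y, plug-in estimate at the server). The actual noise calibration to the privacy budget and the paper's full MCMC generative model are omitted, so treat this only as an illustration of the data flow.

```python
import numpy as np

def noisy_statistics(X, y, sigma, rng):
    """Each party perturbs its summary statistics before sharing them."""
    d = X.shape[1]
    S = X.T @ X + rng.normal(0, sigma, size=(d, d))
    S = (S + S.T) / 2                               # keep the perturbed Gram matrix symmetric
    z = X.T @ y + rng.normal(0, sigma, size=d)
    return S, z

def combine_and_estimate(stats, prior_precision=1.0):
    """Server-side plug-in 'posterior mean' under an isotropic Gaussian prior."""
    d = stats[0][0].shape[0]
    S_total = sum(S for S, _ in stats)
    z_total = sum(z for _, z in stats)
    return np.linalg.solve(S_total + prior_precision * np.eye(d), z_total)
```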
https://proceedings.mlr.press/v202/altamirano23a.html
https://proceedings.mlr.press/v202/altamirano23a/altamirano23a.pdf
https://openreview.net/forum?id=jWmHbfKeQF
Robust and Scalable Bayesian Online Changepoint Detection
https://proceedings.mlr.press/v202/altamirano23a.html
Matias Altamirano, Francois-Xavier Briol, Jeremias Knoblauch
https://proceedings.mlr.press/v202/altamirano23a.html
ICML 2023
This paper proposes an online, provably robust, and scalable Bayesian approach for changepoint detection. The resulting algorithm has key advantages over previous work: it provides provable robustness by leveraging the generalised Bayesian perspective, and also addresses the scalability issues of previous attempts. Specifically, the proposed generalised Bayesian formalism leads to conjugate posteriors whose parameters are available in closed form by leveraging diffusion score matching. The resulting algorithm is exact, can be updated through simple algebra, and is more than 10 times faster than its closest competitor.
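For orientation, here is the standard Bayesian online changepoint detection recursion (Adams–MacKay style, with a conjugate Gaussian known-variance model) that such methods build on; the paper keeps this online structure but replaces the Bayesian update with a robust, diffusion-score-matching-based generalised-Bayes update, which is not shown here.

```python
import numpy as np
from scipy.stats import norm

def bocd(data, hazard=0.01, mu0=0.0, kappa0=1.0, obs_var=1.0):
    """Run-length posterior for a stream with Gaussian observations of unknown mean."""
    T = len(data)
    log_R = np.full((T + 1, T + 1), -np.inf)
    log_R[0, 0] = 0.0
    mu, kappa = np.array([mu0]), np.array([kappa0])
    for t, x in enumerate(data, start=1):
        log_pred = norm.logpdf(x, loc=mu, scale=np.sqrt(obs_var * (1 + 1 / kappa)))
        log_R[t, 1:t + 1] = log_R[t - 1, :t] + log_pred + np.log(1 - hazard)              # growth
        log_R[t, 0] = np.logaddexp.reduce(log_R[t - 1, :t] + log_pred + np.log(hazard))   # changepoint
        log_R[t] -= np.logaddexp.reduce(log_R[t])                                         # normalize
        mu = np.concatenate(([mu0], (kappa * mu + x) / (kappa + 1)))                      # conjugate updates
        kappa = np.concatenate(([kappa0], kappa + 1))
    return np.exp(log_R)
```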
https://proceedings.mlr.press/v202/altekruger23a.html
https://proceedings.mlr.press/v202/altekruger23a/altekruger23a.pdf
https://openreview.net/forum?id=Ur1Eckuj3V
Neural Wasserstein Gradient Flows for Discrepancies with Riesz Kernels
https://proceedings.mlr.press/v202/altekruger23a.html
Fabian Altekrüger, Johannes Hertrich, Gabriele Steidl
https://proceedings.mlr.press/v202/altekruger23a.html
ICML 2023
Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals with non-smooth Riesz kernels show a rich structure as singular measures can become absolutely continuous ones and conversely. In this paper we contribute to the understanding of such flows. We propose to approximate the backward scheme of Jordan, Kinderlehrer and Otto for computing such Wasserstein gradient flows as well as a forward scheme for so-called Wasserstein steepest descent flows by neural networks (NNs). Since we cannot restrict ourselves to absolutely continuous measures, we have to deal with transport plans and velocity plans instead of usual transport maps and velocity fields. Indeed, we approximate the disintegration of both plans by generative NNs which are learned with respect to appropriate loss functions. In order to evaluate the quality of both neural schemes, we benchmark them on the interaction energy. Here we provide analytic formulas for Wasserstein schemes starting at a Dirac measure and show their convergence as the time step size tends to zero. Finally, we illustrate our neural MMD flows by numerical examples.
https://proceedings.mlr.press/v202/amani23a.html
https://proceedings.mlr.press/v202/amani23a/amani23a.pdf
https://openreview.net/forum?id=vTSLiw1GfJ
Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost
https://proceedings.mlr.press/v202/amani23a.html
Sanae Amani, Tor Lattimore, András György, Lin Yang
https://proceedings.mlr.press/v202/amani23a.html
ICML 2023
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents/learners act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the communication cost of any algorithm that performs optimally in a regret minimization setup. We then propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server. We prove that the communication cost of DisBE-LUCB, matches our lower bound up to logarithmic factors. In particular, for scenarios with known context distribution, the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is $\tilde{\mathcal{O}}(\sqrt{dNT})$, which is of the same order as that incurred by an optimal single-agent algorithm for $NT$ rounds. We also provide similar bounds for practical settings where the context distribution can only be estimated. Therefore, our proposed algorithm is nearly minimax optimal in terms of both regret and communication cost. Finally, we propose DecBE-LUCB, a fully decentralized version of DisBE-LUCB, which operates without a central server, where agents share information with their immediate neighbors through a carefully designed consensus procedure.
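The single-agent LinUCB primitive that such distributed algorithms build on is short enough to sketch; the batching, elimination, and server communication that make DisBE-LUCB communication-efficient are omitted here.

```python
import numpy as np

class LinUCB:
    def __init__(self, d, alpha=1.0, lam=1.0):
        self.A = lam * np.eye(d)     # regularized Gram matrix of observed contexts
        self.b = np.zeros(d)
        self.alpha = alpha

    def select(self, contexts):
        """contexts: (num_arms, d) feature matrix for the current round."""
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        widths = np.sqrt(np.einsum('ij,jk,ik->i', contexts, A_inv, contexts))
        return int(np.argmax(contexts @ theta + self.alpha * widths))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```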
https://proceedings.mlr.press/v202/amin23a.html
https://proceedings.mlr.press/v202/amin23a/amin23a.pdf
https://openreview.net/forum?id=8LdBTjylEw
A Kernelized Stein Discrepancy for Biological Sequences
https://proceedings.mlr.press/v202/amin23a.html
Alan Nawzad Amin, Eli N Weinstein, Debora Susan Marks
https://proceedings.mlr.press/v202/amin23a.html
ICML 2023
Generative models of biological sequences are a powerful tool for learning from complex sequence data, predicting the effects of mutations, and designing novel biomolecules with desired properties. To evaluate generative models it is important to accurately measure differences between high-dimensional distributions. In this paper we propose the “KSD-B”, a novel divergence measure for distributions over biological sequences that is based on the kernelized Stein discrepancy (KSD). The KSD-B can be evaluated even when the normalizing constant of the model is unknown; it allows for variable length sequences and can take into account biological notions of sequence distance. Unlike previous KSDs over discrete spaces the KSD-B (a) is theoretically guaranteed to detect convergence and non-convergence of distributions over sequence space and (b) can be efficiently estimated in practice. We demonstrate the advantages of the KSD-B on problems with synthetic and real data, and apply it to measure the fit of state-of-the-art machine learning models. Overall, the KSD-B enables rigorous evaluation of generative biological sequence models, allowing the accuracy of models, sampling procedures, and library designs to be checked reliably.
https://proceedings.mlr.press/v202/amortila23a.html
https://proceedings.mlr.press/v202/amortila23a/amortila23a.pdf
https://openreview.net/forum?id=OT6gRRMmcE
The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
https://proceedings.mlr.press/v202/amortila23a.html
Philip Amortila, Nan Jiang, Csaba Szepesvari
https://proceedings.mlr.press/v202/amortila23a.html
ICML 2023
Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet, the nature of such approximation factors—especially their optimal form in a given learning problem—is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor in a broad spectrum of settings, such as presence vs. absence of state aliasing and full vs. partial coverage of the state space. Our core results include instance-dependent upper bounds on the approximation factors with respect to both the weighted $L_2$-norm (where the weighting is the offline state distribution) and the $L_\infty$ norm. We show that these approximation factors are optimal (in an instance-dependent sense) for a number of these settings. In other cases, we show that the instance-dependent parameters which appear in the upper bounds are necessary, and that the finiteness of either alone cannot guarantee a finite approximation factor even in the limit of infinite data.
https://proceedings.mlr.press/v202/amos23a.html
https://proceedings.mlr.press/v202/amos23a/amos23a.pdf
https://openreview.net/forum?id=vinsvrSJmd
Meta Optimal Transport
https://proceedings.mlr.press/v202/amos23a.html
Brandon Amos, Giulia Luise, Samuel Cohen, Ievgen Redko
https://proceedings.mlr.press/v202/amos23a.html
ICML 2023
We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT. This helps repeatedly solve similar OT problems between different measures by leveraging the knowledge and information present from past problems to rapidly predict and solve new problems. Otherwise, standard methods ignore the knowledge of the past solutions and suboptimally re-solve each problem from scratch. We instantiate Meta OT models in discrete and continuous settings between grayscale images, spherical data, classification labels, and color palettes and use them to improve the computational time of standard OT solvers. Our source code is available at http://github.com/facebookresearch/meta-ot
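A compact log-domain Sinkhorn solver with a warm-startable dual potential; roughly, Meta OT amortizes this warm start by predicting `f_init` from the input measures with a neural network, so the code below is only the classical solver being accelerated, not the amortized model.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn(a, b, C, eps=0.05, iters=200, f_init=None):
    """Entropic OT between histograms a, b with cost matrix C; returns plan and duals."""
    f = np.zeros_like(a) if f_init is None else f_init.copy()   # warm start goes here
    g = np.zeros_like(b)
    for _ in range(iters):
        g = -eps * logsumexp((f[:, None] - C) / eps, b=a[:, None], axis=0)
        f = -eps * logsumexp((g[None, :] - C) / eps, b=b[None, :], axis=1)
    P = a[:, None] * b[None, :] * np.exp((f[:, None] + g[None, :] - C) / eps)
    return P, f, g
```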
https://proceedings.mlr.press/v202/anagnostides23a.html
https://proceedings.mlr.press/v202/anagnostides23a/anagnostides23a.pdf
https://openreview.net/forum?id=FK18BRc1vL
Near-Optimal $\Phi$-Regret Learning in Extensive-Form Games
https://proceedings.mlr.press/v202/anagnostides23a.html
Ioannis Anagnostides, Gabriele Farina, Tuomas Sandholm
https://proceedings.mlr.press/v202/anagnostides23a.html
ICML 2023
In this paper, we establish efficient and uncoupled learning dynamics so that, when employed by all players in multiplayer perfect-recall imperfect-information extensive-form games, the trigger regret of each player grows as $O(\log T)$ after $T$ repetitions of play. This improves exponentially over the prior best known trigger-regret bound of $O(T^{1/4})$, and settles a recent open question by Bai et al. (2022). As an immediate consequence, we guarantee convergence to the set of extensive-form correlated equilibria and coarse correlated equilibria at a near-optimal rate of $\frac{\log T}{T}$. Building on prior work, at the heart of our construction lies a more general result regarding fixed points deriving from rational functions with polynomial degree, a property that we establish for the fixed points of (coarse) trigger deviation functions. Moreover, our construction leverages a refined regret circuit for the convex hull, which—unlike prior guarantees—preserves the RVU property introduced by Syrgkanis et al. (NIPS, 2015); this observation has an independent interest in establishing near-optimal regret under learning dynamics based on a CFR-type decomposition of the regret.
https://proceedings.mlr.press/v202/andriushchenko23a.html
https://proceedings.mlr.press/v202/andriushchenko23a/andriushchenko23a.pdf
https://openreview.net/forum?id=VZp9X410D3
A Modern Look at the Relationship between Sharpness and Generalization
https://proceedings.mlr.press/v202/andriushchenko23a.html
Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, Nicolas Flammarion
https://proceedings.mlr.press/v202/andriushchenko23a.html
ICML 2023
Sharpness of minima is a promising quantity that can correlate with generalization in deep networks and, when optimized during training, can improve generalization. However, standard sharpness is not invariant under reparametrizations of neural networks, and, to fix this, reparametrization-invariant sharpness definitions have been proposed, most prominently adaptive sharpness (Kwon et al., 2021). But does it really capture generalization in modern practical settings? We comprehensively explore this question in a detailed study of various definitions of adaptive sharpness in settings ranging from training from scratch on ImageNet and CIFAR-10 to fine-tuning CLIP on ImageNet and BERT on MNLI. We focus mostly on transformers for which little is known in terms of sharpness despite their widespread usage. Overall, we observe that sharpness does not correlate well with generalization but rather with some training parameters like the learning rate that can be positively or negatively correlated with generalization depending on the setup. Interestingly, in multiple cases, we observe a consistent negative correlation of sharpness with OOD generalization implying that sharper minima can generalize better. Finally, we illustrate on a simple model that the right sharpness measure is highly data-dependent, and that we do not understand well this aspect for realistic data distributions.
https://proceedings.mlr.press/v202/andriushchenko23b.html
https://proceedings.mlr.press/v202/andriushchenko23b/andriushchenko23b.pdf
https://openreview.net/forum?id=DnTuz0ziwN
SGD with Large Step Sizes Learns Sparse Features
https://proceedings.mlr.press/v202/andriushchenko23b.html
Maksym Andriushchenko, Aditya Vardhan Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
https://proceedings.mlr.press/v202/andriushchenko23b.html
ICML 2023
We showcase important features of the dynamics of the Stochastic Gradient Descent (SGD) in the training of neural networks. We present empirical observations that commonly used large step sizes (i) may lead the iterates to jump from one side of a valley to the other causing loss stabilization, and (ii) this stabilization induces a hidden stochastic dynamics that biases it implicitly toward simple predictors. Furthermore, we show empirically that the longer large step sizes keep SGD high in the loss landscape valleys, the better the implicit regularization can operate and find sparse representations. Notably, no explicit regularization is used: the regularization effect comes solely from the SGD dynamics influenced by the large step sizes schedule. Therefore, these observations unveil how, through the step size schedules, both gradient and noise drive together the SGD dynamics through the loss landscape of neural networks. We justify these findings theoretically through the study of simple neural network models as well as qualitative arguments inspired from stochastic processes. This analysis allows us to shed new light on some common practices and observed phenomena when training deep networks.
https://proceedings.mlr.press/v202/ansari23a.html
https://proceedings.mlr.press/v202/ansari23a/ansari23a.pdf
https://openreview.net/forum?id=GTos8jbYUa
Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series
https://proceedings.mlr.press/v202/ansari23a.html
Abdul Fatir Ansari, Alvin Heng, Andre Lim, Harold Soh
https://proceedings.mlr.press/v202/ansari23a.html
ICML 2023
Learning accurate predictive models of real-world dynamic phenomena (e.g., climate, biological) remains a challenging task. One key issue is that the data generated by both natural and artificial processes often comprise time series that are irregularly sampled and/or contain missing observations. In this work, we propose the Neural Continuous-Discrete State Space Model (NCDSSM) for continuous-time modeling of time series through discrete-time observations. NCDSSM employs auxiliary variables to disentangle recognition from dynamics, thus requiring amortized inference only for the auxiliary variables. Leveraging techniques from continuous-discrete filtering theory, we demonstrate how to perform accurate Bayesian inference for the dynamic states. We propose three flexible parameterizations of the latent dynamics and an efficient training objective that marginalizes the dynamic states during inference. Empirical results on multiple benchmark datasets across various domains show improved imputation and forecasting performance of NCDSSM over existing models.
https://proceedings.mlr.press/v202/antoniadis23a.html
https://proceedings.mlr.press/v202/antoniadis23a/antoniadis23a.pdf
https://openreview.net/forum?id=NG8f2j1EKb
Paging with Succinct Predictions
https://proceedings.mlr.press/v202/antoniadis23a.html
Antonios Antoniadis, Joan Boyar, Marek Elias, Lene Monrad Favrholdt, Ruben Hoeksma, Kim S. Larsen, Adam Polak, Bertrand Simon
https://proceedings.mlr.press/v202/antoniadis23a.html
ICML 2023
Paging is a prototypical problem in the area of online algorithms. It has also played a central role in the development of learning-augmented algorithms. Previous work on learning-augmented paging has investigated predictions on (i) when the current page will be requested again (reoccurrence predictions), (ii) the current state of the cache in an optimal algorithm (state predictions), (iii) all requests until the current page gets requested again, and (iv) the relative order in which pages are requested. We study learning-augmented paging from the new perspective of requiring the least possible amount of predicted information. More specifically, the predictions obtained alongside each page request are limited to one bit only. We develop algorithms that satisfy all three desirable properties of learning-augmented algorithms – that is, they are consistent, robust and smooth – despite being limited to a one-bit prediction per request. We also present lower bounds establishing that our algorithms are essentially best possible.
https://proceedings.mlr.press/v202/antoniadis23b.html
https://proceedings.mlr.press/v202/antoniadis23b/antoniadis23b.pdf
https://openreview.net/forum?id=HqQIt6mt5B
Mixing Predictions for Online Metric Algorithms
https://proceedings.mlr.press/v202/antoniadis23b.html
Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, Bertrand Simon
https://proceedings.mlr.press/v202/antoniadis23b.html
ICML 2023
A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with slightly constrained number of switches between different predictors, we can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
https://proceedings.mlr.press/v202/aouali23a.html
https://proceedings.mlr.press/v202/aouali23a/aouali23a.pdf
https://openreview.net/forum?id=LJ9iKElXpl
Exponential Smoothing for Off-Policy Learning
https://proceedings.mlr.press/v202/aouali23a.html
Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba
https://proceedings.mlr.press/v202/aouali23a.html
ICML 2023
Off-policy learning (OPL) aims at finding improved policies from logged bandit data, often by minimizing the inverse propensity scoring (IPS) estimator of the risk. In this work, we investigate a smooth regularization for IPS, for which we derive a two-sided PAC-Bayes generalization bound. The bound is tractable, scalable, interpretable and provides learning certificates. In particular, it is also valid for standard IPS without making the assumption that the importance weights are bounded. We demonstrate the relevance of our approach and its favorable performance through a set of learning tasks. Since our bound holds for standard IPS, we are able to provide insight into when regularizing IPS is useful. Namely, we identify cases where regularization might not be needed. This goes against the belief that, in practice, clipped IPS often enjoys more favorable performance than standard IPS in OPL.
https://proceedings.mlr.press/v202/arbas23a.html
https://proceedings.mlr.press/v202/arbas23a/arbas23a.pdf
https://openreview.net/forum?id=b6Hxt4Jw10
Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models
https://proceedings.mlr.press/v202/arbas23a.html
Jamil Arbas, Hassan Ashtiani, Christopher Liaw
https://proceedings.mlr.press/v202/arbas23a.html
ICML 2023
We study the problem of privately estimating the parameters of $d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components. For this, we develop a technique to reduce the problem to its non-private counterpart. This allows us to privatize existing non-private algorithms in a blackbox manner, while incurring only a small overhead in the sample complexity and running time. As the main application of our framework, we develop an $(\varepsilon, \delta)$-differentially private algorithm to learn GMMs using the non-private algorithm of Moitra and Valiant (2010) as a blackbox. Consequently, this gives the first sample complexity upper bound and first polynomial time algorithm for privately learning GMMs without any boundedness assumptions on the parameters. As part of our analysis, we prove a tight (up to a constant factor) lower bound on the total variation distance of high-dimensional Gaussians which can be of independent interest.
https://proceedings.mlr.press/v202/arisaka23a.html
https://proceedings.mlr.press/v202/arisaka23a/arisaka23a.pdf
https://openreview.net/forum?id=2MbU8qSWL1
Principled Acceleration of Iterative Numerical Methods Using Machine Learning
https://proceedings.mlr.press/v202/arisaka23a.html
Sohei Arisaka, Qianxiao Li
https://proceedings.mlr.press/v202/arisaka23a.html
ICML 2023
Iterative methods are ubiquitous in large-scale scientific computing applications, and a number of approaches based on meta-learning have been recently proposed to accelerate them. However, a systematic study of these approaches and how they differ from meta-learning is lacking. In this paper, we propose a framework to analyze such learning-based acceleration approaches, where one can immediately identify a departure from classical meta-learning. We theoretically show that this departure may lead to arbitrary deterioration of model performance, and at the same time, we identify a methodology to ameliorate it by modifying the loss objective, leading to a novel training method for learning-based acceleration of iterative algorithms. We demonstrate the significant advantage and versatility of the proposed approach through various numerical applications.
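The abstract is high-level, but the general setup it studies (a learned component feeding an iterative numerical solver) can be sketched with a classical method. The snippet below warm-starts Jacobi iteration on a diagonally dominant linear system and compares iteration counts; the "learned" initial guess is idealized here as a small perturbation of the true solution, purely to illustrate the interface, and none of this reproduces the paper's training objective.

    import numpy as np

    rng = np.random.default_rng(1)

    # A small, strictly diagonally dominant system A x = b (Jacobi converges for such A).
    d = 30
    A = rng.standard_normal((d, d)) + d * np.eye(d)
    b = rng.standard_normal(d)

    def jacobi(A, b, x0, tol=1e-8, max_iter=10000):
        D = np.diag(A)                    # diagonal entries
        R = A - np.diag(D)                # off-diagonal part
        x = x0.copy()
        for it in range(max_iter):
            x_new = (b - R @ x) / D
            if np.linalg.norm(x_new - x) < tol:
                return x_new, it + 1
            x = x_new
        return x, max_iter

    x_star = np.linalg.solve(A, b)
    _, iters_cold = jacobi(A, b, np.zeros(d))
    # Idealized "learned" initializer: a guess already close to the solution.
    _, iters_warm = jacobi(A, b, x_star + 0.01 * rng.standard_normal(d))
    print("iterations from zero init :", iters_cold)
    print("iterations from warm init :", iters_warm)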
https://proceedings.mlr.press/v202/arora23a.html
https://proceedings.mlr.press/v202/arora23a/arora23a.pdf
https://openreview.net/forum?id=kOUBFwYd2D
Faster Rates of Convergence to Stationary Points in Differentially Private Optimization
https://proceedings.mlr.press/v202/arora23a.html
Raman Arora, Raef Bassily, Tomás González, Cristóbal A Guzmán, Michael Menart, Enayat Ullah
https://proceedings.mlr.press/v202/arora23a.html
ICML 2023
We study the problem of approximating stationary points of Lipschitz and smooth functions under $(\varepsilon,\delta)$-differential privacy (DP) in both the finite-sum and stochastic settings. A point $\widehat{w}$ is called an $\alpha$-stationary point of a function $F:\mathbb{R}^d\rightarrow\mathbb{R}$ if $\|\nabla F(\widehat{w})\|\leq \alpha$. We give a new construction that improves over the existing rates in the stochastic optimization setting, where the goal is to find approximate stationary points of the population risk given $n$ samples. Our construction finds a $\tilde{O}\big(\frac{1}{n^{1/3}} + \big[\frac{\sqrt{d}}{n\varepsilon}\big]^{1/2}\big)$-stationary point of the population risk in time linear in $n$. We also provide an efficient algorithm that finds an $\tilde{O}\big(\big[\frac{\sqrt{d}}{n\varepsilon}\big]^{2/3}\big)$-stationary point in the finite-sum setting. This improves on the previous best rate of $\tilde{O}\big(\big[\frac{\sqrt{d}}{n\varepsilon}\big]^{1/2}\big)$. Furthermore, under the additional assumption of convexity, we completely characterize the sample complexity of finding stationary points of the population risk (up to polylog factors) and show that the optimal rate on population stationarity is $\tilde \Theta\big(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\varepsilon}\big)$. Finally, we show that our methods can be used to provide dimension-independent rates of $O\big(\frac{1}{\sqrt{n}}+\min\big(\big[\frac{\sqrt{rank}}{n\varepsilon}\big]^{2/3},\frac{1}{(n\varepsilon)^{2/5}}\big)\big)$ on population stationarity for Generalized Linear Models (GLM), where $rank$ is the rank of the design matrix, which improves upon the previous best known rate.
https://proceedings.mlr.press/v202/asadi23a.html
https://proceedings.mlr.press/v202/asadi23a/asadi23a.pdf
https://openreview.net/forum?id=ywwdhhqNj7
Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning
https://proceedings.mlr.press/v202/asadi23a.html
Nader Asadi, Mohammadreza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
https://proceedings.mlr.press/v202/asadi23a.html
ICML 2023
In continual learning (CL), balancing effective adaptation against catastrophic forgetting is a central challenge. Many of the recent best-performing methods utilize various forms of prior task data, e.g. a replay buffer, to tackle the catastrophic forgetting problem. Having access to previous task data can be restrictive in many real-world scenarios, for example when task data is sensitive or proprietary. To overcome the necessity of using previous tasks’ data, in this work, we start with strong representation learning methods that have been shown to be less prone to forgetting. We propose a holistic approach to jointly learn the representation and class prototypes while maintaining the relevance of old class prototypes and their embedded similarities. Specifically, samples are mapped to an embedding space where the representations are learned using a supervised contrastive loss. Class prototypes are evolved continually in the same latent space, enabling learning and prediction at any point. To continually adapt the prototypes without keeping any prior task data, we propose a novel distillation loss that constrains class prototypes to maintain relative similarities as compared to new task data. This method yields state-of-the-art performance in the task-incremental setting, outperforming methods relying on large amounts of data, and provides strong performance in the class-incremental setting without using any stored data points.
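One plausible reading of the relation-distillation idea, sketched here as a loss value in numpy: compare the softmax similarity distribution between class prototypes and current-task embeddings before and after adaptation, and penalize their KL divergence. The cosine similarities, temperature, and KL direction are assumptions made for this sketch, not the authors' exact formulation.

    import numpy as np

    def relation_distillation(protos_old, protos_new, z, tau=0.1):
        """KL between prototype-sample similarity distributions.

        protos_old, protos_new: (C, d) class prototypes before/after adaptation.
        z: (B, d) embeddings of current-task samples.
        The penalty is small when each sample's relative similarities to the
        prototypes are preserved.
        """
        def sim_softmax(protos, z):
            # cosine similarities, then a temperature-scaled softmax over classes
            p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
            q = z / np.linalg.norm(z, axis=1, keepdims=True)
            s = q @ p.T / tau
            s = s - s.max(axis=1, keepdims=True)     # numerical stability
            e = np.exp(s)
            return e / e.sum(axis=1, keepdims=True)

        P = sim_softmax(protos_old, z)               # "teacher" relations
        Q = sim_softmax(protos_new, z)               # "student" relations
        kl = np.sum(P * (np.log(P + 1e-12) - np.log(Q + 1e-12)), axis=1)
        return float(np.mean(kl))

    # Tiny usage example with random tensors.
    rng = np.random.default_rng(0)
    protos = rng.standard_normal((5, 16))
    z = rng.standard_normal((8, 16))
    print(relation_distillation(protos, protos + 0.01 * rng.standard_normal((5, 16)), z))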
https://proceedings.mlr.press/v202/asi23a.html
https://proceedings.mlr.press/v202/asi23a/asi23a.pdf
https://openreview.net/forum?id=SjwWVAyYKh
Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime
https://proceedings.mlr.press/v202/asi23a.html
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
https://proceedings.mlr.press/v202/asi23a.html
ICML 2023
We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O \big( \varepsilon^{-1} \mathsf{poly}(\log{d}) \big)$ where $d$ is the number of experts. This significantly improves over the best existing regret bounds for the DP non-realizable setting which are $O \big( \varepsilon^{-1} \min\big\{d, \sqrt{T\log d}\big\} \big)$. We also develop an adaptive algorithm for the small-loss setting with regret $(L^\star+ \varepsilon^{-1}) \cdot O(\mathsf{poly}(\log{d}))$ where $L^\star$ is the total loss of the best expert. Additionally, we consider DP online convex optimization in the realizable setting and propose an algorithm with near-optimal regret $O \big(\varepsilon^{-1} \mathsf{poly}(d) \big)$, as well as an algorithm for the smooth case with regret $O \big( (\sqrt{Td}/\varepsilon)^{2/3} \big)$, both significantly improving over existing bounds in the non-realizable regime.
https://proceedings.mlr.press/v202/asi23b.html
https://proceedings.mlr.press/v202/asi23b/asi23b.pdf
https://openreview.net/forum?id=9viDfxnY3q
From Robustness to Privacy and Back
https://proceedings.mlr.press/v202/asi23b.html
Hilal Asi, Jonathan Ullman, Lydia Zakynthinou
https://proceedings.mlr.press/v202/asi23b.html
ICML 2023
We study the relationship between two desiderata of algorithms in statistical inference and machine learning: differential privacy and robustness to adversarial data corruptions. Their conceptual similarity was first observed by Dwork and Lei (STOC 2009), who noted that private algorithms satisfy robustness and gave a general method for converting robust algorithms to private ones. However, all general methods for transforming robust algorithms into private ones lead to suboptimal error rates. Our work gives the first black-box transformation that converts any adversarially robust algorithm into one that satisfies pure differential privacy. Moreover, we show that for any low-dimensional estimation task, applying our transformation to an optimal robust estimator results in an optimal private estimator. Thus, we conclude that for any low-dimensional task, the optimal error rate for $\varepsilon$-differentially private estimators is essentially the same as the optimal error rate for estimators that are robust to adversarially corrupting $1/\varepsilon$ training samples. We apply our transformation to obtain new optimal private estimators for several high-dimensional statistical tasks, including Gaussian linear regression and PCA. Finally, we present an extension of our transformation that leads to approximately differentially private algorithms whose error does not depend on the range of the output space, which is impossible under pure differential privacy.
https://proceedings.mlr.press/v202/attia23a.html
https://proceedings.mlr.press/v202/attia23a/attia23a.pdf
https://openreview.net/forum?id=X7jMTrwuCz
SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance
https://proceedings.mlr.press/v202/attia23a.html
Amit Attia, Tomer Koren
https://proceedings.mlr.press/v202/attia23a.html
ICML 2023
We study Stochastic Gradient Descent with AdaGrad stepsizes: a popular adaptive (self-tuning) method for first-order stochastic optimization. Although this method is well studied, existing analyses of it suffer from various shortcomings: they either assume some knowledge of the problem parameters, impose strong global Lipschitz conditions, or fail to give bounds that hold with high probability. We provide a comprehensive analysis of this basic method without any of these limitations, in both the convex and non-convex (smooth) cases, that additionally supports a general “affine variance” noise model and provides sharp rates of convergence in both the low-noise and high-noise regimes.
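For concreteness, here is a minimal sketch of SGD with a scalar AdaGrad-Norm stepsize (one common variant covered by this kind of analysis) on a least-squares toy problem; the constants eta and b0 are illustrative and not tuned to any problem parameter.

    import numpy as np

    rng = np.random.default_rng(0)

    # Least-squares toy problem; stochastic gradients come from single samples.
    n, d = 500, 20
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    y = X @ w_star + 0.1 * rng.standard_normal(n)

    def adagrad_norm_sgd(steps=5000, eta=1.0, b0=1.0):
        w = np.zeros(d)
        accum = b0 ** 2                        # running sum of squared gradient norms
        for _ in range(steps):
            i = rng.integers(n)
            g = (X[i] @ w - y[i]) * X[i]       # per-sample gradient
            accum += float(g @ g)
            w = w - (eta / np.sqrt(accum)) * g  # stepsize adapts without knowing L or sigma
        return w

    w = adagrad_norm_sgd()
    print("distance to w*:", np.linalg.norm(w - w_star))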
https://proceedings.mlr.press/v202/attias23a.html
https://proceedings.mlr.press/v202/attias23a/attias23a.pdf
https://openreview.net/forum?id=fcDq3BIbe9
Adversarially Robust PAC Learnability of Real-Valued Functions
https://proceedings.mlr.press/v202/attias23a.html
Idan Attias, Steve Hanneke
https://proceedings.mlr.press/v202/attias23a.html
ICML 2023
We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$ losses and arbitrary perturbation sets. We address the question of which function classes are PAC learnable in this setting. We show that classes of finite fat-shattering dimension are learnable in both the realizable and agnostic settings. Moreover, for convex function classes, they are even properly learnable. In contrast, some non-convex function classes provably require improper learning algorithms. Our main technique is based on a construction of an adversarially robust sample compression scheme of a size determined by the fat-shattering dimension. Along the way, we introduce a novel agnostic sample compression scheme for real-valued functions, which may be of independent interest.
https://proceedings.mlr.press/v202/atzeni23a.html
https://proceedings.mlr.press/v202/atzeni23a/atzeni23a.pdf
https://openreview.net/forum?id=tE3BMOyUl5
Infusing Lattice Symmetry Priors in Attention Mechanisms for Sample-Efficient Abstract Geometric Reasoning
https://proceedings.mlr.press/v202/atzeni23a.html
Mattia Atzeni, Mrinmaya Sachan, Andreas Loukas
https://proceedings.mlr.press/v202/atzeni23a.html
ICML 2023
The Abstraction and Reasoning Corpus (ARC) (Chollet, 2019) and its most recent language-complete instantiation (LARC) have been postulated as an important step towards general AI. Yet, even state-of-the-art machine learning models struggle to achieve meaningful performance on these problems, falling behind non-learning based approaches. We argue that solving these tasks requires extreme generalization that can only be achieved by properly accounting for core knowledge priors. As a step towards this goal, we focus on geometry priors and introduce LatFormer, a model that incorporates lattice symmetry priors in attention masks. We show that, for any transformation of the hypercubic lattice, there exists a binary attention mask that implements that group action. Hence, our study motivates a modification to the standard attention mechanism, where attention weights are scaled using soft masks generated by a convolutional network. Experiments on synthetic geometric reasoning show that LatFormer requires two orders of magnitude less data than standard attention and transformers. Moreover, our results on ARC and LARC tasks that incorporate geometric priors provide preliminary evidence that these complex datasets do not lie out of the reach of deep learning models.
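The statement that any hypercubic-lattice transformation admits a binary attention mask can be illustrated for a cyclic 2D translation: the mask is simply the permutation matrix of the group action on flattened grid positions. A small sketch (not the LatFormer code) follows.

    import numpy as np

    def translation_mask(h, w, dy, dx):
        """Binary attention mask implementing a cyclic translation of an h x w grid.

        mask[i, j] = 1 iff grid cell j is moved onto cell i by the translation,
        so (mask @ x_flat) applies the group action to a flattened feature map.
        """
        n = h * w
        mask = np.zeros((n, n), dtype=np.int8)
        for r in range(h):
            for c in range(w):
                src = r * w + c
                dst = ((r + dy) % h) * w + (c + dx) % w
                mask[dst, src] = 1
        return mask

    grid = np.arange(9).reshape(3, 3)
    M = translation_mask(3, 3, dy=1, dx=0)
    print((M @ grid.reshape(-1)).reshape(3, 3))   # rows shifted down by one (cyclically)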
https://proceedings.mlr.press/v202/atzmon23a.html
https://proceedings.mlr.press/v202/atzmon23a/atzmon23a.pdf
https://openreview.net/forum?id=BJc95DyFNG
Learning to Initiate and Reason in Event-Driven Cascading Processes
https://proceedings.mlr.press/v202/atzmon23a.html
Yuval Atzmon, Eli Meirom, Shie Mannor, Gal Chechik
https://proceedings.mlr.press/v202/atzmon23a.html
ICML 2023
Training agents to control a dynamic environment is a fundamental task in AI. In many environments, the dynamics can be summarized by a small set of events that capture the semantic behavior of the system. Typically, these events form chains or cascades. We often wish to change the system behavior using a single intervention that propagates through the cascade. For instance, one may trigger a biochemical cascade to switch the state of a cell or, in logistics, reroute a truck to meet an unexpected, urgent delivery. We introduce a new supervised learning setup called Cascade. An agent observes a system with known dynamics evolving from some initial state. The agent is given a structured semantic instruction and needs to make an intervention that triggers a cascade of events, such that the system reaches an alternative (counterfactual) behavior. We provide a test-bed for this problem, consisting of physical objects. We combine semantic tree search with an event-driven forward model and devise an algorithm that learns to efficiently search in exponentially large semantic trees. We demonstrate that our approach learns to follow instructions to intervene in new complex scenes. When provided with an observed cascade of events, it can also reason about alternative outcomes.
https://proceedings.mlr.press/v202/aubert23a.html
https://proceedings.mlr.press/v202/aubert23a/aubert23a.pdf
https://openreview.net/forum?id=YvrxWGWg9E
On the convergence of the MLE as an estimator of the learning rate in the Exp3 algorithm
https://proceedings.mlr.press/v202/aubert23a.html
Julien Aubert, Luc Lehéricy, Patricia Reynaud-Bouret
https://proceedings.mlr.press/v202/aubert23a.html
ICML 2023
When fitting the learning data of an individual to algorithm-like learning models, the observations are so dependent and non-stationary that one may wonder what the classical Maximum Likelihood Estimator (MLE) can do, even though it is the usual tool applied in experimental cognition. Our first objective in this work is to show that the estimation of the learning rate cannot be efficient if the learning rate is constant in the classical Exp3 (Exponential weights for Exploration and Exploitation) algorithm. Second, we show that if the learning rate decreases polynomially with the sample size, then the prediction error, and in some cases the estimation error, of the MLE satisfy bounds in probability that decrease at a polynomial rate.
https://proceedings.mlr.press/v202/avdeyev23a.html
https://proceedings.mlr.press/v202/avdeyev23a/avdeyev23a.pdf
https://openreview.net/forum?id=O3jUIakvK7
Dirichlet Diffusion Score Model for Biological Sequence Generation
https://proceedings.mlr.press/v202/avdeyev23a.html
Pavel Avdeyev, Chenlai Shi, Yuhao Tan, Kseniia Dudnyk, Jian Zhou
https://proceedings.mlr.press/v202/avdeyev23a.html
ICML 2023
Designing biological sequences is an important challenge that requires satisfying complex constraints and thus is a natural problem to address with deep generative modeling. Diffusion generative models have achieved considerable success in many applications. The score-based generative stochastic differential equation (SDE) model is a continuous-time diffusion framework that enjoys many benefits, but the originally proposed SDEs are not naturally designed for modeling discrete data. To develop generative SDE models for discrete data such as biological sequences, here we introduce a diffusion process defined in the probability simplex space whose stationary distribution is the Dirichlet distribution. This makes diffusion in continuous space natural for modeling discrete data. We refer to this approach as the Dirichlet diffusion score model. We demonstrate that this technique can generate samples that satisfy hard constraints using a Sudoku generation task. This generative model can also solve Sudoku, including hard puzzles, without additional training. Finally, we applied this approach to develop the first human promoter DNA sequence design model and showed that designed sequences share similar properties with natural promoter sequences.
https://proceedings.mlr.press/v202/axiotis23a.html
https://proceedings.mlr.press/v202/axiotis23a/axiotis23a.pdf
https://openreview.net/forum?id=a4bMHPm0Ji
Gradient Descent Converges Linearly for Logistic Regression on Separable Data
https://proceedings.mlr.press/v202/axiotis23a.html
Kyriakos Axiotis, Maxim Sviridenko
https://proceedings.mlr.press/v202/axiotis23a.html
ICML 2023
We show that running gradient descent with a variable learning rate guarantees loss $f(x) \leq 1.1 \cdot f(x^*)+\epsilon$ for the logistic regression objective, where the error $\epsilon$ decays exponentially with the number of iterations and polynomially with the magnitude of the entries of an arbitrary fixed solution $x^*$. This is in contrast to the common intuition that the absence of strong convexity precludes linear convergence of first-order methods, and it highlights the importance of variable learning rates for gradient descent. We also apply our ideas to sparse logistic regression, where they lead to an exponential improvement of the sparsity-error tradeoff.
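A toy illustration of variable learning rates on separable data follows. It uses one simple loss-adaptive choice (step size inversely proportional to the current loss, exploiting the fact that the logistic loss becomes flatter as it decreases on separable data); the schedule analyzed in the paper may differ, so treat this purely as a sketch of the phenomenon.

    import numpy as np

    rng = np.random.default_rng(0)

    # Linearly separable toy data: labels are the sign of a ground-truth direction.
    n, d = 200, 5
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    y = np.sign(X @ w_star)

    def stable_sigmoid_neg(m):
        """Numerically stable sigmoid(-m)."""
        out = np.empty_like(m)
        pos = m >= 0
        out[pos] = np.exp(-m[pos]) / (1.0 + np.exp(-m[pos]))
        out[~pos] = 1.0 / (1.0 + np.exp(m[~pos]))
        return out

    R2 = float(np.max(np.sum(X ** 2, axis=1)))       # max squared sample norm
    w = np.zeros(d)
    for t in range(301):
        m = y * (X @ w)                               # margins
        loss = float(np.mean(np.logaddexp(0.0, -m)))  # mean log(1 + exp(-margin))
        if t % 50 == 0:
            print(f"iter {t:3d}   loss {loss:.3e}")
        if loss < 1e-10:
            break
        g = -(y * stable_sigmoid_neg(m)) @ X / n      # gradient of the logistic loss
        lr = 0.5 / (R2 * max(loss, 1e-10))            # step size grows as the loss shrinks
        w = w - lr * g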
https://proceedings.mlr.press/v202/ayme23a.html
https://proceedings.mlr.press/v202/ayme23a/ayme23a.pdf
https://openreview.net/forum?id=gfSLvfVf0w
Naive imputation implicitly regularizes high-dimensional linear models
https://proceedings.mlr.press/v202/ayme23a.html
Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet
https://proceedings.mlr.press/v202/ayme23a.html
ICML 2023
Two different approaches exist to handle missing values for prediction: either imputation, prior to fitting any predictive algorithms, or dedicated methods able to natively incorporate missing values. While imputation is widely (and easily) used, it is unfortunately biased when low-capacity predictors (such as linear models) are applied afterward. However, in practice, naive imputation exhibits good predictive performance. In this paper, we study the impact of imputation in a high-dimensional linear model with MCAR missing data. We prove that zero imputation performs an implicit regularization closely related to the ridge method, often used in high-dimensional problems. Leveraging this connection, we establish that the imputation bias is controlled by a ridge bias, which vanishes in high dimension. As a predictor, we argue in favor of the averaged SGD strategy, applied to zero-imputed data. We establish an upper bound on its generalization error, highlighting that imputation is benign in the $d \gg \sqrt{n}$ regime. Experiments illustrate our findings.
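The ridge connection can be checked numerically: under MCAR masking with observation probability rho, the Gram matrix of zero-imputed data concentrates around a version of the full Gram matrix with shrunk off-diagonals and an inflated diagonal, which is the shape of a ridge-type penalty. The snippet below verifies this expectation identity on synthetic data; it is a sanity check of the intuition, not a reproduction of the paper's bounds.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, rho = 5000, 10, 0.7            # rho = probability an entry is observed

    X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))   # correlated features
    mask = rng.random((n, d)) < rho
    Z = np.where(mask, X, 0.0)           # naive zero imputation of MCAR missing entries

    G_full = X.T @ X
    G_imp = Z.T @ Z
    # Expectation of the imputed Gram matrix: shrunk off-diagonals + inflated diagonal,
    # i.e. exactly the shape produced by a ridge-style regularizer.
    G_expected = rho**2 * G_full + rho * (1 - rho) * np.diag(np.diag(G_full))

    print("relative deviation:",
          np.linalg.norm(G_imp - G_expected) / np.linalg.norm(G_expected))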
https://proceedings.mlr.press/v202/azabou23a.html
https://proceedings.mlr.press/v202/azabou23a/azabou23a.pdf
https://openreview.net/forum?id=lXczFIwQkv
Half-Hop: A graph upsampling approach for slowing down message passing
https://proceedings.mlr.press/v202/azabou23a.html
Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, Eva L Dyer
https://proceedings.mlr.press/v202/azabou23a.html
ICML 2023
Message passing neural networks have shown a lot of success on graph-structured data. However, there are many instances where message passing can lead to over-smoothing or fail when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message passing neural networks. Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge that can mediate communication between a source and a target node. Our method only modifies the input graph, making it plug-and-play and easy to use with existing models. To understand the benefits of slowing down message passing, we provide theoretical and empirical analyses. We report results on several supervised and self-supervised benchmarks, and show improvements across the board, notably in heterophilic conditions where adjacent nodes are more likely to have different labels. Finally, we show how our approach can be used to generate augmentations for self-supervised learning, where slow nodes are randomly introduced into different edges in the graph to generate multi-scale views with variable path lengths.
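The core graph transformation is easy to sketch: for a fraction of directed edges, replace the edge with a short path through a new "slow node" whose features interpolate the endpoints. The edge-wiring and feature-initialization details below follow the abstract only loosely and should be read as an assumption-laden sketch.

    import numpy as np

    def half_hop(edge_index, x, p=1.0, alpha=0.5, seed=0):
        """Insert a 'slow node' on a fraction p of directed edges.

        edge_index: (2, E) array of [source, target] node indices.
        x:          (N, d) node feature matrix.
        Returns a new (2, E') edge_index and an (N', d) feature matrix.
        """
        rng = np.random.default_rng(seed)
        src, dst = edge_index
        n = x.shape[0]
        new_edges, new_feats = [], []
        next_id = n
        for s, t in zip(src, dst):
            if rng.random() < p:
                # slow-node feature: interpolation of the two endpoints (a modeling choice)
                new_feats.append(alpha * x[s] + (1 - alpha) * x[t])
                new_edges += [(s, next_id), (next_id, t), (t, next_id)]
                next_id += 1
            else:
                new_edges.append((s, t))
        x_new = np.vstack([x] + new_feats) if new_feats else x
        return np.array(new_edges).T, x_new

    # Usage: a 3-node directed path with 4-dimensional features.
    edges = np.array([[0, 1], [1, 2]])          # edges 0 -> 1 and 1 -> 2
    feats = np.random.default_rng(1).standard_normal((3, 4))
    new_edges, new_feats = half_hop(edges, feats)
    print(new_edges.shape, new_feats.shape)      # extra nodes and rewired edges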
https://proceedings.mlr.press/v202/azad23a.html
https://proceedings.mlr.press/v202/azad23a/azad23a.pdf
https://openreview.net/forum?id=wagsJnR5GO
CLUTR: Curriculum Learning via Unsupervised Task Representation Learning
https://proceedings.mlr.press/v202/azad23a.html
Abdus Salam Azad, Izzeddin Gur, Jasper Emhoff, Nathaniel Alexis, Aleksandra Faust, Pieter Abbeel, Ion Stoica
https://proceedings.mlr.press/v202/azad23a.html
ICML 2023
Reinforcement Learning (RL) algorithms are often known for sample inefficiency and poor generalization. Recently, Unsupervised Environment Design (UED) emerged as a new paradigm for zero-shot generalization by simultaneously learning a task distribution and agent policies on the generated tasks. This is a non-stationary process where the task distribution evolves along with the agent policies, creating instability over time. While past works demonstrated the potential of such approaches, sampling effectively from the task space remains an open challenge, bottlenecking these approaches. To this end, we introduce CLUTR: a novel unsupervised curriculum learning algorithm that decouples task representation and curriculum learning into a two-stage optimization. It first trains a recurrent variational autoencoder on randomly generated tasks to learn a latent task manifold. Next, a teacher agent creates a curriculum by maximizing a minimax REGRET-based objective on a set of latent tasks sampled from this manifold. Using the fixed-pretrained task manifold, we show that CLUTR successfully overcomes the non-stationarity problem and improves stability. Our experimental results show CLUTR outperforms PAIRED, a principled and popular UED method, in the challenging CarRacing and navigation environments: achieving 10.6X and 45% improvement in zero-shot generalization, respectively. CLUTR also performs comparably to the non-UED state-of-the-art for CarRacing, while requiring 500X fewer environment interactions. We open source our code at https://github.com/clutr/clutr.
https://proceedings.mlr.press/v202/baek23a.html
https://proceedings.mlr.press/v202/baek23a/baek23a.pdf
https://openreview.net/forum?id=GXHL8ZS1GX
Personalized Subgraph Federated Learning
https://proceedings.mlr.press/v202/baek23a.html
Jinheon Baek, Wonyong Jeong, Jiongdao Jin, Jaehong Yoon, Sung Ju Hwang
https://proceedings.mlr.press/v202/baek23a.html
ICML 2023
Subgraphs of a larger global graph may be distributed across multiple devices, and only locally accessible due to privacy restrictions, although there may be links between subgraphs. Recently proposed subgraph Federated Learning (FL) methods deal with those missing links across local subgraphs while distributively training Graph Neural Networks (GNNs) on them. However, they have overlooked the inevitable heterogeneity between subgraphs comprising different communities of a global graph, consequently collapsing the incompatible knowledge from local GNN models. To this end, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs rather than learning a single global model, and propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it. Since the server cannot access the subgraph in each client, FED-PUB computes functional embeddings of the local GNNs using random graphs as inputs, measures similarities between clients with these embeddings, and uses the similarities to perform weighted averaging for server-side aggregation. Further, it learns a personalized sparse mask at each client to select and update only the subgraph-relevant subset of the aggregated parameters. We validate our FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs, on which it significantly outperforms relevant baselines. Our code is available at https://github.com/JinheonBaek/FED-PUB.
https://proceedings.mlr.press/v202/baevski23a.html
https://proceedings.mlr.press/v202/baevski23a/baevski23a.pdf
https://openreview.net/forum?id=Jc5QwxfyyQ
Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language
https://proceedings.mlr.press/v202/baevski23a.html
Alexei Baevski, Arun Babu, Wei-Ning Hsu, Michael Auli
https://proceedings.mlr.press/v202/baevski23a.html
ICML 2023
Current self-supervised learning algorithms are often modality-specific and require large amounts of computational resources. To address these issues, we increase the training efficiency of data2vec, a learning objective that generalizes across several modalities. We do not encode masked tokens, use a fast convolutional decoder and amortize the effort to build teacher representations. data2vec 2.0 benefits from the rich contextualized target representations introduced in data2vec which enable a fast self-supervised learner. Experiments on ImageNet-1K image classification show that data2vec 2.0 matches the accuracy of Masked Autoencoders in 16.4x lower pre-training time, on Librispeech speech recognition it performs as well as wav2vec 2.0 in 10.6x less time, and on GLUE natural language understanding it matches a retrained RoBERTa model in half the time. Trading some speed for accuracy results in ImageNet-1K top-1 accuracy of 86.8% with a ViT-L model trained for 150 epochs.
https://proceedings.mlr.press/v202/baey23a.html
https://proceedings.mlr.press/v202/baey23a/baey23a.pdf
https://openreview.net/forum?id=ikbUw7okHD
Efficient preconditioned stochastic gradient descent for estimation in latent variable models
https://proceedings.mlr.press/v202/baey23a.html
Charlotte Baey, Maud Delattre, Estelle Kuhn, Jean-Benoist Leger, Sarah Lemler
https://proceedings.mlr.press/v202/baey23a.html
ICML 2023
Latent variable models are powerful tools for modeling complex phenomena involving in particular partially observed data, unobserved variables or underlying complex unknown structures. Inference is often difficult due to the latent structure of the model. To deal with parameter estimation in the presence of latent variables, well-known efficient methods exist, such as gradient-based and EM-type algorithms, but with practical and theoretical limitations. In this paper, we propose as an alternative for parameter estimation an efficient preconditioned stochastic gradient algorithm. Our method includes a preconditioning step based on a positive definite Fisher information matrix estimate. We prove convergence results for the proposed algorithm under mild assumptions for very general latent variables models. We illustrate through relevant simulations the performance of the proposed methodology in a nonlinear mixed effects model and in a stochastic block model.
https://proceedings.mlr.press/v202/bai23a.html
https://proceedings.mlr.press/v202/bai23a/bai23a.pdf
https://openreview.net/forum?id=3FydczZwkJ
Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection
https://proceedings.mlr.press/v202/bai23a.html
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert D Nowak, Yixuan Li
https://proceedings.mlr.press/v202/bai23a.html
ICML 2023
Modern machine learning models deployed in the wild can encounter both covariate and semantic shifts, giving rise to the problems of out-of-distribution (OOD) generalization and OOD detection respectively. While both problems have received significant research attention lately, they have been pursued independently. This may not be surprising, since the two tasks have seemingly conflicting goals. This paper provides a new unified approach that is capable of simultaneously generalizing to covariate shifts while robustly detecting semantic shifts. We propose a margin-based learning framework that exploits freely available unlabeled data in the wild that captures the environmental test-time OOD distributions under both covariate and semantic shifts. We show both empirically and theoretically that the proposed margin constraint is the key to achieving both OOD generalization and detection. Extensive experiments show the superiority of our framework, outperforming competitive baselines that specialize in either OOD generalization or OOD detection. Code is publicly available at https://github.com/deeplearning-wisc/scone.
https://proceedings.mlr.press/v202/bai23b.html
https://proceedings.mlr.press/v202/bai23b/bai23b.pdf
https://openreview.net/forum?id=KTJ6E8t9Cy
Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization
https://proceedings.mlr.press/v202/bai23b.html
Yushi Bai, Xin Lv, Juanzi Li, Lei Hou
https://proceedings.mlr.press/v202/bai23b.html
ICML 2023
Answering complex logical queries on incomplete knowledge graphs is a challenging task, and has been widely studied. Embedding-based methods require training on complex queries and may not generalize well to out-of-distribution query structures. Recent work frames this task as an end-to-end optimization problem, and it only requires a pretrained link predictor. However, due to the exponentially large combinatorial search space, the optimal solution can only be approximated, limiting the final accuracy. In this work, we propose QTO (Query Computation Tree Optimization) that can efficiently find the exact optimal solution. QTO finds the optimal solution by a forward-backward propagation on the tree-like computation graph, i.e., query computation tree. In particular, QTO utilizes the independence encoded in the query computation tree to reduce the search space, where only local computations are involved during the optimization procedure. Experiments on 3 datasets show that QTO obtains state-of-the-art performance on complex query answering, outperforming previous best results by an average of 22%. Moreover, QTO can interpret the intermediate solutions for each of the one-hop atoms in the query with over 90% accuracy.
https://proceedings.mlr.press/v202/bai23c.html
https://proceedings.mlr.press/v202/bai23c/bai23c.pdf
https://openreview.net/forum?id=ftLm9QAqwc
Linear optimal partial transport embedding
https://proceedings.mlr.press/v202/bai23c.html
Yikun Bai, Ivan Vladimir Medri, Rocio Diaz Martin, Rana Shahroz, Soheil Kolouri
https://proceedings.mlr.press/v202/bai23c.html
ICML 2023
Optimal transport (OT) has gained popularity due to its various applications in fields such as machine learning, statistics, and signal processing. However, the balanced mass requirement limits its performance in practical problems. To address these limitations, variants of the OT problem, including unbalanced OT, Optimal partial transport (OPT), and Hellinger Kantorovich (HK), have been proposed. In this paper, we propose the Linear optimal partial transport (LOPT) embedding, which extends the (local) linearization technique on OT and HK to the OPT problem. The proposed embedding allows for faster computation of OPT distance between pairs of positive measures. Besides our theoretical contributions, we demonstrate the LOPT embedding technique in point-cloud interpolation and PCA analysis. Our code is available at https://github.com/Baio0/LinearOPT.
https://proceedings.mlr.press/v202/baker23a.html
https://proceedings.mlr.press/v202/baker23a/baker23a.pdf
https://openreview.net/forum?id=Q8k4WzGgnK
Implicit Graph Neural Networks: A Monotone Operator Viewpoint
https://proceedings.mlr.press/v202/baker23a.html
Justin Baker, Qingsong Wang, Cory D Hauck, Bao Wang
https://proceedings.mlr.press/v202/baker23a.html
ICML 2023
Implicit graph neural networks (IGNNs) – that solve a fixed-point equilibrium equation using Picard iteration for representation learning – have shown remarkable performance in learning long-range dependencies (LRD) in the underlying graphs. However, IGNNs suffer from several issues, including 1) their expressivity is limited by their parameterizations for the well-posedness guarantee, 2) IGNNs are unstable in learning LRD, and 3) IGNNs become computationally inefficient when learning LRD. In this paper, we provide a new well-posedness characterization for IGNNs leveraging monotone operator theory, resulting in a much more expressive parameterization than the existing one. We also propose an orthogonal parameterization for IGNN based on Cayley transform to stabilize learning LRD. Furthermore, we leverage Anderson-accelerated operator splitting schemes to efficiently solve for the fixed point of the equilibrium equation of IGNN with monotone or orthogonal parameterization. We verify the computational efficiency and accuracy of the new models over existing IGNNs on various graph learning tasks at both graph and node levels.
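As background for the fixed-point view, the snippet below runs plain Picard iteration for a small implicit GNN layer, made well-posed here by a crude spectral rescaling so the update is a contraction. The paper's monotone-operator parameterization, Cayley-transform orthogonalization, and Anderson-accelerated solvers are not reproduced; this only shows the equilibrium equation being solved.

    import numpy as np

    rng = np.random.default_rng(0)

    # Small graph: symmetric normalized adjacency of a random graph with self-loops.
    n, d, h = 12, 4, 8
    A = (rng.random((n, n)) < 0.3).astype(float)
    A = np.maximum(A, A.T)
    np.fill_diagonal(A, 1.0)
    deg = A.sum(1)
    A_hat = A / np.sqrt(np.outer(deg, deg))           # D^{-1/2} A D^{-1/2}

    X = rng.standard_normal((n, d))
    W = rng.standard_normal((h, h))
    U = rng.standard_normal((h, d))

    # Rescale W so the map Z -> relu(W Z A_hat + U X^T) is a contraction,
    # which guarantees Picard iteration converges to the unique fixed point.
    W = 0.8 * W / (np.linalg.norm(W, 2) * np.linalg.norm(A_hat, 2))

    Z = np.zeros((h, n))
    for it in range(200):
        Z_new = np.maximum(W @ Z @ A_hat + U @ X.T, 0.0)   # ReLU
        if np.linalg.norm(Z_new - Z) < 1e-8:
            Z = Z_new
            break
        Z = Z_new
    print("converged after", it + 1, "iterations")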
https://proceedings.mlr.press/v202/bakshi23a.html
https://proceedings.mlr.press/v202/bakshi23a/bakshi23a.pdf
https://openreview.net/forum?id=lxRIOSlTbb
Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems
https://proceedings.mlr.press/v202/bakshi23a.html
Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau
https://proceedings.mlr.press/v202/bakshi23a.html
ICML 2023
Recently Chen and Poor initiated the study of learning mixtures of linear dynamical systems. While linear dynamical systems already have wide-ranging applications in modeling time-series data, using mixture models can lead to a better fit or even a richer understanding of underlying subpopulations represented in the data. In this work we give a new approach to learning mixtures of linear dynamical systems that is based on tensor decompositions. As a result, our algorithm succeeds without strong separation conditions on the components, and can be used to compete with the Bayes optimal clustering of the trajectories. Moreover our algorithm works in the challenging partially-observed setting. Our starting point is the simple but powerful observation that the classic Ho-Kalman algorithm is a relative of modern tensor decomposition methods for learning latent variable models. This gives us a playbook for how to extend it to work with more complicated generative models.
https://proceedings.mlr.press/v202/balabanov23a.html
https://proceedings.mlr.press/v202/balabanov23a/balabanov23a.pdf
https://openreview.net/forum?id=EMN99LtfYA
Block Subsampled Randomized Hadamard Transform for Nyström Approximation on Distributed Architectures
https://proceedings.mlr.press/v202/balabanov23a.html
Oleg Balabanov, Matthias Beaupère, Laura Grigori, Victor Lederer
https://proceedings.mlr.press/v202/balabanov23a.html
ICML 2023
This article introduces a novel structured random matrix composed blockwise from subsampled randomized Hadamard transforms (SRHTs). The block SRHT is expected to outperform well-known dimension reduction maps, including SRHT and Gaussian matrices on distributed architectures. We prove that a block SRHT with enough rows is an oblivious subspace embedding, i.e., an approximate isometry for an arbitrary low-dimensional subspace with high probability. Our estimate of the required number of rows is similar to that of the standard SRHT. This suggests that the two transforms should provide the same accuracy of approximation in the algorithms. The block SRHT can be readily incorporated into randomized methods for computing a low-rank approximation of a large-scale matrix, such as the Nyström method. For completeness, we revisit this method with a discussion of its implementation on distributed architectures.
https://proceedings.mlr.press/v202/ball23a.html
https://proceedings.mlr.press/v202/ball23a/ball23a.pdf
https://openreview.net/forum?id=h11j9w1ucU
Efficient Online Reinforcement Learning with Offline Data
https://proceedings.mlr.press/v202/ball23a.html
Philip J. Ball, Laura Smith, Ilya Kostrikov, Sergey Levine
https://proceedings.mlr.press/v202/ball23a.html
ICML 2023
Sample efficiency and exploration remain major challenges in online reinforcement learning (RL). A powerful approach that can be applied to address these issues is the inclusion of offline data, such as prior trajectories from a human expert or a sub-optimal exploration policy. Previous methods have relied on extensive modifications and additional complexity to ensure the effective use of this data. Instead, we ask: can we simply apply existing off-policy methods to leverage offline data when learning online? In this work, we demonstrate that the answer is yes; however, a set of minimal but important changes to existing off-policy RL algorithms are required to achieve reliable performance. We extensively ablate these design choices, demonstrating the key factors that most affect performance, and arrive at a set of recommendations that practitioners can readily apply, whether their data comprise a small number of expert demonstrations or large volumes of sub-optimal trajectories. We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches across a diverse set of competitive benchmarks, with no additional computational overhead.
https://proceedings.mlr.press/v202/ballu23a.html
https://proceedings.mlr.press/v202/ballu23a/ballu23a.pdf
https://openreview.net/forum?id=ImQC3p9wlm
Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes
https://proceedings.mlr.press/v202/ballu23a.html
Marin Ballu, Quentin Berthet
https://proceedings.mlr.press/v202/ballu23a.html
ICML 2023
Optimal transport is an important tool in machine learning, allowing one to capture geometric properties of the data through a linear program on transport polytopes. We present a single-loop optimization algorithm for minimizing general convex objectives on these domains, utilizing the principles of Sinkhorn matrix scaling and mirror descent. The proposed algorithm is robust to noise and can be used in an online setting. We provide theoretical guarantees for convex objectives and experimental results showcasing its effectiveness on both synthetic and real-world data.
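For reference, the Sinkhorn matrix-scaling building block mentioned in the abstract looks as follows for entropic OT between two histograms. The proposed Mirror Sinkhorn algorithm interleaves such scaling steps with mirror-descent updates on a general convex objective, which this minimal sketch does not attempt to reproduce.

    import numpy as np

    def sinkhorn(C, a, b, reg=0.5, n_iter=500):
        """Entropic OT between histograms a and b with cost matrix C."""
        K = np.exp(-C / reg)                  # Gibbs kernel
        u = np.ones_like(a)
        for _ in range(n_iter):
            v = b / (K.T @ u)                 # column scaling
            u = a / (K @ v)                   # row scaling
        P = u[:, None] * K * v[None, :]       # transport plan on the polytope
        return P, float(np.sum(P * C))

    rng = np.random.default_rng(0)
    x = rng.standard_normal((6, 2))
    y = rng.standard_normal((7, 2))
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
    a = np.full(6, 1 / 6)
    b = np.full(7, 1 / 7)
    P, cost = sinkhorn(C, a, b)
    print("column-marginal error:", np.abs(P.sum(0) - b).max(), " entropic cost:", cost)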
https://proceedings.mlr.press/v202/balogh23a.html
https://proceedings.mlr.press/v202/balogh23a/balogh23a.pdf
https://openreview.net/forum?id=sFqfXphJh5
On the Functional Similarity of Robust and Non-Robust Neural Representations
https://proceedings.mlr.press/v202/balogh23a.html
András Balogh, Márk Jelasity
https://proceedings.mlr.press/v202/balogh23a.html
ICML 2023
Model stitching—where the internal representations of two neural networks are aligned linearly—helped demonstrate that the representations of different neural networks for the same task are surprisingly similar in a functional sense. At the same time, the representations of adversarially robust networks are considered to be different from non-robust representations. For example, robust image classifiers are invertible, while non-robust networks are not. Here, we investigate the functional similarity of robust and non-robust representations for image classification with the help of model stitching. We find that robust and non-robust networks indeed have different representations. However, these representations are compatible regarding accuracy. From the point of view of robust accuracy, compatibility decreases quickly after the first few layers but the representations become compatible again in the last layers, in the sense that the properties of the front model can be recovered. Moreover, this is true even in the case of cross-task stitching. Our results suggest that stitching in the initial, preprocessing layers and the final, abstract layers test different kinds of compatibilities. In particular, the final layers are easy to match, because their representations depend mostly on the same abstract task specification, in our case, the classification of the input into $n$ classes.
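Mechanically, a stitching layer is just a trainable (often linear or affine) map between the activations of two networks at a chosen layer. The toy below fits such a map by least squares on synthetic activations; in practice the stitching layer is trained with the downstream task loss, so this is only a sketch of the interface.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for activations of two networks at some layer, for the same 1000 inputs.
    n, d1, d2 = 1000, 64, 48
    A = rng.standard_normal((n, d1))                          # front model activations
    T = rng.standard_normal((d1, d2))
    B = A @ T + 0.01 * rng.standard_normal((n, d2))           # back model's expected input
                                                              # (here linearly related by design)

    # Stitching layer: an affine map from A to B, fit by least squares as a cheap proxy.
    A1 = np.hstack([A, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(A1, B, rcond=None)
    rel_err = np.linalg.norm(A1 @ M - B) / np.linalg.norm(B)
    print("relative stitching error:", rel_err)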
https://proceedings.mlr.press/v202/balseiro23a.html
https://proceedings.mlr.press/v202/balseiro23a/balseiro23a.pdf
https://openreview.net/forum?id=5h42xM0pwn
Robust Budget Pacing with a Single Sample
https://proceedings.mlr.press/v202/balseiro23a.html
Santiago R. Balseiro, Rachitesh Kumar, Vahab Mirrokni, Balasubramanian Sivan, Di Wang
https://proceedings.mlr.press/v202/balseiro23a.html
ICML 2023
Major Internet advertising platforms offer budget pacing tools as a standard service for advertisers to manage their ad campaigns. Given the inherent non-stationarity in an advertiser’s value and also competing advertisers’ values over time, a commonly used approach is to learn a target expenditure plan that specifies a target spend as a function of time, and then run a controller that tracks this plan. This raises the question: how many historical samples are required to learn a good expenditure plan? We study this question by considering an advertiser repeatedly participating in $T$ second-price auctions, where the tuple of her value and the highest competing bid is drawn from an unknown time-varying distribution. The advertiser seeks to maximize her total utility subject to her budget constraint. Prior work has shown the sufficiency of $T\log T$ samples per distribution to achieve the optimal $O(\sqrt{T})$-regret. We dramatically improve this state-of-the-art and show that just one sample per distribution is enough to achieve the near-optimal $\tilde O(\sqrt{T})$-regret, while still being robust to noise in the sampling distributions.
https://proceedings.mlr.press/v202/banihashem23a.html
https://proceedings.mlr.press/v202/banihashem23a/banihashem23a.pdf
https://openreview.net/forum?id=2hF9MnBfUk
Dynamic Constrained Submodular Optimization with Polylogarithmic Update Time
https://proceedings.mlr.press/v202/banihashem23a.html
Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh
https://proceedings.mlr.press/v202/banihashem23a.html
ICML 2023
Maximizing a monotone submodular function under a cardinality constraint $k$ is a core problem in machine learning and databases, with many basic applications, including video and data summarization, recommendation systems, feature extraction, exemplar clustering, and coverage problems. We study this classic problem in the fully dynamic model where a stream of insertions and deletions of elements of an underlying ground set is given and the goal is to maintain an approximate solution using a fast update time. A recent paper at NeurIPS’20 by Lattanzi, Mitrovic, Norouzi-Fard, Tarnawski, and Zadimoghaddam claims to obtain a dynamic algorithm for this problem with a $(\frac{1}{2} -\epsilon)$ approximation ratio and a query complexity bounded by $\mathrm{poly}(\log(n),\log(k),\epsilon^{-1})$. However, as we explain in this paper, the analysis has some important gaps. Having a dynamic algorithm for the problem with polylogarithmic update time is even more important in light of a recent result by Chen and Peng at STOC’22, who show a matching lower bound for the problem – any randomized algorithm with a $\frac{1}{2}+\epsilon$ approximation ratio must have an amortized query complexity that is polynomial in $n$. In this paper, we develop a simpler algorithm for the problem that maintains a $(\frac{1}{2}-\epsilon)$-approximate solution for submodular maximization under cardinality constraint $k$ using a polylogarithmic amortized update time.
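For orientation, the static objective being maintained is classical: the greedy algorithm below achieves the usual (1 - 1/e) guarantee for monotone submodular maximization under a cardinality constraint, shown here on a coverage function. The dynamic algorithm in the paper maintains a comparable solution under insertions and deletions with polylogarithmic amortized update time, which this sketch does not attempt.

    import numpy as np

    def greedy_max_coverage(sets, k):
        """Classic greedy for monotone submodular maximization under a
        cardinality constraint, instantiated on a coverage objective.

        sets: dict mapping element id -> set of items it covers.
        """
        chosen, covered = [], set()
        for _ in range(k):
            best, best_gain = None, 0
            for e, items in sets.items():
                if e in chosen:
                    continue
                gain = len(items - covered)      # marginal coverage gain
                if gain > best_gain:
                    best, best_gain = e, gain
            if best is None:                     # no remaining marginal gain
                break
            chosen.append(best)
            covered |= sets[best]
        return chosen, len(covered)

    ground = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
    print(greedy_max_coverage(ground, k=2))      # picks two complementary sets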
https://proceedings.mlr.press/v202/bao23a.html
https://proceedings.mlr.press/v202/bao23a/bao23a.pdf
https://openreview.net/forum?id=Urp3atR1Z3
One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale
https://proceedings.mlr.press/v202/bao23a.html
Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu
https://proceedings.mlr.press/v202/bao23a.html
ICML 2023
This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is that learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model – perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoke models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation).
https://proceedings.mlr.press/v202/bao23b.html
https://proceedings.mlr.press/v202/bao23b/bao23b.pdf
https://openreview.net/forum?id=rnNBSMOWvA
Optimizing the Collaboration Structure in Cross-Silo Federated Learning
https://proceedings.mlr.press/v202/bao23b.html
Wenxuan Bao, Haohan Wang, Jun Wu, Jingrui He
https://proceedings.mlr.press/v202/bao23b.html
ICML 2023
In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized. Despite utilizing more training data, FL can suffer from the negative transfer problem: the global FL model may even perform worse than models trained with local data only. In this paper, we propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions based on their distribution distances and data quantities. As a result, each client only collaborates with the clients having similar data distributions, and tends to collaborate with more clients when it has less data. We evaluate our framework with a variety of datasets, models, and types of non-IIDness. Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
https://proceedings.mlr.press/v202/bar-tal23a.html
https://proceedings.mlr.press/v202/bar-tal23a/bar-tal23a.pdf
https://openreview.net/forum?id=D4ajVWmgLB
MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
https://proceedings.mlr.press/v202/bar-tal23a.html
Omer Bar-Tal, Lior Yariv, Yaron Lipman, Tali Dekel
https://proceedings.mlr.press/v202/bar-tal23a.html
ICML 2023
Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image and fast adaptation to new tasks remain open challenges, currently mostly addressed by costly and lengthy re-training and fine-tuning or by ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.
https://proceedings.mlr.press/v202/barakat23a.html
https://proceedings.mlr.press/v202/barakat23a/barakat23a.pdf
https://openreview.net/forum?id=ZnHXYHx70x
Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space
https://proceedings.mlr.press/v202/barakat23a.html
Anas Barakat, Ilyas Fatkhullin, Niao He
https://proceedings.mlr.press/v202/barakat23a.html
ICML 2023
We consider the reinforcement learning (RL) problem with general utilities which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative reward RL setting, this problem includes as particular cases constrained RL, pure exploration and learning from demonstrations among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality respectively, under adequate assumptions. We further address the setting of large finite state action spaces via linear function approximation of the occupancy measure and show a $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
https://proceedings.mlr.press/v202/barbiero23a.html
https://proceedings.mlr.press/v202/barbiero23a/barbiero23a.pdf
https://openreview.net/forum?id=KbvON8xOCJ
Interpretable Neural-Symbolic Concept Reasoning
https://proceedings.mlr.press/v202/barbiero23a.html
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio, Frederic Precioso, Mateja Jamnik, Giuseppe Marra
https://proceedings.mlr.press/v202/barbiero23a.html
ICML 2023
Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust. Concept-based models aim to address this issue by learning tasks based on a set of human-understandable concepts. However, state-of-the-art concept-based models rely on high-dimensional concept embedding representations which lack a clear semantic meaning, thus questioning the interpretability of their decision process. To overcome this limitation, we propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings. In DCR, neural networks do not make task predictions directly, but they build syntactic rule structures using concept embeddings. DCR then executes these rules on meaningful concept truth degrees to provide a final interpretable and semantically-consistent prediction in a differentiable manner. Our experiments show that DCR: (i) improves performance by up to +25% w.r.t. state-of-the-art interpretable concept-based models on challenging benchmarks, (ii) discovers meaningful logic rules matching known ground truths even in the absence of concept supervision during training, and (iii) facilitates the generation of counterfactual examples by providing the learnt rules as guidance.
https://proceedings.mlr.press/v202/bartan23a.html
https://proceedings.mlr.press/v202/bartan23a/bartan23a.pdf
https://openreview.net/forum?id=GN9bGEWvkx
Moccasin: Efficient Tensor Rematerialization for Neural Networks
https://proceedings.mlr.press/v202/bartan23a.html
Burak Bartan, Haoming Li, Harris Teague, Christopher Lott, Bistra Dilkina
https://proceedings.mlr.press/v202/bartan23a.html
ICML 2023
The deployment and training of neural networks on edge computing devices pose many challenges. The low memory nature of edge devices is often one of the biggest limiting factors encountered in the deployment of large neural network models. Tensor rematerialization or recompute is a way to address high memory requirements for neural network training and inference. In this paper we consider the problem of execution time minimization of compute graphs subject to a memory budget. In particular, we develop a new constraint programming formulation called Moccasin with only $O(n)$ integer variables, where $n$ is the number of nodes in the compute graph. This is a significant improvement over the works in the recent literature that propose formulations with $O(n^2)$ Boolean variables. We present numerical studies that show that our approach is up to an order of magnitude faster than recent work especially for large-scale graphs.
https://proceedings.mlr.press/v202/bassily23a.html
https://proceedings.mlr.press/v202/bassily23a/bassily23a.pdf
https://openreview.net/forum?id=4UStsbnfVT
User-level Private Stochastic Convex Optimization with Optimal Rates
https://proceedings.mlr.press/v202/bassily23a.html
Raef Bassily, Ziteng Sun
https://proceedings.mlr.press/v202/bassily23a.html
ICML 2023
We study the problem of differentially private (DP) stochastic convex optimization (SCO) under the notion of user-level differential privacy. In this problem, there are $n$ users, each contributing $m>1$ samples to the input dataset of the private SCO algorithm, and the notion of indistinguishability embedded in DP is w.r.t. replacing the entire local dataset of any given user. Under smoothness conditions of the loss, we establish the optimal rates for user-level DP-SCO in both the central and local models of DP. In particular, we show, roughly, that the optimal rate is $\frac{1}{\sqrt{nm}}+\frac{\sqrt{d}}{\varepsilon n \sqrt{m}}$ in the central setting and is $\frac{\sqrt{d}}{\varepsilon \sqrt{nm}}$ in the local setting, where $d$ is the dimensionality of the problem and $\varepsilon$ is the privacy parameter. Our algorithms combine new user-level DP mean estimation techniques with carefully designed first-order stochastic optimization methods. For the central DP setting, our optimal rate improves over the rate attained for the same setting in Levy et al. (2021) by $\sqrt{d}$ factor. One of the main ingredients that enabled such an improvement is a novel application of the generalization properties of DP in the context of multi-pass stochastic gradient methods.
https://proceedings.mlr.press/v202/basu23a.html
https://proceedings.mlr.press/v202/basu23a/basu23a.pdf
https://openreview.net/forum?id=0bR5JuxaoN
A Statistical Perspective on Retrieval-Based Models
https://proceedings.mlr.press/v202/basu23a.html
Soumya Basu, Ankit Singh Rawat, Manzil Zaheer
https://proceedings.mlr.press/v202/basu23a.html
ICML 2023
Many modern high-performing machine learning models increasingly rely on scaling up models, e.g., transformer networks. Simultaneously, a parallel line of work aims to improve the model performance by augmenting an input instance with other (labeled) instances during inference. Examples of such augmentations include task-specific prompts and similar examples retrieved from the training data by a nonparametric component. Despite a growing literature showcasing the promise of these retrieval-based models, their theoretical underpinnings remain under-explored. In this paper, we present a formal treatment of retrieval-based models to characterize their performance via a novel statistical perspective. In particular, we study two broad classes of retrieval-based classification approaches: First, we analyze a local learning framework that employs an explicit local empirical risk minimization based on retrieved examples for each input instance. Interestingly, we show that breaking down the underlying learning task into local sub-tasks enables the model to employ a low complexity parametric component to ensure good overall performance. The second class of retrieval-based approaches we explore learns a global model using kernel methods to directly map an input instance and retrieved examples to a prediction, without explicitly solving a local learning task.
https://proceedings.mlr.press/v202/bauer23a.html
https://proceedings.mlr.press/v202/bauer23a/bauer23a.pdf
https://openreview.net/forum?id=thUjOwfzzv
Human-Timescale Adaptation in an Open-Ended Task Space
https://proceedings.mlr.press/v202/bauer23a.html
Jakob Bauer, Kate Baumli, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja Rakicevic, Tim Rocktäschel, Yannick Schroecker, Satinder Singh, Jakub Sygnowski, Karl Tuyls, Sarah York, Alexander Zacherl, Lei M Zhang
https://proceedings.mlr.press/v202/bauer23a.html
ICML 2023
Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent’s capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
https://proceedings.mlr.press/v202/baum23a.html
https://proceedings.mlr.press/v202/baum23a/baum23a.pdf
https://openreview.net/forum?id=XxMRhjbDGq
A Kernel Stein Test of Goodness of Fit for Sequential Models
https://proceedings.mlr.press/v202/baum23a.html
Jerome Baum, Heishiro Kanagawa, Arthur Gretton
https://proceedings.mlr.press/v202/baum23a.html
ICML 2023
We propose a goodness-of-fit measure for probability densities modeling observations with varying dimensionality, such as text documents of differing lengths or variable-length sequences. The proposed measure is an instance of the kernel Stein discrepancy (KSD), which has been used to construct goodness-of-fit tests for unnormalized densities. The KSD is defined by its Stein operator: current operators used in testing apply to fixed-dimensional spaces. As our main contribution, we extend the KSD to the variable-dimension setting by identifying appropriate Stein operators, and propose a novel KSD goodness-of-fit test. As with the previous variants, the proposed KSD does not require the density to be normalized, allowing the evaluation of a large class of models. Our test is shown to perform well in practice on discrete sequential data benchmarks.
https://proceedings.mlr.press/v202/bechavod23a.html
https://proceedings.mlr.press/v202/bechavod23a/bechavod23a.pdf
https://openreview.net/forum?id=DOdfxTZLyq
Individually Fair Learning with One-Sided Feedback
https://proceedings.mlr.press/v202/bechavod23a.html
Yahav Bechavod, Aaron Roth
https://proceedings.mlr.press/v202/bechavod23a.html
ICML 2023
We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances. On each round, $k$ instances arrive and receive classification outcomes according to a randomized policy deployed by the learner, whose goal is to maximize accuracy while deploying individually fair policies. We first present a novel auditing scheme, capable of utilizing feedback from dynamically-selected panels of multiple, possibly inconsistent, auditors regarding fairness violations. In particular, we show how our proposed auditing scheme allows for algorithmically exploring the resulting accuracy-fairness frontier, with no need for additional feedback from auditors. We then present an efficient reduction from our problem of online learning with one-sided feedback and a panel reporting fairness violations to the contextual combinatorial semi-bandit problem (Cesa-Bianchi & Lugosi, 2009; Gyorgy et al., 2007), allowing us to leverage algorithms for contextual combinatorial semi-bandits to establish multi-criteria no regret guarantees in our setting, simultaneously for accuracy and fairness. Our results eliminate two potential sources of bias from prior work: the “hidden outcomes” that are not available to an algorithm operating in the full information setting, and human biases that might be present in any single human auditor, but can be mitigated by selecting a well-chosen panel.
https://proceedings.mlr.press/v202/becker23a.html
https://proceedings.mlr.press/v202/becker23a/becker23a.pdf
https://openreview.net/forum?id=LztkK0UZxS
Predicting Ordinary Differential Equations with Transformers
https://proceedings.mlr.press/v202/becker23a.html
Sören Becker, Michal Klein, Alexander Neitz, Giambattista Parascandolo, Niki Kilbertus
https://proceedings.mlr.press/v202/becker23a.html
ICML 2023
We develop a transformer-based sequence-to-sequence model that recovers scalar ordinary differential equations (ODEs) in symbolic form from irregularly sampled and noisy observations of a single solution trajectory. We demonstrate in extensive empirical evaluations that our model performs better or on par with existing methods in terms of accurate recovery across various settings. Moreover, our method is efficiently scalable: after one-time pretraining on a large set of ODEs, we can infer the governing law of a new observed solution in a few forward passes of the model.
https://proceedings.mlr.press/v202/beechey23a.html
https://proceedings.mlr.press/v202/beechey23a/beechey23a.pdf
https://openreview.net/forum?id=R1blujRwj1
Explaining Reinforcement Learning with Shapley Values
https://proceedings.mlr.press/v202/beechey23a.html
Daniel Beechey, Thomas M. S. Smith, Özgür Şimşek
https://proceedings.mlr.press/v202/beechey23a.html
ICML 2023
For reinforcement learning systems to be widely adopted, their users must understand and trust them. We present a theoretical analysis of explaining reinforcement learning using Shapley values, following a principled approach from game theory for identifying the contribution of individual players to the outcome of a cooperative game. We call this general framework Shapley Values for Explaining Reinforcement Learning (SVERL). Our analysis exposes the limitations of earlier uses of Shapley values in reinforcement learning. We then develop an approach that uses Shapley values to explain agent performance. In a variety of domains, SVERL produces meaningful explanations that match and supplement human intuition.
https://proceedings.mlr.press/v202/behmanesh23a.html
https://proceedings.mlr.press/v202/behmanesh23a/behmanesh23a.pdf
https://openreview.net/forum?id=PWRIIwBJFo
TIDE: Time Derivative Diffusion for Deep Learning on Graphs
https://proceedings.mlr.press/v202/behmanesh23a.html
Maysam Behmanesh, Maximilian Krahn, Maks Ovsjanikov
https://proceedings.mlr.press/v202/behmanesh23a.html
ICML 2023
A prominent paradigm for graph neural networks is based on the message-passing framework. In this framework, information communication is realized only between neighboring nodes. The challenge for approaches that use this paradigm is to ensure efficient and accurate long-distance communication between nodes, as deep convolutional networks are prone to over-smoothing. In this paper, we present a novel method based on time derivative graph diffusion (TIDE) to overcome these structural limitations of the message-passing framework. Our approach allows for optimizing the spatial extent of diffusion across various tasks and network channels, thus enabling medium and long-distance communication efficiently. Furthermore, we show that our architecture design also enables local message-passing and thus inherits from the capabilities of local message-passing approaches. We show that on both widely used graph benchmarks and synthetic mesh and graph datasets, the proposed framework outperforms state-of-the-art methods by a significant margin.
https://proceedings.mlr.press/v202/benbaki23a.html
https://proceedings.mlr.press/v202/benbaki23a/benbaki23a.pdf
https://openreview.net/forum?id=RAeN6s9RZV
Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
https://proceedings.mlr.press/v202/benbaki23a.html
Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder
https://proceedings.mlr.press/v202/benbaki23a.html
ICML 2023
The sheer size of modern neural networks makes model serving a serious computational challenge. A popular class of compression techniques overcomes this challenge by pruning or sparsifying the weights of pretrained networks. While useful, these techniques often face serious tradeoffs between computational requirements and compression quality. In this work, we propose a novel optimization-based pruning framework that considers the combined effect of pruning (and updating) multiple weights subject to a sparsity constraint. Our approach, CHITA, extends the classical Optimal Brain Surgeon framework and results in significant improvements in speed, memory, and performance over existing optimization-based approaches for network pruning. CHITA’s main workhorse performs combinatorial optimization updates on a memory-friendly representation of local quadratic approximation(s) of the loss function. On a standard benchmark of pretrained models and datasets, CHITA leads to superior sparsity-accuracy tradeoffs than competing methods. For example, for MLPNet with only 2% of the weights retained, our approach improves the accuracy by 63% relative to the state of the art. Furthermore, when used in conjunction with fine-tuning SGD steps, our method achieves significant accuracy gains over state-of-the-art approaches. Our code is publicly available at: https://github.com/mazumder-lab/CHITA .
https://proceedings.mlr.press/v202/bender23a.html
https://proceedings.mlr.press/v202/bender23a/bender23a.pdf
https://openreview.net/forum?id=3UHmUaOVWp
Continuously Parameterized Mixture Models
https://proceedings.mlr.press/v202/bender23a.html
Christopher M Bender, Yifeng Shi, Marc Niethammer, Junier Oliva
https://proceedings.mlr.press/v202/bender23a.html
ICML 2023
Mixture models are universal approximators of smooth densities but are difficult to utilize in complicated datasets due to restrictions on typically available modes and challenges with initializations. We show that by continuously parameterizing a mixture of factor analyzers using a learned ordinary differential equation, we can improve the fit of mixture models over direct methods. Once trained, the mixture components can be extracted and the neural ODE can be discarded, leaving us with an effective, but low-resource model. We additionally explore the use of a training curriculum from an easy-to-model latent space extracted from a normalizing flow to the more complex input space and show that the smooth curriculum helps to stabilize and improve results with and without the continuous parameterization. Finally, we introduce a hierarchical version of the model to enable more flexible, robust classification and clustering, and show substantial improvements against traditional parameterizations of GMMs.
https://proceedings.mlr.press/v202/bendinelli23a.html
https://proceedings.mlr.press/v202/bendinelli23a/bendinelli23a.pdf
https://openreview.net/forum?id=EiHX7MfAG0
Controllable Neural Symbolic Regression
https://proceedings.mlr.press/v202/bendinelli23a.html
Tommaso Bendinelli, Luca Biggio, Pierre-Alexandre Kamienny
https://proceedings.mlr.press/v202/bendinelli23a.html
ICML 2023
In symbolic regression, the objective is to find an analytical expression that accurately fits experimental data with the minimal use of mathematical symbols such as operators, variables, and constants. However, the combinatorial space of possible expressions can make it challenging for traditional evolutionary algorithms to find the correct expression in a reasonable amount of time. To address this issue, Neural Symbolic Regression (NSR) algorithms have been developed that can quickly identify patterns in the data and generate analytical expressions. However, these methods, in their current form, lack the capability to incorporate user-defined prior knowledge, which is often required in natural sciences and engineering fields. To overcome this limitation, we propose a novel neural symbolic regression method, named Neural Symbolic Regression with Hypothesis (NSRwH) that enables the explicit incorporation of assumptions about the expected structure of the ground-truth expression into the prediction process. Our experiments demonstrate that the proposed conditioned deep learning model outperforms its unconditioned counterparts in terms of accuracy while also providing control over the predicted expression structure.
https://proceedings.mlr.press/v202/bengs23a.html
https://proceedings.mlr.press/v202/bengs23a/bengs23a.pdf
https://openreview.net/forum?id=MUC7ASJiBT
On Second-Order Scoring Rules for Epistemic Uncertainty Quantification
https://proceedings.mlr.press/v202/bengs23a.html
Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
https://proceedings.mlr.press/v202/bengs23a.html
ICML 2023
It is well known that accurate probabilistic predictors can be trained through empirical risk minimisation with proper scoring rules as loss functions. While such learners capture so-called aleatoric uncertainty of predictions, various machine learning methods have recently been developed with the goal to let the learner also represent its epistemic uncertainty, i.e., the uncertainty caused by a lack of knowledge and data. An emerging branch of the literature proposes the use of a second-order learner that provides predictions in terms of distributions on probability distributions. However, recent work has revealed serious theoretical shortcomings for second-order predictors based on loss minimisation. In this paper, we generalise these findings and prove a more fundamental result: There seems to be no loss function that provides an incentive for a second-order learner to faithfully represent its epistemic uncertainty in the same manner as proper scoring rules do for standard (first-order) learners. As a main mathematical tool to prove this result, we introduce the generalised notion of second-order scoring rules.
https://proceedings.mlr.press/v202/bennouna23a.html
https://proceedings.mlr.press/v202/bennouna23a/bennouna23a.pdf
https://openreview.net/forum?id=4cvSExetbO
Certified Robust Neural Networks: Generalization and Corruption Resistance
https://proceedings.mlr.press/v202/bennouna23a.html
Amine Bennouna, Ryan Lucas, Bart Van Parys
https://proceedings.mlr.press/v202/bennouna23a.html
ICML 2023
Recent work has demonstrated that robustness (to "corruption") can be at odds with generalization. Adversarial training, for instance, aims to reduce the problematic susceptibility of modern neural networks to small data perturbations. Surprisingly, overfitting is a major concern in adversarial training despite being mostly absent in standard training. We provide here theoretical evidence for this peculiar “robust overfitting” phenomenon. Subsequently, we advance a novel distributionally robust loss function bridging robustness and generalization. We demonstrate, both theoretically and empirically, that the loss enjoys a certified level of robustness against two common types of corruption (data evasion and poisoning attacks) while ensuring guaranteed generalization. We show through careful numerical experiments that our resulting holistic robust (HR) training procedure yields SOTA performance. Finally, we indicate that HR training can be interpreted as a direct extension of adversarial training and comes with a negligible additional computational burden. A ready-to-use Python library implementing our algorithm is available at https://github.com/RyanLucas3/HR_Neural_Networks.
https://proceedings.mlr.press/v202/berlinghieri23a.html
https://proceedings.mlr.press/v202/berlinghieri23a/berlinghieri23a.pdf
https://openreview.net/forum?id=Qtix8HLmDx
Gaussian processes at the Helm(holtz): A more fluid model for ocean currents
https://proceedings.mlr.press/v202/berlinghieri23a.html
Renato Berlinghieri, Brian L. Trippe, David R. Burt, Ryan James Giordano, Kaushik Srinivasan, Tamay Özgökmen, Junfei Xia, Tamara Broderick
https://proceedings.mlr.press/v202/berlinghieri23a.html
ICML 2023
Oceanographers are interested in predicting ocean currents and identifying divergences in a current vector field based on sparse observations of buoy velocities. Since we expect current dynamics to be smooth but highly non-linear, Gaussian processes (GPs) offer an attractive model. But we show that applying a GP with a standard stationary kernel directly to buoy data can struggle at both current prediction and divergence identification – due to some physically unrealistic prior assumptions. To better reflect known physical properties of currents, we propose to instead put a standard stationary kernel on the divergence and curl-free components of a vector field obtained through a Helmholtz decomposition. We show that, because this decomposition relates to the original vector field just via mixed partial derivatives, we can still perform inference given the original data with only a small constant multiple of additional computational expense. We illustrate the benefits of our method on synthetic and real ocean data.
https://proceedings.mlr.press/v202/bernasconi23a.html
https://proceedings.mlr.press/v202/bernasconi23a/bernasconi23a.pdf
https://openreview.net/forum?id=jiC1uCDIEe
Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion
https://proceedings.mlr.press/v202/bernasconi23a.html
Martino Bernasconi, Matteo Castiglioni, Andrea Celli, Alberto Marchesi, Francesco Trovò, Nicola Gatti
https://proceedings.mlr.press/v202/bernasconi23a.html
ICML 2023
Bayesian persuasion studies how an informed sender should influence beliefs of rational receivers that take decisions through Bayesian updating of a common prior. We focus on the online Bayesian persuasion framework, in which the sender repeatedly faces one or more receivers with unknown and adversarially selected types. First, we show how to obtain a tight $\tilde O(T^{1/2})$ regret bound in the case in which the sender faces a single receiver and has bandit feedback, improving over the best previously known bound of $\tilde O(T^{4/5})$. Then, we provide the first no-regret guarantees for the multi-receiver setting under bandit feedback. Finally, we show how to design no-regret algorithms with polynomial per-iteration running time by exploiting type reporting, thereby circumventing known complexity results on online Bayesian persuasion. We provide efficient algorithms guaranteeing a $O(T^{1/2})$ regret upper bound both in the single- and multi-receiver scenario when type reporting is allowed.
https://proceedings.mlr.press/v202/bernasconi23b.html
https://proceedings.mlr.press/v202/bernasconi23b/bernasconi23b.pdf
https://openreview.net/forum?id=RgwqlatND7
Constrained Phi-Equilibria
https://proceedings.mlr.press/v202/bernasconi23b.html
Martino Bernasconi, Matteo Castiglioni, Alberto Marchesi, Francesco Trovò, Nicola Gatti
https://proceedings.mlr.press/v202/bernasconi23b.html
ICML 2023
The computational study of equilibria involving constraints on players’ strategies has been largely neglected. However, in real-world applications, players are usually subject to constraints ruling out the feasibility of some of their strategies, such as, e.g., safety requirements and budget caps. Computational studies on constrained versions of the Nash equilibrium have led to some results under very stringent assumptions, while finding constrained versions of the correlated equilibrium (CE) is still unexplored. In this paper, we introduce and computationally characterize constrained Phi-equilibria—a more general notion than constrained CEs—in normal-form games. We show that computing such equilibria is in general computationally intractable, and also that the set of the equilibria may not be convex, providing a sharp divide with unconstrained CEs. Nevertheless, we provide a polynomial-time algorithm for computing a constrained (approximate) Phi-equilibrium maximizing a given linear function, when either the number of constraints or that of players’ actions is fixed. Moreover, in the special case in which a player’s constraints do not depend on other players’ strategies, we show that an exact, function-maximizing equilibrium can be computed in polynomial time, while one (approximate) equilibrium can be found with an efficient decentralized no-regret learning algorithm.
https://proceedings.mlr.press/v202/berrevoets23a.html
https://proceedings.mlr.press/v202/berrevoets23a/berrevoets23a.pdf
https://openreview.net/forum?id=8pCLQsEMPQ
Differentiable and Transportable Structure Learning
https://proceedings.mlr.press/v202/berrevoets23a.html
Jeroen Berrevoets, Nabeel Seedat, Fergus Imrie, Mihaela Van Der Schaar
https://proceedings.mlr.press/v202/berrevoets23a.html
ICML 2023
Directed acyclic graphs (DAGs) encode a lot of information about a particular distribution in their structure. However, compute required to infer these structures is typically super-exponential in the number of variables, as inference requires a sweep of a combinatorially large space of potential structures. That is, until recent advances made it possible to search this space using a differentiable metric, drastically reducing search time. While this technique, named NOTEARS, is widely considered a seminal work in DAG discovery, it concedes an important property in favour of differentiability: transportability. To be transportable, the structures discovered on one dataset must apply to another dataset from the same domain. We introduce D-Struct, which recovers transportability in the discovered structures through a novel architecture and loss function while remaining fully differentiable. Because D-Struct remains differentiable, our method can be easily adopted in existing differentiable architectures, as was previously done with NOTEARS. In our experiments, we empirically validate D-Struct with respect to edge accuracy and structural Hamming distance in a variety of settings.
https://proceedings.mlr.press/v202/berzins23a.html
https://proceedings.mlr.press/v202/berzins23a/berzins23a.pdf
https://openreview.net/forum?id=F2OjOG4j55
Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision
https://proceedings.mlr.press/v202/berzins23a.html
Arturs Berzins
https://proceedings.mlr.press/v202/berzins23a.html
ICML 2023
A neural network consisting of piecewise affine building blocks, such as fully-connected layers and ReLU activations, is itself a piecewise affine function supported on a polyhedral complex. This complex has been previously studied to characterize theoretical properties of neural networks, but, in practice, extracting it remains a challenge due to its high combinatorial complexity. A natural idea described in previous works is to subdivide the regions via intersections with hyperplanes induced by each neuron. However, we argue that this view leads to computational redundancy. Instead of regions, we propose to subdivide edges, leading to a novel method for polyhedral complex extraction. A key to this are sign-vectors, which encode the combinatorial structure of the complex. Our approach allows us to use standard tensor operations on a GPU, taking seconds for millions of cells on a consumer-grade machine. Motivated by the growing interest in neural shape representation, we use the speed and differentiability of our method to optimize geometric properties of the complex. The code is available at https://github.com/arturs-berzins/relu_edge_subdivision.
https://proceedings.mlr.press/v202/bethune23a.html
https://proceedings.mlr.press/v202/bethune23a/bethune23a.pdf
https://openreview.net/forum?id=g68Q7mL0P5
Robust One-Class Classification with Signed Distance Function using 1-Lipschitz Neural Networks
https://proceedings.mlr.press/v202/bethune23a.html
Louis Béthune, Paul Novello, Guillaume Coiffier, Thibaut Boissin, Mathieu Serrurier, Quentin Vincenot, Andres Troya-Galvis
https://proceedings.mlr.press/v202/bethune23a.html
ICML 2023
We propose a new method, dubbed One Class Signed Distance Function (OCSDF), to perform One Class Classification (OCC) by provably learning the Signed Distance Function (SDF) to the boundary of the support of any distribution. The distance to the support can be interpreted as a normality score, and its approximation using 1-Lipschitz neural networks provides robustness bounds against $\ell_2$ adversarial attacks, an under-explored weakness of deep learning-based OCC algorithms. As a result, OCSDF comes with a new metric, certified AUROC, that can be computed at the same cost as any classical AUROC. We show that OCSDF is competitive against concurrent methods on tabular and image data while being far more robust to adversarial attacks, illustrating its theoretical properties. Finally, as exploratory research perspectives, we theoretically and empirically show how OCSDF connects OCC with image generation and implicit neural surface parametrization.
https://proceedings.mlr.press/v202/bevilacqua23a.html
https://proceedings.mlr.press/v202/bevilacqua23a/bevilacqua23a.pdf
https://openreview.net/forum?id=kP2p67F4G7
Neural Algorithmic Reasoning with Causal Regularisation
https://proceedings.mlr.press/v202/bevilacqua23a.html
Beatrice Bevilacqua, Kyriacos Nikiforou, Borja Ibarz, Ioana Bica, Michela Paganini, Charles Blundell, Jovana Mitrovic, Petar Veličković
https://proceedings.mlr.press/v202/bevilacqua23a.html
ICML 2023
Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating they can learn to execute classical algorithms on unseen data coming from the training distribution. However, the performance of existing neural reasoners significantly degrades on out-of-distribution (OOD) test data, where inputs have larger sizes. In this work, we make an important observation: there are many different inputs for which an algorithm will perform certain intermediate computations identically. This insight allows us to develop data augmentation procedures that, given an algorithm’s intermediate trajectory, produce inputs for which the target algorithm would have exactly the same next trajectory step. We ensure invariance in the next-step prediction across such inputs, by employing a self-supervised objective derived from our observation, formalised in a causal graph. We prove that the resulting method, which we call Hint-ReLIC, improves the OOD generalisation capabilities of the reasoner. We evaluate our method on the CLRS algorithmic reasoning benchmark, where we show up to 3x improvements on the OOD test data.
https://proceedings.mlr.press/v202/bharti23a.html
https://proceedings.mlr.press/v202/bharti23a/bharti23a.pdf
https://openreview.net/forum?id=s4dX9ymHrP
Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
https://proceedings.mlr.press/v202/bharti23a.html
Ayush Bharti, Masha Naslidnyk, Oscar Key, Samuel Kaski, Francois-Xavier Briol
https://proceedings.mlr.press/v202/bharti23a.html
ICML 2023
Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
https://proceedings.mlr.press/v202/bhaskara23a.html
https://proceedings.mlr.press/v202/bhaskara23a/bhaskara23a.pdf
https://openreview.net/forum?id=SgeIqUvo4w
Bandit Online Linear Optimization with Hints and Queries
https://proceedings.mlr.press/v202/bhaskara23a.html
Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
https://proceedings.mlr.press/v202/bhaskara23a.html
ICML 2023
We study variants of the online linear optimization (OLO) problem with bandit feedback, where the algorithm has access to external information about the unknown cost vector. Our motivation is the recent body of work on using such “hints” towards improving regret bounds for OLO problems in the full-information setting. Unlike in the full-information OLO setting, with bandit feedback, we first show that one cannot improve the standard regret bounds of $\tilde{O}(\sqrt{T})$ by using hints, even if they are always well-correlated with the cost vector. In contrast, if the algorithm is empowered to issue queries and if all the responses are correct, then we show $O(\log T)$ regret is achievable. We then show how to make this result more robust—when some of the query responses can be adversarial—by using a little feedback on the quality of the responses.
https://proceedings.mlr.press/v202/bhatnagar23a.html
https://proceedings.mlr.press/v202/bhatnagar23a/bhatnagar23a.pdf
https://openreview.net/forum?id=qqMcym6AmS
Improved Online Conformal Prediction via Strongly Adaptive Online Learning
https://proceedings.mlr.press/v202/bhatnagar23a.html
Aadyot Bhatnagar, Huan Wang, Caiming Xiong, Yu Bai
https://proceedings.mlr.press/v202/bhatnagar23a.html
ICML 2023
We study the problem of uncertainty quantification via prediction sets, in an online setting where the data distribution may vary arbitrarily over time. Recent work develops online conformal prediction techniques that leverage regret minimization algorithms from the online learning literature to learn prediction sets with approximately valid coverage and small regret. However, standard regret minimization is insufficient for handling changing environments, where performance guarantees may be desired not only over the full time horizon but also in all (sub-)intervals of time. We develop new online conformal prediction methods that minimize the strongly adaptive regret, which measures the worst-case regret over all intervals of a fixed length. We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously, and approximately valid coverage. Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks such as time series forecasting and image classification under distribution shift.

ICML 2023 (International Conference on Machine Learning) Accepted Paper Meta Info Dataset

This dataset is collected from the ICML 2023 OpenReview website (https://openreview.net/group?id=ICML.cc/2023/Conference#tab-accept-oral) as well as the DeepNLP paper arxiv page (http://www.deepnlp.org/content/paper/icml2023). Researchers interested in analyzing ICML 2023 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted at ICML 2023. To explore more AI & Robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to navigate the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI Apps and Agents in your domain.
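
As a quick sketch of the workflow described above, the snippet below loads the cleaned-up JSON file with the Hugging Face datasets library and inspects one row. The file name icml2023_papers.json is only an illustrative placeholder, not the actual name of a file in this repository; replace it with the file you downloaded. The field names should match the schema shown in the next section.

from datasets import load_dataset

# "icml2023_papers.json" is a placeholder file name; point it at the cleaned-up
# JSON file provided with this dataset (a local path or a downloaded copy).
ds = load_dataset("json", data_files="icml2023_papers.json", split="train")

print(ds.column_names)   # should match the keys in the schema below
print(ds[0]["title"])    # title of the first accepted paper in the file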

Meta Information of the JSON File

{
    "abs": "https://proceedings.mlr.press/v202/aamand23a.html",
    "Download PDF": "https://proceedings.mlr.press/v202/aamand23a/aamand23a.pdf",
    "OpenReview": "https://openreview.net/forum?id=BVomXLJQoH",
    "title": "Data Structures for Density Estimation",
    "url": "https://proceedings.mlr.press/v202/aamand23a.html",
    "authors": "Anders Aamand, Alexandr Andoni, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal",
    "detail_url": "https://proceedings.mlr.press/v202/aamand23a.html",
    "tags": "ICML 2023",
    "abstract": "We study statistical/computational tradeoffs for the following density estimation problem: given $k$ distributions $v_1, \\ldots, v_k$ over a discrete domain of size $n$, and sampling access to a distribution $p$, identify $v_i$ that is \"close\" to $p$. Our main result is the first data structure that, given a sublinear (in $n$) number of samples from $p$, identifies $v_i$ in time sublinear in $k$. We also give an improved version of the algorithm of Acharya et al. (2018) that reports $v_i$ in time linear in $k$. The experimental evaluation of the latter algorithm shows that it achieves a significant reduction in the number of operations needed to achieve a given accuracy compared to prior work."
}
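
For readers who prefer the standard library, the following sketch parses the same file without extra dependencies and runs a small example analysis (most frequent title words), the kind of trend analysis mentioned above. The file name is again a placeholder, and the assumption that the file is either a single JSON array or JSON Lines is illustrative; adjust both to the actual layout of the released files.

import json
from collections import Counter

# "icml2023_papers.json" is a placeholder; use the actual cleaned-up file name.
with open("icml2023_papers.json", "r", encoding="utf-8") as f:
    text = f.read().strip()

if text.startswith("["):
    # The whole file is one JSON array of paper records.
    papers = json.loads(text)
else:
    # One JSON object per line (JSON Lines layout).
    papers = [json.loads(line) for line in text.splitlines() if line.strip()]

print(len(papers), "papers loaded")

# Rough trend analysis: most common words in paper titles, ignoring filler words.
stop = {"a", "an", "the", "of", "for", "and", "with", "on", "in", "to", "via"}
title_words = Counter(
    word
    for paper in papers
    for word in paper["title"].lower().split()
    if word not in stop
)
print(title_words.most_common(10))

Every row exposes the same keys as the example record above, so the same pattern extends directly to authors, abstracts, or OpenReview links.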

Related

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
