output (bool, 2 classes) | input (string, lengths 345–2.91k)
---|---
true |
Revisiting Auxiliary Latent Variables in Generative Models
variational inference monte carlo objectives VAE IWAE sampling contrastive predictive coding CPC noise contrastive estimation NCE auxiliary variable variational inference generative modeling energy-based models
Extending models with auxiliary latent variables is a well-known technique to increase model expressivity. Bachman & Precup (2015); Naesseth et al. (2018); Cremer et al. (2017); Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables. Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanov et al., 2018; Sobolev & Vetrov, 2018). The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model. We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement. Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation (Jozefowicz et al., 2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).
|
false |
Deep Learning Is Composite Kernel Learning
deep learning kernel methods
Recent works have connected deep learning and kernel methods. In this paper, we show that architectural choices such as convolutional layers with pooling and skip connections make deep learning a composite kernel learning method, where the kernel is an (architecture-dependent) composition of base kernels: even before training, standard deep networks have in-built structural properties that ensure their success. In particular, we build on the recently developed `neural path' framework that characterises the role of gates/masks in fully connected deep networks with ReLU activations.
|
false |
DFMTDS: Distribution-Free Martingale Test for Distribution Shift
Conformal prediction Martingales Distribution shift detection Distribution-free testing Transductive inference
A standard assumption in machine learning is that the data are generated by a fixed but unknown probability distribution, which is equivalent to assuming that the examples are generated independently from the same distribution. Accordingly, most learning research randomly shuffles the whole dataset into training and test sets. In real-life applications, however, the data points are observed one by one. This paper is devoted to testing the assumption of no distribution shift on-line: the observed data arrive one by one, and after receiving each object the machine learning algorithm gives a predicted label; we would like a valid measure of the degree of evidence against the assumption of no distribution shift. Such measures are provided within the framework of distribution-free martingale methods, a general empirical theory of probability developed in 1993-2003. We report the experimental performance of these martingale measures on real-life datasets; the results show that distribution-shift testing is an unavoidable concern when machine learning algorithms are applied to data in its original order.
|
false |
Toward Trustworthy Neural Program Synthesis
LLM code generation trustworthy AI
We develop an approach to estimate the probability that a program sampled from a large language model is correct. Given a natural language description of a programming problem, our method samples both candidate programs and candidate predicates specifying how the program should behave. This allows learning a model that forms a well-calibrated probabilistic prediction of program correctness. Our system also infers which predicates are useful for explaining the behavior of the generated code, and in a human study participants preferred these over raw language model outputs. Our method is simple, easy to implement, and maintains state-of-the-art generation accuracy.
|
false |
More Side Information, Better Pruning: Shared-Label Classification as a Case Study
Pruning Compression CNN LSTM Image classification
Pruning of neural networks, also known as compression or sparsification, is the task of converting a given network, which may be too expensive to use (for prediction) on low-resource platforms, into another 'lean' network that performs almost as well as the original one while using considerably fewer resources. By turning the compression-ratio knob, the practitioner can trade off information gain against the necessary computational resources, where information gain is a measure of the reduction of uncertainty in the prediction.
In certain cases, however, the practitioner may readily possess some information on the prediction from other sources. The main question we study here is whether it is possible to take advantage of this additional side information, in tandem with the pruning process, in order to further reduce the computational resources.
Motivated by a real-world application, we distill the following elegantly stated problem. We are given a multi-class prediction problem, combined with a (possibly pre-trained) network architecture for solving it on a given instance distribution, and a method for pruning the network to allow trading off prediction speed with accuracy. We assume the network and the pruning methods are state-of-the-art, and it is not our goal here to improve them. However, instead of being asked to predict a single drawn instance $x$, we are asked to predict the label of an $n$-tuple of instances $(x_1,\dots,x_n)$, with the additional side information that all tuple instances share the same label. The shared-label distribution is identical to the distribution on which the network was trained.
One trivial way to do this is by obtaining individual raw predictions for each of the $n$ instances (separately), using our given network, pruned for a desired accuracy, then taking the average to obtain a single more accurate prediction. This is simple to implement but intuitively sub-optimal, because the $n$ independent instantiations of the network do not share any information, and would probably waste resources on overlapping computation.
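A minimal sketch of this trivial averaging baseline, assuming a `pruned_predict` function that returns per-class probabilities for a single instance (the name and interface are illustrative, not from the paper):

```python
import numpy as np

def shared_label_predict(pruned_predict, instances):
    """Average the pruned network's class-probability vectors over the n instances
    that share a label, then return the arg-max class as the single shared prediction."""
    probs = np.stack([pruned_predict(x) for x in instances])  # shape (n, num_classes)
    return int(np.argmax(probs.mean(axis=0)))                 # fuse the n independent predictions
```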
We propose various methods for performing this task, and compare them using extensive experiments on public benchmark data sets for image classification. Our comparison is based on measures of relative information (RI) and $n$-accuracy, which we define. Interestingly, we empirically find that (i) sharing information between the $n$ independently computed hidden representations of $x_1,\dots,x_n$ using an LSTM-based gadget performs best among all methods we experiment with, and (ii) all methods studied exhibit a sweet-spot phenomenon, which sheds light on the compression-information trade-off and may assist a practitioner in choosing the desired compression ratio.
|
true |
Self-Ablating Transformers: More Interpretability, Less Sparsity
Mechanistic Interpretability Sparsity Language Models Transformer
A growing intuition in machine learning suggests a link between sparsity and interpretability. We introduce a novel self-ablation mechanism to investigate this connection ante-hoc in the context of language transformers. Our approach dynamically enforces a k-winner-takes-all constraint, forcing the model to demonstrate selective activation across neuron and attention units. Unlike post-hoc methods that analyze already-trained models, our approach integrates interpretability directly into model training, promoting feature localization from inception. Training small models on the TinyStories dataset and employing interpretability tests, we find that self-ablation leads to more localized circuits, concentrated feature representations, and increased neuron specialization without compromising language modelling performance. Surprisingly, our method also decreased overall sparsity, indicating that self-ablation promotes specialization rather than widespread inactivity. This reveals a complex interplay between sparsity and interpretability, where decreased global sparsity can coexist with increased local specialization, leading to enhanced interpretability. To facilitate reproducibility, we make our code available at https://github.com/keenanpepper/self-ablating-transformers.
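As a rough illustration of the k-winner-takes-all constraint described above (a hedged sketch; the exact placement of the mask over neuron and attention units in the paper may differ):

```python
import torch

def k_winner_takes_all(activations: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest activations along the last dimension and ablate (zero) the rest."""
    topk = torch.topk(activations, k, dim=-1)
    mask = torch.zeros_like(activations)
    mask.scatter_(-1, topk.indices, 1.0)  # 1 for winners, 0 for ablated units
    return activations * mask
```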
|
true |
Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions
Agentic AI; Scientific Discovery; Literature Review
The integration of Agentic AI into scientific discovery marks a new frontier in research automation. These AI systems, capable of reasoning, planning, and autonomous decision-making, are transforming how scientists perform literature review, generate hypotheses, conduct experiments, and analyze results. This survey provides a comprehensive overview of Agentic AI for scientific discovery, categorizing existing systems and tools, and highlighting recent progress across fields such as chemistry, biology, and materials science. We discuss key evaluation metrics, implementation frameworks, and commonly used datasets to offer a detailed understanding of the current state of the field. Finally, we address critical challenges, such as literature review automation, system reliability, and ethical concerns, while outlining future research directions that emphasize human-AI collaboration and enhanced system calibration.
|
false |
Contrastive Code Representation Learning
programming languages representation learning contrastive learning unsupervised learning self-supervised learning transfer learning nlp pretraining type inference summarization
Machine-aided programming tools such as automated type predictors and autocomplete are increasingly learning-based. However, current approaches predominantly rely on supervised learning with task-specific datasets. We propose Contrastive Code Representation Learning (ContraCode), a self-supervised algorithm for learning task-agnostic semantic representations of programs via contrastive learning. Our approach uses no human-provided labels, only the raw text of programs. ContraCode optimizes for a representation that is invariant to semantic-preserving code transformations. We develop an automated source-to-source compiler that generates textually divergent variants of source programs. We then train a neural network to identify variants of anchor programs within a large batch of non-equivalent negatives. To solve this task, the network must extract features representing the functionality, not form, of the program. In experiments, we pre-train ContraCode with 1.8M unannotated JavaScript methods mined from GitHub, then transfer to downstream tasks by fine-tuning. Pre-training with ContraCode consistently improves the F1 score of code summarization baselines and top-1 accuracy of type inference baselines by 2% to 13%. ContraCode achieves 9% higher top-1 accuracy than the current state-of-the-art static type analyzer for TypeScript. Finally, representations learned through a hybrid contrastive and reconstruction objective transfer in zero-shot to code clone detection with +10% AUROC over a static text similarity measure and +5% over reconstruction alone.
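A minimal InfoNCE-style sketch of the batch contrastive objective described above, assuming anchor and variant embeddings of shape (batch, dim); the encoder is omitted and the exact loss used in the paper may differ:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb, variant_emb, temperature=0.07):
    """Each anchor program must identify its semantically equivalent variant among
    the other (non-equivalent) programs in the batch."""
    a = F.normalize(anchor_emb, dim=-1)
    v = F.normalize(variant_emb, dim=-1)
    logits = a @ v.t() / temperature                    # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)  # the i-th variant is the positive
    return F.cross_entropy(logits, targets)
```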
|
false |
R-MONet: Region-Based Unsupervised Scene Decomposition and Representation via Consistency of Object Representations
unsupervised representation learning unsupervised scene representation unsupervised scene decomposition generative models
Decomposing a complex scene into multiple objects is a natural instinct of an intelligent vision system. Recently, interest in unsupervised scene representation learning has emerged, and many previous works tackle it by decomposing scenes into object representations either in the form of segmentation masks or of position and scale latent variables (i.e. bounding boxes). We observe that these two types of representation both contain object geometric information and should be consistent with each other. Inspired by this observation, we provide an unsupervised generative framework called R-MONet that can generate objects' geometric representations in the form of bounding boxes and segmentation masks simultaneously. While bounding boxes can represent the region of interest (ROI) for generating foreground segmentation masks, the foreground segmentation masks can also be used to supervise bounding-box learning via the Multi-Otsu thresholding method. Through experiments on the CLEVR and Multi-dSprites datasets, we show that ensuring the consistency of the two types of representation helps the model decompose the scene and learn better object geometric representations.
|
true |
The geometry of integration in text classification RNNs
Recurrent neural networks dynamical systems interpretability document classification reverse engineering
Despite the widespread application of recurrent neural networks (RNNs), a unified understanding of how RNNs solve particular tasks remains elusive. In particular, it is unclear what dynamical patterns arise in trained RNNs, and how those patterns depend on the training dataset or task. This work addresses these questions in the context of text classification, building on earlier work studying the dynamics of binary sentiment-classification networks (Maheswaranathan et al., 2019). We study text-classification tasks beyond the binary case, exploring the dynamics of RNNs trained on both natural and synthetic datasets. These dynamics, which we find to be both interpretable and low-dimensional, share a common mechanism across architectures and datasets: specifically, these text-classification networks use low-dimensional attractor manifolds to accumulate evidence for each class as they process the text. The dimensionality and geometry of the attractor manifold are determined by the structure of the training dataset, with the dimensionality reflecting the number of scalar quantities the network remembers in order to classify. In categorical classification, for example, we show that this dimensionality is one less than the number of classes. Correlations in the dataset, such as those induced by ordering, can further reduce the dimensionality of the attractor manifold; we show how to predict this reduction using simple word-count statistics computed on the training dataset. To the degree that integration of evidence towards a decision is a common computational primitive, this work continues to lay the foundation for using dynamical systems techniques to study the inner workings of RNNs.
|
false |
Distantly supervised end-to-end medical entity extraction from electronic health records with human-level quality
entity extraction medical entity extraction named entity recognition named entity normalization electronic health records unsupervised learning distant supervision.
Medical entity extraction (EE) is a standard procedure used as a first stage in medical text processing. Usually, medical EE is a two-step process: named entity recognition (NER) and named entity normalization (NEN). We propose a novel method of doing medical EE from electronic health records (EHR) as a single-step multi-label classification task by fine-tuning a transformer model pretrained on a large EHR dataset. Our model is trained end-to-end in a distantly supervised manner using targets automatically extracted from a medical knowledge base. We show that our model learns to generalize for entities that are present frequently enough, achieving human-level classification quality for the most frequent entities. Our work demonstrates that medical entity extraction can be done end-to-end without human supervision and with human quality, given the availability of a large enough amount of unlabeled EHR and a medical knowledge base.
|
false |
Decentralized Deterministic Multi-Agent Reinforcement Learning
multiagent reinforcement learning MARL decentralized actor-critic algorithm
Recent work in multi-agent reinforcement learning (MARL) by [Zhang, ICML 2018] provided the first decentralized actor-critic algorithm to offer convergence guarantees. In that work, policies are stochastic and are defined on finite action spaces. We extend those results to develop a provably-convergent decentralized actor-critic algorithm for learning deterministic policies on continuous action spaces. Deterministic policies are important in many real-world settings. To handle the lack of exploration inherent in deterministic policies we provide results for the off-policy setting as well as the on-policy setting. We provide the main ingredients needed for this problem: the expression of a local deterministic policy gradient, a decentralized deterministic actor-critic algorithm, and convergence guarantees when the value functions are approximated linearly. This work enables decentralized MARL in high-dimensional action spaces and paves the way for more widespread application of MARL.
|
true |
CoCon: A Self-Supervised Approach for Controlled Text Generation
Language modeling text generation controlled generation self-supervised learning
Pretrained Transformer-based language models (LMs) display remarkable natural language generation capabilities. With their immense potential, controlling text generation of such LMs is getting attention. While there are studies that seek to control high-level attributes (such as sentiment and topic) of generated text, there is still a lack of more precise control over its content at the word- and phrase-level. Here, we propose Content-Conditioner (CoCon) to control an LM's output text with a content input, at a fine-grained level. In our self-supervised approach, the CoCon block learns to help the LM complete a partially-observed text sequence by conditioning with content inputs that are withheld from the LM. Through experiments, we show that CoCon can naturally incorporate target content into generated texts and control high-level text attributes in a zero-shot manner.
|
true |
Deep Generative Models for Generating Labeled Graphs
deep generative models generative adversarial networks data graphs labeled graphs new way generative models gans considerable success
As a new way to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation, and this framework has also recently been applied to data with graph structures. We identify the drawbacks of existing deep frameworks for generating graphs, and we propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach with different discriminative models as well as different GAN frameworks on various types of graph datasets, such as collections of citation networks and protein graphs. Experiment results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all baselines in terms of quality, generality, and scalability.
|
false |
Precondition Layer and Its Use for GANs
GAN Preconditioning Condition Number
One of the major challenges when training generative adversarial nets (GANs) is instability. To address this instability, spectral normalization (SN) has been remarkably successful. However, SN-GAN still suffers from training instabilities, especially when working with higher-dimensional data. We find that those instabilities are accompanied by large condition numbers of the discriminator weight matrices. To improve training stability we draw on common linear-algebra practice and employ preconditioning. Specifically, we introduce a preconditioning layer (PC-layer) that performs a low-degree polynomial preconditioning. We use this PC-layer in two ways: 1) fixed preconditioning (FPC) adds a fixed PC-layer to all layers, and 2) adaptive preconditioning (APC) adaptively controls the strength of preconditioning. Empirically, we show that FPC and APC stabilize the training of unconditional GANs using classical architectures. On LSUN 256×256 data, APC improves FID scores by around 5 points over baselines.
|
true |
Adaptive Test-Time Intervention for Concept Bottleneck Models
interpretable machine learning distillation test-time intervention
Concept bottleneck models (CBM) aim to improve model interpretability by predicting human-level "concepts" in a bottleneck within a deep learning model architecture. However, how the predicted concepts are used to predict the target either remains black-box or is simplified to maintain interpretability at the cost of prediction performance. We propose to use Fast Interpretable Greedy Sum-Trees (FIGS) to obtain Binary Distillation (BD). This new method, called FIGS-BD, distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model, while maintaining the competitive prediction performance of the CBM teacher. FIGS-BD can be used in downstream tasks to explain and decompose CBM predictions into interpretable binary-concept-interaction attributions and to guide adaptive test-time intervention. Across $4$ datasets, we demonstrate that our adaptive test-time intervention identifies key concepts that significantly improve performance in realistic human-in-the-loop settings that allow only limited concept interventions.
|
false |
R-LAtte: Attention Module for Visual Control via Reinforcement Learning
attention module visual control reinforcement attention mechanisms reinforcement learning generic inductive biases critical role supervised learning unsupervised generative modeling
Attention mechanisms are generic inductive biases that have played a critical role in improving the state-of-the-art in supervised learning, unsupervised pre-training and generative modeling for multiple domains including vision, language and speech. However, they remain relatively under-explored for neural network architectures typically used in reinforcement learning (RL) from high dimensional inputs such as pixels. In this paper, we propose and study the effectiveness of augmenting a simple attention module in the convolutional encoder of an RL agent. Through experiments on the widely benchmarked DeepMind Control Suite environments, we demonstrate that our proposed module can (i) extract interpretable task-relevant information such as agent locations and movements without the need for data augmentations or contrastive losses; (ii) significantly improve the sample-efficiency and final performance of the agents. We hope our simple and effective approach will serve as a strong baseline for future research incorporating attention mechanisms in reinforcement learning and control.
|
true |
FastVPINNs: A fast, versatile and robust Variational PINNs framework for forward and inverse problems in science
Physics informed neural networks Domain decomposition Forward modelling Inverse modelling Scalable PINNs Variational Physics informed neural networks Petrov-Galerkin formulation hp-refinement
Variational physics-informed neural networks (VPINNs), with h- and p-refinement, show promise over conventional PINNs. However, existing frameworks are computationally inefficient and unable to deal with complex meshes. As such, VPINNs have had limited application to practical problems in science and engineering. In the present work, we propose a novel VPINNs framework that achieves up to a 100x speed-up over state-of-the-art codes. We demonstrate the flexibility of this framework by solving different forward and inverse problems on complex geometries, and by applying VPINNs to vector-valued partial differential equations.
|
false |
Driving through the Lens: Improving Generalization of Learning-based Steering using Simulated Adversarial Examples
model robustness data augmentation adversarial training image quality autonomous driving benchmark
To ensure the wide adoption and safety of autonomous driving, the vehicles need to be able to drive under various lighting, weather, and visibility conditions in different environments. These external and environmental factors, along with internal factors associated with sensors, can pose significant challenges to perceptual data processing, hence affecting the decision-making of the vehicle. In this work, we address this critical issue by analyzing the sensitivity of the learning algorithm with respect to varying quality in the image input for autonomous driving. Using the results of sensitivity analysis, we further propose an algorithm to improve the overall performance of the task of ``learning to steer''. The results show that our approach is able to enhance the learning outcomes up to 48%. A comparative study drawn between our approach and other related techniques, such as data augmentation and adversarial training, confirms the effectiveness of our algorithm as a way to improve the robustness and generalization of neural network training for self-driving cars.
|
false |
Partial Rejection Control for Robust Variational Inference in Sequential Latent Variable Models
dice enterprise partial rejection control sequential Monte-Carlo Bernoulli factory variational Inference Rejection Sampling
Effective variational inference crucially depends on a flexible variational family of distributions. Recent work has explored sequential Monte-Carlo (SMC) methods to construct variational distributions, which can, in principle, approximate the target posterior arbitrarily well; this is especially appealing for models with inherent sequential structure. However, SMC, which represents the posterior using a weighted set of particles, often suffers from particle weight degeneracy, leading to a large variance of the resulting estimators. To address this issue, we present a novel approach that leverages the idea of \emph{partial} rejection control (PRC) to develop a robust variational inference (VI) framework. In addition to developing a superior VI bound, we propose a novel marginal likelihood estimator constructed via a dice enterprise: a generalization of the Bernoulli factory used to construct unbiased estimators for SMC-PRC. The resulting variational lower bound can be optimized efficiently with respect to the variational parameters and generalizes several existing approaches in the VI literature into a single framework. We show theoretical properties of the lower bound and report experiments on various sequential models, such as the Gaussian state-space model and the variational RNN, on which our approach outperforms existing methods.
|
true |
Learning Stochastic Dynamics from Data
Stochastic Dynamics Random Noise System Identification
We present a noise-guided, trajectory-based system identification method for inferring the dynamical structure from observations generated by stochastic differential equations. Our method can handle various kinds of noise, including the case where the components of the noise are correlated. It can also learn both the noise level and the drift term jointly from trajectories. We present various numerical tests showcasing the superior performance of our learning algorithm.
|
true |
Data-driven Multi-Fidelity Modelling for Time-dependent Partial Differential Equations using Convolutional Neural Networks
CNN Convolutional neural network multi fidelity partial differential equation pde finite difference method fdm scientific machine learning numerical analysis
We present a general multi-fidelity (MF) framework that uses convolutional neural networks (CNNs) with flexible-order explicit finite-difference numerical schemes, combining low-order simulation data with higher-order simulation data obtained from numerical simulations of partial differential equations (PDEs). This improves the performance of low-order numerical simulations by learning from data how to correct the numerical schemes to achieve higher accuracy. Through the lens of numerical analysis we evaluate the accuracy, efficiency and generalizability of the constructed data-driven MF models. To illustrate the concept, the construction of the MF models uses CNNs and is evaluated for numerical schemes designed for solving linear PDEs: the heat equation, the linear advection equation, and the linearized 1D shallow-water equations. The numerical schemes allow for a high level of explainability of the data-driven correction terms obtained via CNNs through numerical analysis of truncation errors. It is demonstrated that data-driven MF models are a means to improve the accuracy of low-fidelity (LF) models through operator correction.
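An illustrative sketch of the correction idea, assuming 1D grid snapshots from a low-order and a high-order solver; the architecture and training details are assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

corrector = nn.Sequential(                       # small CNN that learns a correction term
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=5, padding=2),
)
opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)

def train_step(u_low, u_high):
    """u_low, u_high: (batch, 1, n_grid) fields from the low- and high-order schemes."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(u_low + corrector(u_low), u_high)  # MF = LF + learned correction
    loss.backward()
    opt.step()
    return loss.item()
```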
|
false |
Adversarial Deep Metric Learning
Deep metric learning adversarial robustness adversarial examples adversarial perturbations adversarial training
Learning a distance metric between pairs of examples is important for a wide range of tasks. Deep Metric Learning (DML) utilizes deep neural network architectures to learn semantic feature embeddings where the distance between similar examples is small and the distance between dissimilar examples is large. While the underlying neural networks produce good accuracy on naturally occurring samples, they are vulnerable to adversarially perturbed samples that reduce their accuracy. To create robust versions of DML models, we introduce a robust training approach. A key challenge is that metric losses are not independent --- they depend on all samples in a mini-batch. This sensitivity to samples, if not accounted for, can lead to incorrect robust training. To the best of our knowledge, we are the first to systematically analyze this dependence effect and propose a principled approach for robust training of deep metric learning networks that accounts for the nuances of metric losses. Using experiments on three popular datasets in metric learning, we demonstrate that DML models trained using our techniques display robustness against strong iterative attacks while their performance on unperturbed (natural) samples remains largely unaffected.
|
false |
Learning Weighted Representations for Generalization Across Designs
Distributional shift causal effects domain adaptation
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify its effectiveness in causal effect estimation.
|
true |
Multigrid-Augmented Deep Learning Preconditioners for the Helmholtz Equation using Compact Implicit Layers
Helmholtz equation shifted Laplacian multigrid iterative methods deep learning convolutional neural networks implicit methods Lippmann-Schwinger equation
We present a deep learning-based iterative approach to solve the discrete heterogeneous Helmholtz equation for high wavenumbers.
Combining classical iterative multigrid solvers and neural networks via preconditioning, we obtain a faster, learned neural solver that scales better than a standard multigrid solver.
We construct a multilevel U-Net-like encoder-solver CNN with an implicit layer on the coarsest level, where convolution kernels are inverted.
This alleviates the field of view problem in CNNs and allows better scalability.
Furthermore, we propose a multiscale training approach that enables scaling to problems of previously unseen dimensions while still maintaining a reasonable training procedure.
|
false |
WordsWorth Scores for Attacking CNNs and LSTMs for Text Classification
cnns lstms scores word importance wordsworth scores text single words
Black-box attacks on traditional deep learning models trained for text classification target important words in a piece of text in order to change the model prediction. Current approaches to highlighting important features are time-consuming and require a large number of model queries. We present a simple yet novel method to calculate word importance scores, based on model predictions on single words. These scores, which we call WordsWorth scores, need to be calculated only once for the training vocabulary. They can be used to speed up any attack method that requires word importance, with negligible loss of attack performance. We run experiments on a number of datasets trained on word-level CNNs and LSTMs, for sentiment analysis and topic classification, and compare to state-of-the-art baselines. Our results show the effectiveness of our method in attacking these models with success rates that are close to the original baselines. We argue that global importance scores act as a very good proxy for word importance in a local context because words are a highly informative form of data. This aligns with the manner in which humans interpret language, with individual words having well-defined meanings and powerful connotations. We further show that these scores can be used as a debugging tool to interpret a trained model by highlighting relevant words for each class. Additionally, we demonstrate the effect of overtraining on word importance, compare the robustness of CNNs and LSTMs, and explain the transferability of adversarial examples across a CNN and an LSTM using these scores. We highlight the fact that neural networks make highly informative predictions on single words.
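A sketch of the single-word scoring idea, assuming a `predict_proba` interface that maps a text to a class-probability vector (illustrative, not the authors' code):

```python
def wordsworth_scores(predict_proba, vocabulary, target_class):
    """Score each vocabulary word by the model's prediction on that word alone;
    higher scores mark words treated as more important for the target class."""
    return {w: float(predict_proba(w)[target_class]) for w in vocabulary}

# Computed once over the training vocabulary, the scores can then rank words
# for attacks or for debugging a trained classifier.
```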
|
false |
Learning to Solve Multi-Robot Task Allocation with a Covariant-Attention based Neural Architecture
Graph neural network Attention mechanism Reinforcement learning Multi-robotic task allocation
This paper presents a new graph neural network architecture over which reinforcement learning can be performed to yield online policies for an important class of multi-robot task allocation (MRTA) problems, one that involves tasks with deadlines, and robots with ferry range and payload constraints and multi-tour capability. While drawing motivation from recent graph learning methods that learn to solve combinatorial optimization problems of the mTSP/VRP type, this paper seeks to provide better convergence and generalizability specifically for MRTA problems. The proposed neural architecture, called Covariant Attention-based Model or CAM, includes three main components: 1) an encoder: a covariant compositional node-based embedding is used to represent each task as a learnable feature vector in a manner that preserves the local structure of the task graph while being invariant to the ordering of graph nodes; 2) context: a vector representation of the mission time and the state of the concerned robot and its peers; and 3) a decoder: builds upon the attention mechanism to facilitate a sequential output. In order to train the CAM model, a policy-gradient method based on REINFORCE is used. While the new architecture can solve the broad class of MRTA problems stated above, to demonstrate real-world applicability we use a multi-unmanned aerial vehicle (multi-UAV) flood response problem for evaluation purposes. For comparison, the well-known attention-based approach (designed to solve mTSP/VRP problems) is extended and applied to the MRTA problem as a baseline. The results show that the proposed CAM method is not only superior to the baseline AM method in terms of the cost function (over training and unseen test scenarios), but also provides significantly faster convergence and yields learned policies that can be executed within 2.4 ms/robot, thereby allowing real-time application.
|
false |
Secure Federated Learning of User Verification Models
Federated learning User verification models
We consider the problem of training User Verification (UV) models in a federated setup, where the conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and user embeddings cannot be shared with the server or other users. To address this problem, we propose Federated User Verification (FedUV), a framework for private and secure training of UV models. In FedUV, users jointly learn a set of vectors and maximize the correlation of their instance embeddings with a secret user-defined linear combination of those vectors. We show that choosing the linear combinations from the codewords of an error-correcting code allows users to collaboratively train the model without revealing their embedding vectors. We present the experimental results for user verification with voice, face, and handwriting data and show that FedUV is on par with existing approaches, while not sharing the embeddings with other users or the server.
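A hedged sketch of the per-user training signal described above; the exact loss, codeword construction, and tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def feduv_user_loss(embeddings, shared_vectors, secret_combination):
    """embeddings: (batch, dim) instance embeddings of one user;
    shared_vectors: (num_vectors, dim) jointly learned vectors;
    secret_combination: (num_vectors,) secret codeword coefficients of this user.
    Maximize the correlation of the embeddings with the user's secret direction."""
    target = secret_combination @ shared_vectors                      # (dim,) secret direction
    corr = F.cosine_similarity(embeddings, target.unsqueeze(0), dim=-1)
    return -corr.mean()                                               # negative correlation to minimize
```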
|
true |
MARS: Markov Molecular Sampling for Multi-objective Drug Discovery
drug discovery molecular graph generation MCMC sampling
Searching for novel molecules with desired chemical properties is crucial in drug discovery. Existing work focuses on developing neural models to generate either molecular sequences or chemical graphs. However, it remains a big challenge to find novel and diverse compounds satisfying several properties. In this paper, we propose MARS, a method for multi-objective drug molecule discovery. MARS is based on the idea of generating the chemical candidates by iteratively editing fragments of molecular graphs. To search for high-quality candidates, it employs Markov chain Monte Carlo sampling (MCMC) on molecules with an annealing scheme and an adaptive proposal. To further improve sample efficiency, MARS uses a graph neural network (GNN) to represent and select candidate edits, where the GNN is trained on-the-fly with samples from MCMC. Experiments show that MARS achieves state-of-the-art performance in various multi-objective settings where molecular bio-activity, drug-likeness, and synthesizability are considered. Remarkably, in the most challenging setting where all four objectives are simultaneously optimized, our approach outperforms previous methods significantly in comprehensive evaluations. The code is available at https://github.com/yutxie/mars.
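A generic Metropolis-Hastings acceptance step with an annealing temperature, illustrating the sampling loop only at a high level; the actual MARS proposal is a GNN-selected fragment edit of the molecular graph and is omitted here:

```python
import math
import random

def mh_accept(score_new, score_old, temperature):
    """Accept a proposed molecule edit with the usual Metropolis criterion, where
    `score` aggregates the property objectives and `temperature` is annealed over time."""
    accept_prob = min(1.0, math.exp((score_new - score_old) / max(temperature, 1e-8)))
    return random.random() < accept_prob
```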
|
true |
Looking at the Performer from a Hopfield Point of View
deep learning hopfield networks associative memory attention transformer
The recent paper Rethinking Attention with Performers constructs a new efficient attention mechanism in an elegant way. It strongly reduces the computational cost for long sequences, while keeping the intriguing properties of the original attention mechanism. In doing so, Performers have a complexity only linear in the input length, in contrast to the quadratic complexity of standard Transformers. This is a major breakthrough in the effort to improve Transformer models. In this blog post, we look at the Performer from a Hopfield Network point of view and relate aspects of the Performer architecture to findings in the field of associative memories and Hopfield Networks. This blog post sheds light on the Performer from three different directions: (i) Performers resemble classical Hopfield Networks, (ii) sparseness increases memory capacity, and (iii) Performer normalization relates to the activation function of continuous Hopfield Networks.
|
true |
Efficient Fourier Neural Operators by Group Convolution and Channel Shuffling
Fourier neural operator Group convolution
Fourier neural operators (FNOs) have emerged as data-driven alternatives to conventional numerical simulators for solving partial differential equations (PDEs). However, these models typically require a substantial number of learnable parameters. In this study, we explore parameter-efficient FNO architectures through modifications of their width and depth, and through the application of group convolution and channel shuffling. We benchmark on different problems, learning the operator of Maxwell's equations and of the Darcy flow equation. Our approach leads to significant improvements in prediction accuracy for both small and large FNO models. The proposed methods are widely adaptable across various problem types and neural operator architectures, aiming to boost prediction accuracy.
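For reference, the standard channel-shuffle operation used alongside group convolutions (a generic building block; how it is placed inside the FNO layers is not specified here):

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so information mixes between group convolutions."""
    b, c, *spatial = x.shape
    x = x.view(b, groups, c // groups, *spatial)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, *spatial)
```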
|
true |
Contrastive Explanations for Reinforcement Learning via Embedded Self Predictions
Explainable AI Deep Reinforcement Learning
We investigate a deep reinforcement learning (RL) architecture that supports explaining why a learned agent prefers one action over another. The key idea is to learn action-values that are directly represented via human-understandable properties of expected futures. This is realized via the embedded self-prediction (ESP) model, which learns said properties in terms of human provided features. Action preferences can then be explained by contrasting the future properties predicted for each action. To address cases where there are a large number of features, we develop a novel method for computing minimal sufficient explanations from an ESP. Our case studies in three domains, including a complex strategy game, show that ESP models can be effectively learned and support insightful explanations.
|
false |
AReLU: Attention-Based Rectified Linear Unit
activation function attention mechanism rectified linear unit
Element-wise activation functions play a critical role in deep neural networks by affecting the expressive power and the learning dynamics. Learning-based activation functions have recently gained increasing attention and success. We propose a new perspective on learnable activation functions by formulating them with an element-wise attention mechanism. In each network layer, we devise an attention module which learns an element-wise, sign-based attention map for the pre-activation feature map. The attention map scales an element based on its sign. Combining the attention module with a rectified linear unit (ReLU) results in an amplification of positive elements and a suppression of negative ones, both with learned, data-adaptive parameters. We coin the resulting activation function Attention-based Rectified Linear Unit (AReLU). The attention module essentially learns an element-wise residue of the activated part of the input, as ReLU can be viewed as an identity transformation. This makes network training more resistant to gradient vanishing. The learned attentive activation leads to well-focused activation of relevant regions of a feature map. Through extensive evaluations, we show that AReLU significantly boosts the performance of most mainstream network architectures with only two extra learnable parameters per layer. Notably, AReLU facilitates fast network training under small learning rates, which makes it especially suited to transfer learning and meta-learning.
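A hedged sketch consistent with the description above: two learnable scalars per layer, one amplifying positive pre-activations and one (kept in (0, 1)) suppressing negative ones; the exact parameterization of AReLU in the paper may differ:

```python
import torch
import torch.nn as nn

class SignScaledActivation(nn.Module):
    """Illustrative element-wise, sign-based attention applied on top of ReLU."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.9))  # learned suppression of negative elements
        self.beta = nn.Parameter(torch.tensor(1.0))   # learned amplification of positive elements

    def forward(self, x):
        pos = (1.0 + torch.sigmoid(self.beta)) * torch.relu(x)
        neg = torch.clamp(self.alpha, 0.01, 0.99) * torch.clamp(x, max=0.0)
        return pos + neg
```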
|
true |
Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose?
model-based reinforcement learning generative models mixture density nets dynamic systems heteroscedasticity
We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models using a fixed (random shooting) control agent. We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin. When multimodality is not required, our surprising finding is that we do not need probabilistic posterior predictives: deterministic models are on par; in fact, they consistently (although non-significantly) outperform their probabilistic counterparts. We also find that heteroscedasticity at training time, perhaps acting as a regularizer, improves predictions at longer horizons. On the methodological side, we design metrics and an experimental protocol which can be used to evaluate the various models, predicting their asymptotic performance when using them on the control problem. Using this framework, we improve the state-of-the-art sample complexity of MBRL on Acrobot by a factor of two to four, using an aggressive training schedule which is outside of the hyperparameter interval usually considered.
|
true |
Smoothing Nonlinear Variational Objectives with Sequential Monte Carlo
sequential monte carlo variational inference time series
The task of recovering nonlinear dynamics and latent structure from a population recording is a challenging problem in statistical neuroscience motivating the development of novel techniques in time series analysis. Recent work has focused on connections between Variational Inference and Sequential Monte Carlo for performing inference and parameter estimation on sequential data. Inspired by this work, we present a framework to develop Smoothed Variational Objectives (SVOs) that condition proposal distributions on the full time-ordered sequence of observations. SVO maintains both expressiveness and tractability by sharing parameters of the transition function between the proposal and target. We apply the method to several dimensionality reduction/expansion tasks and examine the dynamics learned with a quantitative metric. SVO performs favorably against the state of the art.
|
false |
Game-Theoretic Multi-Agent Collaboration for AI-Driven Scientific Discovery
Multi-Agent Systems Game Theory Scientific AI Nash Equilibrium Cooperative Bargaining AI for Science HPC Scheduling AI Collaboration
This paper introduces a game-theoretic multi-agent AI framework where autonomous AI agents negotiate and refine hypotheses in either a cooperative or competitive scientific environment. By leveraging tools from Nash equilibrium analysis and cooperative game theory, agents can independently validate scientific hypotheses, manage shared computational resources, and optimize discovery pathways.
Experimental results in climate modeling, astrophysics, and biomedical research show that this agentic AI approach significantly accelerates scientific exploration while providing robust conflict resolution among heterogeneous domain tasks. Our findings highlight both the theoretical foundations of multi-agent negotiation for scientific hypothesis generation and the practical potential to transform decentralized scientific collaborations.
|
false |
Contrastive Self-Supervised Learning of Global-Local Audio-Visual Representations
Contrastive learning self-supervised learning video representation learning audio-visual representation learning multimodal representation learning
Contrastive self-supervised learning has delivered impressive results in many audio-visual recognition tasks. However, existing approaches optimize for learning either global representations useful for high-level understanding tasks such as classification, or local representations useful for tasks such as audio-visual source localization and separation. While they produce satisfactory results in their intended downstream scenarios, they often fail to generalize to tasks that they were not originally designed for. In this work, we propose a versatile self-supervised approach to learn audio-visual representations that can generalize to both the tasks which require global semantic information (e.g., classification) and the tasks that require fine-grained spatio-temporal information (e.g. localization). We achieve this by optimizing two cross-modal contrastive objectives that together encourage our model to learn discriminative global-local visual information given audio signals. To show that our approach learns generalizable video representations, we evaluate it on various downstream scenarios including action/sound classification, lip reading, deepfake detection, and sound source localization.
|
true |
How Does Entropy Influence Modern Text-to-SQL Systems?
Text-to-SQL Entropy Uncertainty Quantification
In the field of text-to-SQL candidate generation, a critical challenge remains in quantifying and assessing confidence in the generated SQL queries. Existing approaches often rely on large language models (LLMs) that function as opaque processing units, producing outputs for every input without a mechanism to measure their confidence. Current uncertainty quantification techniques for LLMs do not incorporate domain-specific information. In this study, we introduce the concept of query entropy for text-to-SQL candidate confidence estimation and integrate it into popular existing self-correction pipelines to guide generation and prevent resource overuse, including a novel entropy-based clustering technique for generated SQL candidates. We further study how different candidate generation techniques behave under this paradigm.
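An illustrative sketch of the entropy signal (not the paper's exact clustering procedure): candidates are grouped by an assumed canonical form and the Shannon entropy of the cluster frequencies is used as a confidence measure:

```python
import math
from collections import Counter

def query_entropy(candidate_sqls, canonicalize=lambda s: " ".join(s.lower().split())):
    """Lower entropy means the sampled SQL candidates agree, signalling higher confidence;
    higher entropy can trigger self-correction or further sampling."""
    counts = Counter(canonicalize(q) for q in candidate_sqls)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```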
|
true |
Approximating Family of Steep Traveling Wave Solutions to Fisher's Equation with PINNs
Physics-informed neural networks Fisher's equation steep traveling wave solutions residual weighting
In this paper, we adapt Physics-Informed Neural Networks (PINNs) to solve Fisher's equation with solutions characterized by steep traveling wave fronts.
We introduce a residual weighting scheme that is based on the underlying reaction dynamics and helps in tracking the propagating wave fronts.
Furthermore, we explore a network architecture tailored for solutions in the form of traveling waves.
Lastly, we assess the capacity of PINNs to approximate an entire family of traveling wave solutions by incorporating the reaction rate coefficient as an additional input to the network architecture.
|
false |
Better sampling in explanation methods can prevent dieselgate-like deception
Explainable AI explanation methods robust explanations
Machine learning models are used in many sensitive areas where, besides predictive accuracy, their comprehensibility is also important. Interpretability of prediction models is necessary to determine their biases and causes of errors, and is a necessary prerequisite for users' confidence. For complex state-of-the-art black-box models, post-hoc model-independent explanation techniques are an established solution. Popular and effective techniques, such as IME, LIME, and SHAP, use perturbation of instance features to explain individual predictions. Recently, Slack et al. (2020) put their robustness into question by showing that their outcomes can be manipulated due to the poor perturbation sampling they employ. This weakness would allow dieselgate-type cheating by owners of sensitive models, who could deceive inspection and hide potentially unethical or illegal biases existing in their predictive models. This could undermine public trust in machine learning models and give rise to legal restrictions on their use.
We show that better sampling in these explanation methods prevents malicious manipulations. The proposed sampling uses data generators that learn the training-set distribution and generate new perturbation instances much more similar to the training set. We show that the improved sampling increases the robustness of LIME and SHAP, while the previously untested IME is already the most robust of all.
|
true |
Predicting Infectiousness for Proactive Contact Tracing
covid-19 contact tracing distributed inference set transformer deepset epidemiology applications domain randomization retraining simulation
The COVID-19 pandemic has spread rapidly worldwide, overwhelming manual contact tracing in many countries and resulting in widespread lockdowns for emergency containment. Large-scale digital contact tracing (DCT) has emerged as a potential solution to resume economic and social activity while minimizing spread of the virus. Various DCT methods have been proposed, each making trade-offs between privacy, mobility restrictions, and public health. The most common approach, binary contact tracing (BCT), models infection as a binary event, informed only by an individual’s test results, with corresponding binary recommendations that either all or none of the individual’s contacts quarantine. BCT ignores the inherent uncertainty in contacts and the infection process, which could be used to tailor messaging to high-risk individuals, and prompt proactive testing or earlier warnings. It also does not make use of observations such as symptoms or pre-existing medical conditions, which could be used to make more accurate infectiousness predictions. In this paper, we use a recently-proposed COVID-19 epidemiological simulator to develop and test methods that can be deployed to a smartphone to locally and proactively predict an individual’s infectiousness (risk of infecting others) based on their contact history and other information, while respecting strong privacy constraints. Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual’s contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT). Similarly to other works, we find that compared to no tracing, all DCT methods tested are able to reduce spread of the disease and thus save lives, even at low adoption rates, strongly supporting a role for DCT methods in managing the pandemic. Further, we find a deep-learning based PCT method which improves over BCT for equivalent average mobility, suggesting PCT could help in safe re-opening and second-wave prevention.
|
false |
Attacking Few-Shot Classifiers with Adversarial Support Sets
meta-learning few-shot learning adversarial attacks poisoning
Few-shot learning systems, especially those based on meta-learning, have recently made significant advances, and are now being considered for real world problems in healthcare, personalization, and science. In this paper, we examine the robustness of such deployed few-shot learning systems when they are fed an imperceptibly perturbed few-shot dataset, showing that the resulting predictions on test inputs can become worse than chance. This is achieved by developing a novel Adversarial Support Set Attack which crafts a poisoned set of examples. When even a small subset of malicious data points is inserted into the support set of a meta-learner, accuracy is significantly reduced. For example, the average classification accuracy of CNAPs on the Aircraft dataset in the META-DATASET benchmark drops from 69.2% to 9.1% when only 20% of the support set is poisoned by imperceptible perturbations. We evaluate the new attack on a variety of few-shot classification algorithms including MAML, prototypical networks, and CNAPs, on both small scale (miniImageNet) and large scale (META-DATASET) few-shot classification problems. Interestingly, adversarial support sets produced by attacking a meta-learning based few-shot classifier can also reduce the accuracy of a fine-tuning based few-shot classifier when both models use similar feature extractors.
|
true |
Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers
spectral regularization spectral normalization
In deep neural networks, the spectral norm of the Jacobian of a layer bounds the factor by which the norm of a signal changes during forward/backward propagation. Spectral norm regularizations have been shown to improve generalization, robustness and optimization of deep learning methods. Existing methods to compute the spectral norm of convolution layers either rely on heuristics that are efficient in computation but lack guarantees or are theoretically-sound but computationally expensive. In this work, we obtain the best of both worlds by deriving {\it four} provable upper bounds on the spectral norm of a standard 2D multi-channel convolution layer. These bounds are differentiable and can be computed efficiently during training with negligible overhead. One of these bounds is in fact the popular heuristic method of Miyato et al. (multiplied by a constant factor depending on filter sizes). Each of these four bounds can achieve the tightest gap depending on convolution filters. Thus, we propose to use the minimum of these four bounds as a tight, differentiable and efficient upper bound on the spectral norm of convolution layers. Moreover, our spectral bound is an effective regularizer and can be used to bound either the Lipschitz constant or curvature values (eigenvalues of the Hessian) of neural networks. Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.
|
true |
Vectorized Conditional Neural Fields: A Framework for Solving Time-dependent PDEs
Partial Differential Equations PDEs Implicit Neural Representations INRs Continuous Models Neural Operators Conditional Neural Fields Transformers Superresolution Vision Transformers Linear Transformers
Transformer models are increasingly used for solving Partial Differential Equations (PDEs). However, they lack at least one of several desirable properties of an ideal surrogate model such as (i) generalization to PDE parameters not seen during training, (ii) spatial and temporal zero-shot super-resolution, (iii) continuous temporal extrapolation, (iv) applicability to PDEs of different dimensionalities, and (v) efficient inference for longer temporal rollouts. To address these limitations, we propose Vectorized Conditional Neural Fields (VCNeFs) which represent the solution of time-dependent PDEs as neural fields. Contrary to prior methods, VCNeFs compute, for a set of multiple spatio-temporal query points, their solutions in parallel while also modeling their dependencies through attention mechanisms. Moreover, VCNeF can condition the neural field on both the initial conditions and the parameters of the PDEs. An extensive set of experiments demonstrates that VCNeFs are competitive with and often outperform existing ML-based surrogate models.
|
true |
Emergent Covert Signaling in Adversarial Reference Games
emergent communication reference covert signaling
Emergent communication is often studied in dyadic, fully-cooperative reference games, yet many real-world scenarios involve multiparty communication in adversarial settings. We introduce an adversarial reference game, where a speaker and listener must learn to generate referring expressions without leaking information to an adversary, and study the ability of emergent communication systems to learn covert signaling protocols on this task. We show that agents can develop covert signaling when given access to additional training time or shared knowledge over the adversary. Finally, we show that adversarial training results in the emergent languages having fewer and more polysemous messages.
|
false |
Jumpy Recurrent Neural Networks
RNNs temporal abstraction planning intuitive physics
Recurrent neural networks (RNNs) can learn complex, long-range structure in time series data simply by predicting one point at a time. Because of this ability, they have enjoyed widespread adoption in commercial and academic contexts. Yet RNNs have a fundamental limitation: they represent time as a series of discrete, uniform time steps. As a result, they force a tradeoff between temporal resolution and the computational expense of predicting far into the future. To resolve this tension, we propose a Jumpy RNN model which does not predict state transitions over uniform intervals of time. Instead, it predicts a sequence of linear dynamics functions in latent space and intervals of time over which their predictions can be expected to be accurate. This structure enables our model to jump over long time intervals while retaining the ability to produce fine-grained or continuous-time predictions when necessary. In simple physics simulations, our model can skip over long spans of predictable motion and focus on key events such as collisions between two balls. On a set of physics tasks including coordinate and pixel observations of a small-scale billiards environment, our model matches the performance of a baseline RNN while using a fifth of the compute. On a real-world weather forecasting dataset, it makes more accurate predictions while using fewer sampling steps. When used for model-based planning, our method matches a baseline RNN while using half the compute.
|
true |
On Scalable and Efficient Computation of Large Scale Optimal Transport
Scalable optimal transport generative model neural ODE
Optimal Transport (OT) naturally arises in many machine learning applications, where we need to handle cross-modality data from multiple sources. Yet the heavy computational burden limits its wide-spread uses. To address the scalability issue, we propose an implicit generative learning-based framework called SPOT (Scalable Push-forward of Optimal Transport). Specifically, we approximate the optimal transport plan by a pushforward of a reference distribution, and cast the optimal transport problem into a minimax problem. We then can solve OT problems efficiently using primal dual stochastic gradient-type algorithms. We also show that we can recover the density of the optimal transport plan using neural ordinary differential equations. Numerical experiments on both synthetic and real datasets illustrate that SPOT is robust and has favorable convergence behavior. SPOT also allows us to efficiently sample from the optimal transport plan, which benefits downstream applications such as domain adaptation.
|
true |
Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
deep learning theory domain adaptation theory unsupervised learning theory semi-supervised learning theory
Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks. However, the current theoretical understanding of self-training only applies to linear models. This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. At the core of our analysis is a simple but realistic “expansion” assumption, which states that a low-probability subset of the data must expand to a neighborhood with large probability relative to the subset. We also assume that neighborhoods of examples in different classes have minimal overlap. We prove that under these assumptions, the minimizers of population objectives based on self-training and input-consistency regularization will achieve high accuracy with respect to ground-truth labels. By using off-the-shelf generalization bounds, we immediately convert this result to sample complexity guarantees for neural nets that are polynomial in the margin and Lipschitzness. Our results help explain the empirical successes of recently proposed self-training algorithms which use input consistency regularization.
|
false |
Stability analysis of SGD through the normalized loss function
stability neural networks generalization bounds normalized loss
We prove new generalization bounds for stochastic gradient descent for both the convex and non-convex case. Our analysis is based on the stability framework. We analyze stability with respect to the normalized version of the loss function used for training. This leads to investigating a form of angle-wise stability instead of Euclidean stability in weights. For neural networks, the measure of distance we consider is invariant to rescaling the weights of each layer. Furthermore, we exploit the notion of on-average stability in order to obtain a data-dependent quantity in the bound. This data-dependent quantity is seen to be more favorable when training with larger learning rates in our numerical experiments. This might help to shed some light on why larger learning rates can lead to better generalization in some practical scenarios.
|
true |
JAX-SPH: A Differentiable Smoothed Particle Hydrodynamics Framework
Smoothed Particle Hydrodynamics JAX Lagrangian Simulations Differentiable Solver
Particle-based fluid simulations have emerged as a powerful tool for solving the Navier-Stokes equations, especially in cases that include intricate physics and free surfaces. The recent addition of machine learning methods to the toolbox for solving such problems is pushing the boundary of the quality vs. speed tradeoff of such numerical simulations. In this work, we lead the way to Lagrangian fluid simulators compatible with deep learning frameworks, and propose JAX-SPH - a Smoothed Particle Hydrodynamics (SPH) framework implemented in JAX. JAX-SPH builds on the code for dataset generation from the LagrangeBench project (Toshev et al., 2023) and extends this code in multiple ways: (a) integration of further key SPH algorithms, (b) restructuring the code toward a Python package, (c) verification of the gradients through the solver, and (d) demonstration of the utility of the gradients for solving inverse problems as well as a Solver-in-the-Loop application. Our code is available at https://github.com/tumaer/jax-sph.
|
false |
Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers
Bayesian Deep Learning Uncertainty NMT Transformer
We detect out-of-training-distribution sentences in Neural Machine Translation using the Bayesian Deep Learning equivalent of Transformer models. For this we develop a new measure of uncertainty designed specifically for long sequences of discrete random variables—i.e. words in the output sentence. Our new measure of uncertainty solves a major intractability in the naive application of existing approaches on long sentences. We use our new measure on a Transformer model trained with dropout approximate inference. On the task of German-English translation using WMT13 and Europarl, we show that with dropout uncertainty our measure is able to identify when Dutch source sentences, sentences which use the same word types as German, are given to the model instead of German.
|
true |
Adaptive Procedural Task Generation for Hard-Exploration Problems
reinforcement learning curriculum learning procedural generation task generation
We introduce Adaptive Procedural Task Generation (APT-Gen), an approach to progressively generate a sequence of tasks as curricula to facilitate reinforcement learning in hard-exploration problems. At the heart of our approach, a task generator learns to create tasks from a parameterized task space via a black-box procedural generation module. To enable curriculum learning in the absence of a direct indicator of learning progress, we propose to train the task generator by balancing the agent's performance in the generated tasks and the similarity to the target tasks. Through adversarial training, the task similarity is adaptively estimated by a task discriminator defined on the agent's experiences, allowing the generated tasks to approximate target tasks of unknown parameterization or outside of the predefined task space. Our experiments on the grid world and robotic manipulation task domains show that APT-Gen achieves substantially better performance than various existing baselines by generating suitable tasks of rich variations.
|
true |
Multi-Lattice Sampling of Quantum Field Theories via Neural Operator-based Flows
Lattice field theory; neural operators; continuous normalizing flow;
We consider the problem of sampling discrete field configurations $\phi$ from the Boltzmann distribution $[d\phi] Z_1^{-1} e^{-S_1[\phi]}$, where $S_1$ is the lattice-discretization of the continuous Euclidean action $\mathcal S_1$ of some quantum field theory. Since such densities arise as the approximation of the underlying functional density $[\mathcal D\phi(x)] \mathcal Z_1^{-1} e^{-\mathcal S_1[\phi(x)]}$, we frame the task as an instance of operator learning. In particular, we propose to approximate a time-dependent operator $\mathcal V_t$ whose time integral provides a mapping between the functional distributions of the free theory $[\mathcal D\phi(x)] \mathcal Z_0^{-1} e^{-\mathcal S_{0}[\phi(x)]}$ and of the target theory $[\mathcal D\phi(x)]\mathcal Z_1^{-1}e^{-\mathcal S_1[\phi(x)]}$. Once a particular lattice is chosen, the operator $\mathcal V_t$ can be discretized to a finite dimensional, time-dependent vector field $V_t$ which in turn induces a continuous normalizing flow between finite dimensional distributions over the chosen lattice. This flow can then be trained to be a diffeomorphism between the discretized free and target theories $[d\phi] Z_0^{-1} e^{-S_{0}[\phi]}$, $[d\phi] Z_1^{-1}e^{-S_1[\phi]}$. We run experiments on the 2-dimensional $\phi^4$-theory to explore to what extent such operator-based flow architectures generalize to lattice sizes they were not trained on and show that pretraining on smaller lattices can lead to speedup over training directly on the target lattice size.
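A minimal sketch of the continuous normalizing flow induced once $\mathcal V_t$ is discretized to a vector field $V_t$ on a fixed lattice. These are the standard CNF identities, not the paper's specific architecture: integrating the ODE transports samples of the free theory toward the target theory, and the log-density follows the instantaneous change-of-variables formula.

```latex
\frac{d\phi_t}{dt} = V_t(\phi_t), \qquad
\frac{d}{dt}\log p_t(\phi_t) = -\,\nabla_{\phi}\!\cdot V_t(\phi_t), \qquad
p_0(\phi) = Z_0^{-1} e^{-S_0[\phi]} \;\longmapsto\; p_1(\phi) \approx Z_1^{-1} e^{-S_1[\phi]}.
```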
|
true |
Learnability for the Information Bottleneck
representation learning learnability information bottleneck
Compressed representations generalize better (Shamir et al., 2010), which may be crucial when learning from limited or noisy labeled data. The Information Bottleneck (IB) method (Tishby et al., 2000) provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X;Z) − βI(Y;Z) employs a Lagrange multiplier β to tune this trade-off. However, there is little theoretical guidance for how to select β. There is also a lack of theoretical understanding about the relationship between β, the dataset, model capacity, and learnability. In this work, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided, by identifying a sharp phase transition between the unlearnable and the learnable which arises as β varies. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, providing theoretical guidance for selecting β. We further show that IB-learnability is determined by the largest confident, typical, and imbalanced subset of the training examples. We give a practical algorithm to estimate the minimum β for a given dataset. We test our theoretical results on synthetic datasets, MNIST, and CIFAR10 with noisy labels, and make the surprising observation that accuracy may be non-monotonic in β.
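A compact restatement of the objective and of one way to phrase the learnability condition described above (notation assumed; this is not the paper's full phase-transition analysis): the trivial encoder attains objective value zero, so learnability amounts to some encoder doing strictly better.

```latex
\mathrm{IB}_\beta\big[p(z\mid x)\big] \;=\; I(X;Z) \;-\; \beta\, I(Y;Z),
\qquad
p(z\mid x) = p(z) \;\Rightarrow\; I(X;Z) = I(Y;Z) = 0,
\qquad
\text{IB-learnable} \;\iff\; \inf_{p(z\mid x)} \mathrm{IB}_\beta\big[p(z\mid x)\big] < 0 .
```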
|
false |
Training Invertible Linear Layers through Rank-One Perturbations
Parameter Perturbation Reparameterization Invertible Neural Networks Normalizing Flows Rank-one update
Many types of neural network layers rely on matrix properties such as invertibility or orthogonality.
Retaining such properties during optimization with gradient-based stochastic optimizers is a challenging task, which is usually addressed by either reparameterization of the affected parameters or by directly optimizing on the manifold.
This work presents a novel approach for training invertible linear layers. In lieu of directly optimizing the network parameters, we train rank-one perturbations and add them to the actual weight matrices infrequently. This P$^{4}$Inv update allows keeping track of inverses and determinants without ever explicitly computing them. We show how such invertible blocks improve the mixing and thus the mode separation of the resulting normalizing flows. Furthermore, we outline how the P$^4$ concept can be utilized to retain properties other than invertibility.
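A minimal numpy sketch of the linear-algebra identities that make rank-one updates of an invertible weight matrix cheap to track: Sherman-Morrison for the inverse and the matrix determinant lemma for the determinant. This is generic math implied by the abstract above, not the P$^4$Inv training procedure itself.

```python
import numpy as np

def rank_one_update(W, W_inv, log_det, u, v):
    """Return W', W'^{-1}, log|det W'| for W' = W + u v^T without re-factorizing W'."""
    Winv_u = W_inv @ u
    vT_Winv = v @ W_inv
    gamma = 1.0 + v @ Winv_u                                  # must stay nonzero for invertibility
    W_new = W + np.outer(u, v)
    Winv_new = W_inv - np.outer(Winv_u, vT_Winv) / gamma      # Sherman-Morrison formula
    log_det_new = log_det + np.log(np.abs(gamma))             # matrix determinant lemma
    return W_new, Winv_new, log_det_new

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_inv, log_det = np.linalg.inv(W), np.linalg.slogdet(W)[1]
u, v = rng.normal(size=4), rng.normal(size=4)
W2, W2_inv, log_det2 = rank_one_update(W, W_inv, log_det, u, v)
assert np.allclose(W2 @ W2_inv, np.eye(4))                    # inverse tracked correctly
assert np.isclose(log_det2, np.linalg.slogdet(W2)[1])         # determinant tracked correctly
```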
|
true |
A Novel ML Model for Numerical Simulations Leveraging Fourier Neural Operators
Numerical Simulations Fourier Neural Operators Deep Learning
Numerical simulations for reservoir management for energy recovery optimization involve solving partial differential equations across numerical grids, providing detailed insights into fluid flow, heat transfer, and other critical reservoir behaviors. However, their computational demands often hinder practical implementation due to lengthy runtimes and resource-intensive processes.
In this paper, we propose a deep learning methodology to address these challenges. Our approach leverages a neural operator that directly parameterizes the integral kernel in Fourier space. By doing so, we facilitate swift and efficient predictions, effectively reducing the computational burden associated with multiple numerical simulations.
The robustness of the proposed approach has been evaluated for simulations of the steam injection process in high-viscosity oil reservoirs, an advanced thermal recovery method used to extract heavy oil from underground reservoirs.
A key feature of the proposed methodology is that it slashes computational time from hours to seconds, making it feasible for real-time reservoir management decisions. We use input data and corresponding output fields from five numerical models for the steam injection process as training data. This allows the ML model to learn from the complete evolution of the process across diverse simulations.
During inference, the ML model relies on the first 10 time steps of numerical simulation results. It then predicts the subsequent 40 time steps in an autoregressive manner, capturing temporal dependencies effectively.
The ML model accurately forecasts simulation outcomes at the numerical grid level, with error rates consistently below 10 percent.
Beyond reservoir simulation, our approach holds promise for other fields, including computational fluid dynamics (CFD), structural engineering, and weather forecasting.
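A minimal sketch of the autoregressive inference loop described above: condition on the first 10 simulated time steps and roll the surrogate forward for 40 more, feeding predictions back in. Here `model` is a stand-in for the trained Fourier-neural-operator surrogate and the sliding-window interface is an assumption, not the authors' exact setup.

```python
import torch

def rollout(model, first_steps, n_future=40):
    """first_steps: tensor of shape (context, H, W) holding the initial simulated frames."""
    history = list(first_steps)                       # known numerical-simulation frames
    context = first_steps.shape[0]
    for _ in range(n_future):
        window = torch.stack(history[-context:])      # most recent frames as input
        next_frame = model(window.unsqueeze(0))[0]    # predict the next field on the grid
        history.append(next_frame)                    # autoregressive feedback
    return torch.stack(history[context:])             # the predicted time steps
```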
|
false |
From Molecular Dynamics to MeshGraphNets
GNN Graph Network Mesh-based simulations
In this blog, we discuss the MeshGraphNets paper and its predecessor paper through the lens of the graph-learning paradigm. We claim that molecular dynamics and smoothed particle hydrodynamics are the ancestors of all graph-based, learned particle simulators and show how graph-based approaches naturally extend to meshes. Then, we compare MeshGraphNets to other approaches, both graph-based and not. Last but not least, we conclude by presenting the strengths and weaknesses of the model, directions for future work, and a code snippet of the core algorithm written in JAX.
|
false |
Towards Neural No-Resource Language Translation: A Comparative Evaluation of Approaches
Computational Linguistics Machine Translation Fine Tuning Large Language Models (LLMs)
No-resource languages—those with minimal or no digital representation—pose unique challenges for machine translation (MT). Unlike low-resource languages, which rely on limited but existent corpora, no-resource languages often have fewer than 100 sentences available for training. This work explores the problem of no-resource translation through three distinct workflows: fine-tuning of translation-specific models, in-context learning with large language models (LLMs) using chain-of-reasoning prompting, and direct prompting without reasoning. Using Owens Valley Paiute as a case study, we demonstrate that no-resource translation demands fundamentally different approaches from low-resource scenarios, as traditional approaches to machine translation, such as those that work for low-resource languages, fail. Empirical results reveal that, although traditional approaches fail, the in-context learning capabilities of general-purpose large language models enable no-resource language translation that outperforms low-resource translation approaches and rivals human translations (BLEU 0.45-0.6); specifically, chain-of-reasoning prompting outperforms other methods for larger corpora, while direct prompting exhibits advantages in smaller datasets. As these approaches are language-agnostic, they have potential to be generalized to translation tasks from a wide variety of no-resource languages without expert input. These findings establish no-resource translation as a distinct paradigm requiring innovative solutions, providing practical and theoretical insights for language preservation.
|
true |
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning
Encoder layer fusion Transformer Sequence-to-sequence learning Machine translation Summarization Grammatical error correction
Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder layers (instead of the uppermost layer) for sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks. However, it is still not entirely clear why and when EncoderFusion should work. In this paper, our main contribution is to take a step further in understanding EncoderFusion. Many previous studies believe that the success of EncoderFusion comes from exploiting surface and syntactic information embedded in lower encoder layers. Unlike them, we find that the encoder embedding layer is more important than other intermediate encoder layers. In addition, the uppermost decoder layer consistently pays more attention to the encoder embedding layer across NLP tasks. Based on this observation, we propose a simple fusion method, SurfaceFusion, by fusing only the encoder embedding layer for the softmax layer. Experimental results show that SurfaceFusion outperforms EncoderFusion on several NLP benchmarks, including machine translation, text summarization, and grammatical error correction. It obtains state-of-the-art performance on WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive analyses reveal that SurfaceFusion learns more expressive bilingual word embeddings by building a closer relationship between relevant source and target embeddings. Source code is freely available at https://github.com/SunbowLiu/SurfaceFusion.
|
true |
On-Premises LLM Deployment Demands a Middle Path: Preserving Privacy Without Sacrificing Model Confidentiality
LLM on-premises deployment deployment security
Current LLM customization typically relies on two deployment strategies: closed-source APIs, which require users to upload private data to external servers, and open-weight models, which allow local fine-tuning but pose misuse risks. In this paper, we argue that (1) deploying closed-source LLMs within user-controlled infrastructure (*on-premises deployment*) enhances data privacy and mitigates misuse risks, and (2) a well-designed on-premises deployment must ensure model confidentiality---by preventing model theft---and offer privacy-preserving customization.
Prior research on small models has explored securing only the output layer within hardware-secured devices to balance confidentiality and customization efficiency. However, we show that this approach is insufficient for defending large-scale LLMs against distillation attacks. We therefore introduce a semi-open deployment framework that secures only a few, carefully chosen layers, achieving distillation resistance comparable to fully secured models while preserving fine-tuning flexibility. Through extensive experiments, we show that securing bottom layers significantly reduces functional extraction risks. Our findings demonstrate that privacy and confidentiality can coexist, paving the way for secure on-premises AI deployment that balances usability and protection.
|
false |
Statestream: A toolbox to explore layerwise-parallel deep neural networks
model-parallel parallelization software platform
Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture. The central question of this work is how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We lay out basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.
|
false |
Decentralized SGD with Asynchronous, Local and Quantized Updates
distributed machine learning SGD decentralized algorithms quantization
The ability to scale distributed optimization to large node counts has been one of the main enablers of recent progress in machine learning. To this end, several techniques have been explored, such as asynchronous, quantized and decentralized communication--which significantly reduce the impact of communication and synchronization, as well as the ability for nodes to perform several local model updates before communicating--which reduces the frequency of communication.
In this paper, we show that these techniques, which have so far largely been considered independently, can be jointly leveraged to minimize distribution cost for training neural network models via stochastic gradient descent (SGD).
We consider a setting with minimal coordination: we have a large number of nodes on a communication graph, each with a local subset of data, performing independent SGD updates onto their local models. After some number of local updates, each node chooses an interaction partner uniformly at random from its neighbors, and averages a (possibly quantized) version of its local model with the neighbor's model.
Our first contribution is in proving that, even under such a relaxed setting, SGD can still be guaranteed to converge under standard assumptions. The proof is based on a new connection with parallel load-balancing processes, and improves existing techniques by handling decentralization, asynchrony, quantization, and local updates within a single framework, and by bounding their impact.
On the practical side, we implement variants of our algorithm and deploy them onto distributed environments, and show that they can successfully converge and scale for large-scale neural network training tasks, matching or even slightly improving the accuracy of previous methods.
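A minimal single-process sketch of one node's loop in the setting described above: a few local SGD steps, then averaging a quantized copy of the model with a uniformly chosen neighbor. The quantizer, graph representation, and shared `models` list are illustrative simulation assumptions; asynchrony is not modeled here.

```python
import random
import torch

def quantize(t, levels=256):
    scale = t.abs().max() / (levels // 2) + 1e-12
    return torch.round(t / scale) * scale                 # simple uniform quantizer

def node_step(model, optimizer, local_batches, neighbors, models):
    for x, y in local_batches:                            # independent local SGD updates
        optimizer.zero_grad()
        torch.nn.functional.cross_entropy(model(x), y).backward()
        optimizer.step()
    partner = random.choice(neighbors)                    # uniform random interaction partner
    with torch.no_grad():
        for p, q in zip(model.parameters(), models[partner].parameters()):
            avg = 0.5 * (quantize(p.data) + quantize(q.data))
            p.data.copy_(avg)                             # pairwise (gossip) model averaging
```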
|
true |
Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
poisoning attack policy gradient vulnerability of RL deep RL
Poisoning attacks on Reinforcement Learning (RL) systems could take advantage of RL algorithms’ vulnerabilities and cause the learning to fail. However, prior works on poisoning RL usually either unrealistically assume the attacker knows the underlying Markov Decision Process (MDP), or directly apply the poisoning methods in supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous poisoning models in RL. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for on-policy deep RL agents, filling the gap left by the absence of poisoning methods for policy-based RL agents. VA2C-P uses a novel metric, stability radius in RL, that measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy or teaches the agents to converge to a target policy, with a limited attacking budget.
|
false |
The Risks and Rewards of Invariant Risk Minimization
Machine Learning Out-Of-Distribution Generalization Causality
Spurious correlations are one of the most prominent pain points for building and deploying machine learning models. Invariant Risk Minimization (IRM) is a learning algorithm designed to mitigate the effect of spurious features and perform well despite shifts in the test distribution. In this blog post, we discuss the motivation and details of IRM as well as its criticisms and shortcomings.
|
true |
AutoBasisEncoder: Pre-trained Neural Field Basis via Autoencoding for Operator Learning
operator learning auto-encoding neural fields pre-training basis
We introduce AutoBasisEncoder, a novel framework designed for operator learning – the task of learning to map from one function to another. This approach autonomously discovers a basis of functions optimized for the target function space and utilizes this pre-trained basis for efficient operator learning. By introducing an intermediary auto-encoding task to the popular DeepONet framework, AutoBasisEncoder disentangles the learning of the basis functions and of the coefficients, simplifying the operator learning process. Initially, the framework learns basis functions through auto-encoding, followed by leveraging this basis to predict the coefficients of the target function. Preliminary experiments indicate that AutoBasisEncoder’s basis functions exhibit superior suitability for operator learning and function reconstruction compared to DeepONet. These findings underscore the potential of AutoBasisEncoder to enhance the landscape of operator learning frameworks.
|
true |
Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs
symmetry equivariance mesh geometric convolution
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
|
true |
Making Sense of Reinforcement Learning and Probabilistic Inference
Reinforcement learning Bayesian inference Exploration
Reinforcement learning (RL) combines a control problem with statistical estimation: The system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts ‘RL as inference’ and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: The exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable so that practical RL algorithms must resort to approximation. We demonstrate that the popular ‘RL as inference’ approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling.
|
false |
WAVEQ: GRADIENT-BASED DEEP QUANTIZATION OF NEURAL NETWORKS THROUGH SINUSOIDAL REGULARIZATION
waveq layers sinusoidal regularizer neural networks bitwidth period deep quantization quantized weights minima quantization levels
Deep quantization of neural networks below eight bits can lead to superlinear benefits in storage and compute efficiency. However, homogeneously quantizing all the layers to the same level does not account for the distinction of the layers and their individual properties. Heterogeneous assignment of bitwidths to individual layers is attractive but opens an exponentially large non-contiguous hyperparameter space ($\text{Available Bitwidths}^{\#\text{Layers}}$). As such, finding the bitwidth while also quantizing the network to those levels becomes a major challenge. This paper addresses this challenge through a sinusoidal regularization mechanism, dubbed WaveQ. Adding our parametrized sinusoidal regularizer enables us to not only find the quantized weights but also learn the bitwidth of the layers by making the period of the sinusoidal regularizer a trainable parameter. In addition, the sinusoidal regularizer itself is designed to align its minima on the quantization levels. With these two innovations, during training, stochastic gradient descent uses the form of the sinusoidal regularizer and its minima to push the weights to the quantization levels while it is also learning the period which will determine the bitwidth of each layer separately. As such, WaveQ is a gradient-based mechanism that jointly learns the quantized weights as well as the heterogeneous bitwidths. We show how WaveQ balances compute efficiency and accuracy, and provide a heterogeneous bitwidth assignment for quantization of a large variety of deep networks (AlexNet, CIFAR-10, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy. WaveQ is versatile and can also be used with predetermined bitwidths by fixing the period of the sinusoidal regularizer. In this case, WaveQ enhances quantized training algorithms (DoReFa and WRPN) with about 4.8% accuracy improvements on average, and outperforms multiple state-of-the-art techniques. Finally, WaveQ applied to quantizing transformers
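A minimal sketch of a sinusoidal quantization regularizer in the spirit of the abstract above: its minima sit exactly on the quantization grid, and the period (hence the effective bitwidth) is a trainable parameter. The exact parametrization and strength schedule used by WaveQ are not reproduced here.

```python
import torch

class SinusoidalQuantReg(torch.nn.Module):
    def __init__(self, init_period=2.0 ** -3, strength=1e-4):
        super().__init__()
        # learn the period in log-space so it stays positive
        self.log_period = torch.nn.Parameter(torch.log(torch.tensor(init_period)))
        self.strength = strength

    def forward(self, weights):
        period = torch.exp(self.log_period)               # quantization step (learned)
        # sin^2 vanishes exactly when a weight is an integer multiple of the period
        return self.strength * torch.sum(torch.sin(torch.pi * weights / period) ** 2)

reg = SinusoidalQuantReg()
penalty = reg(torch.randn(100))                           # added to the task loss during training
```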
|
false |
Implicit bias of gradient descent for mean squared error regression with wide neural networks
Implicit bias overparametrized neural network cubic spline interpolation spatially adaptive smoothing spline effective capacity
We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-$n$ shallow ReLU network is within $n^{- 1/2}$ of the function which fits the training data and whose difference from initialization has smallest 2-norm of the second derivative weighted by $1/\zeta$. The curvature penalty function $1/\zeta$ is expressed in terms of the probability distribution that is utilized to initialize the network parameters, and we compute it explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and thence the solution function is the natural cubic spline interpolation of the training data. While similar results have been obtained in previous works, our analysis clarifies important details and allows us to obtain significant generalizations. In particular, the result generalizes to multivariate regression and different activation functions. Moreover, we show that the training trajectories are captured by trajectories of spatially adaptive smoothing splines with decreasing regularization strength.
|
false |
Improving Search Through A3C Reinforcement Learning Based Conversational Agent
Subjective search Reinforcement Learning Conversational Agent Virtual user model A3C Context aggregation
We develop a reinforcement learning based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent. Our approach caters to subjective search where the user is seeking digital assets such as images, which is fundamentally different from tasks that have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, which accelerates the bootstrapping of the agent. We develop an A3C-based context-preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states.
|
true |
Evolving RL: Discovering New Activation Functions using LLMs
Reinforcement Learning Evolutionary Search Large Language Models LLM Hypothesis Generation
Deep Reinforcement Learning (DRL) has traditionally inherited activation functions from supervised learning, despite fundamental differences in learning dynamics and objectives. We present EvolveAct, a novel framework that leverages large language models and evolutionary search to automatically discover optimal activation functions for specific RL tasks. Our method combines genetic programming with code Large Language Models (LLMs) to explore a rich space of mathematical functions, optimizing for stability and performance in DRL training. Experimental results across multiple environments show that the discovered activation functions consistently outperform standard choices such as ReLU and TanH, improving final performance by 37.25% on the MinAtar suite and by 28.3% on the Brax suite on average. By jointly optimizing over multiple diverse environments, we discover activation functions that demonstrate strong generalization capabilities across different RL domains. This research provides a foundation for automating fundamental architectural choices in deep reinforcement learning systems.
|
false |
Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
bounds training set fast rates neural networks framework conditional information density literature nonvacuous loss
We present a framework to derive bounds on the test loss of randomized learning algorithms for the case of bounded loss functions. This framework leads to bounds that depend on the conditional information density between the output hypothesis and the choice of the training set, given a larger set of data samples from which the training set is formed. Furthermore, the bounds pertain to the average test loss as well as to its tail probability, both for the PAC-Bayesian and the single-draw settings. If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$, which is referred to as a fast rate. This is in contrast with the tail bounds involving conditional information measures available in the literature, which have a less benign $1/\sqrt{n}$ dependence. We demonstrate the usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature.
|
true |
PointSAGE: Mesh-independent superresolution approach to fluid flow predictions
Computational Fluid Dynamics Superresolution Point cloud PointSAGE
Computational Fluid Dynamics (CFD) serves as a powerful tool for simulating fluid flow across diverse industries. High-resolution CFD simulations offer valuable insights into fluid behavior and flow patterns. As resolution increases, computational data requirements and time rise proportionately, posing a persistent challenge in CFD. Recent efforts focus on accurately predicting fine-mesh simulations from coarse-mesh data, employing deep learning techniques such as UNets to address this challenge. Existing methods face limitations with unstructured meshes, due to their inability to convolve over them. Incorporating geometry/mesh information during training brings drawbacks like increased data requirements, and challenges in generalization to unseen geometries. To address these concerns, we propose a novel framework, PointSAGE, a mesh-independent network that leverages the unordered, mesh-less nature of point clouds to learn the complex fluid flow and directly predict fine simulations, completely neglecting mesh information. With an adaptable framework, PointSAGE accurately predicts fine data across diverse point cloud sizes, regardless of the training dataset's dimension. Evaluations on various datasets and scenarios demonstrate notable results, showcasing a significant acceleration in computational time for generating fine simulations compared to standard CFD.
|
true |
ECONOMIC HYPERPARAMETER OPTIMIZATION WITH BLENDED SEARCH STRATEGY
HYPERPARAMETER OPTIMIZATION COST
We study the problem of searching, at low cost, for good hyperparameter configurations in a large search space in which evaluation cost and model quality vary across configurations.
We propose a blended search strategy to combine the strengths of global and local search, and prioritize them on the fly with the goal of minimizing the total cost spent in finding good configurations. Our approach demonstrates robust performance for tuning both tree-based models and deep neural networks on a large AutoML benchmark, as well as superior performance in model quality, time, and resource consumption for a production transformer-based NLP model fine-tuning task.
|
true |
Galerkin meets Laplace: Fast uncertainty estimation in neural PDEs
Deep Neural PDE Deep Galerkin PDE PINNS Laplace Approximation Uncertainty Bayesian
The solution of partial differential equations (PDEs) by deep neural networks trained to satisfy the differential operator has become increasingly popular. While these approaches can lead to very accurate approximations, they tend to be overconfident and fail to capture the uncertainty around the approximation. In this work, we propose a Bayesian treatment to the deep Galerkin method (Sirignano & Spiliopoulos, 2018), a popular neural approach for solving parametric PDEs. In particular, we reinterpret the deep Galerkin method as the maximum a posteriori estimator corresponding to a likelihood term over a fictitious dataset, leading thus to a natural definition of a posterior. Then, we propose to model such posterior via the Laplace approximation, a fast approximation that allows us to capture meaningful uncertainty in out of domain interpolation of the PDE solution and in low data regimes with little overhead, as shown in our preliminary experiments.
|
true |
XDDPM: EXPLAINABLE DENOISING DIFFUSION PROBABILISTIC MODEL FOR SCIENTIFIC MODELING
explainable generative models Information Bottleneck scientific modeling
In recent years, diffusion models have emerged as powerful tools for generatively predicting high-dimensional observations across various scientific and engineering domains, including fluid dynamics, weather forecasting, and physics. Typically, researchers not only want the models to have faithful generation, but also want to explain these high-dimensional generations with accompanying signals such as measurements of force, currents, or pressure. However, such explainable generation capability is still lacking in existing diffusion models. Here we introduce Explainable Denoising Diffusion Probabilistic Model (xDDPM), a simple variant to the standard DDPM that enables the generation of samples in an explainable manner, focusing solely on generating components that are pertinent to the given signal. The key feature of xDDPM is that it trains the denoising network to exclusively denoise these relevant parts while leaving non-relevant portions noisy. It achieves this by incorporating an Information Bottleneck loss in its learning objective, which facilitates the discovery of relevant components within the samples. Our experimental results, conducted on two cell dynamics datasets and one fluid dynamics dataset, consistently demonstrate xDDPM's capability for explainable generation. For instance, when provided with force measurements on a jellyfish-like robot, xDDPM accurately generates the relevant pressure fields surrounding the robot while effectively disregarding distant fields.
|
true |
The conjugate kernel for efficient training of physics-informed deep operator networks
physics-informed machine learning operator learning neural tangent kernel
Recent work has shown that the empirical Neural Tangent Kernel (NTK) can significantly improve the training of physics-informed Deep Operator Networks (DeepONets). The NTK, however, is costly to calculate, greatly increasing the cost of training such systems. In this paper, we study the performance of the empirical Conjugate Kernel (CK) for physics-informed DeepONets, an efficient approximation to the NTK that has been observed to yield similar results. For physics-informed DeepONets, we show that the CK performance is comparable to the NTK, while significantly reducing the time complexity for training DeepONets with the NTK.
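One common way to write the two empirical kernels being compared in the abstract above (notation assumed): the NTK uses gradients with respect to all parameters, while the conjugate kernel (CK) uses only the last hidden layer's features $\phi_\theta(x)$, which is what makes it far cheaper to form.

```latex
\hat\Theta_{\mathrm{NTK}}(x, x') = \nabla_\theta f_\theta(x)^{\top} \nabla_\theta f_\theta(x'),
\qquad
\hat\Theta_{\mathrm{CK}}(x, x') = \phi_\theta(x)^{\top} \phi_\theta(x').
```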
|
true |
Continuous Relaxation For The Multivariate Noncentral Hypergeometric Distribution
hypergeometric reparameterization continuous relaxation
Partitioning a set of elements into a given number of groups of a priori unknown sizes is an essential task in many applications. Due to hard constraints, it is a non-differentiable problem, which prohibits its direct use in modern machine learning frameworks. Hence, previous works mostly fall back on suboptimal heuristics or simplified assumptions. The multivariate hypergeometric distribution offers a probabilistic formulation of sampling a given number of elements from multiple groups. Unfortunately, as a discrete probability distribution, it is not differentiable either. We propose a continuous relaxation for the multivariate noncentral hypergeometric distribution. We introduce an efficient and numerically stable sampling procedure that enables reparameterized gradients for the hypergeometric distribution and its integration into automatic differentiation frameworks. We additionally highlight its advantages on a weakly-supervised learning task.
|
true |
Heteroscedastic uncertainty quantification in Physics-Informed Neural Networks
Physics-informed Neural Network Uncertainty Quantification Partial Differential Equation Scientific Machine Learning
Physics-informed neural networks (PINNs) provide a machine learning framework to solve differential equations. However, PINNs do not inherently consider measurement noise or model uncertainty. In this paper, we propose the UQ-PINN, an extension of the PINN with additional outputs that approximate the additive noise. The multi-output architecture enables approximating the mean and standard deviation of the data using a negative Gaussian log-likelihood loss. The performance of the UQ-PINN is demonstrated on the Poisson equation with additive noise.
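A minimal sketch of the heteroscedastic (negative Gaussian log-likelihood) data loss for a two-output network predicting a mean and a log-variance, as described above. The PDE-residual term of the PINN loss is omitted, and the network, data, and names are illustrative assumptions.

```python
import torch

def gaussian_nll(mu, log_var, y):
    # 0.5 * [ log sigma^2 + (y - mu)^2 / sigma^2 ], up to an additive constant
    return 0.5 * (log_var + (y - mu) ** 2 * torch.exp(-log_var)).mean()

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
x = torch.rand(128, 1)
y = torch.sin(torch.pi * x) + 0.05 * torch.randn_like(x)     # noisy observations
mu, log_var = net(x).chunk(2, dim=1)                         # two heads: mean and log-variance
loss = gaussian_nll(mu, log_var, y)                          # data term of the UQ objective
```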
|
false |
Noisy Agents: Self-supervised Exploration by Predicting Auditory Events
Audio Curiosity RL exploration
Humans integrate multiple sensory modalities (e.g., visual and audio) to build a causal understanding of the physical world. In this work, we propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions through auditory event prediction. First, we allow the agent to collect a small amount of acoustic data and use K-means to discover underlying auditory event clusters. We then train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration. We first conduct an in-depth analysis of our module using a set of Atari games. We then apply our model to audio-visual exploration using the Habitat simulator and active learning using the TDW simulator. Experimental results demonstrate the advantages of using audio signals over vision-based models as intrinsic rewards to guide RL explorations.
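A minimal sketch of the intrinsic-reward recipe described above: cluster a small amount of collected audio into "auditory events" with K-means, fit a predictor of the event from the agent's observation/action features, and use its prediction error as the exploration bonus. The random stand-in features, the linear predictor, and all sizes are illustrative assumptions; the paper uses a neural network.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(2000, 32))            # stand-in for collected audio features
obs_feats = rng.normal(size=(2000, 64))              # stand-in for (observation, action) features

events = KMeans(n_clusters=8, n_init=10).fit_predict(audio_feats)    # auditory event labels
predictor = LogisticRegression(max_iter=200).fit(obs_feats, events)  # auditory event predictor

def intrinsic_reward(obs_feat, event_label):
    # prediction error (negative log-likelihood of the observed event) as exploration bonus
    p = predictor.predict_proba(obs_feat[None])[0][event_label]
    return -np.log(p + 1e-8)
```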
|
true |
Reinforcement Learning with Random Delays
Reinforcement Learning Deep Reinforcement Learning
Action and observation delays commonly occur in many Reinforcement Learning applications, such as remote control scenarios. We study the anatomy of randomly delayed environments, and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this principle to derive Delay-Correcting Actor-Critic (DCAC), an algorithm based on Soft Actor-Critic with significantly better performance in environments with delays. This is shown theoretically and also demonstrated practically on a delay-augmented version of the MuJoCo continuous control benchmark.
|
false |
MODALS: Data augmentation that works for everyone
deep learning data augmentation latent space data modalities automated data augmentation
The usefulness of data augmentation has led to the development of specific techniques of augmentation unique to each modality of data. The techniques developed for one modality usually suit the type of data in that particular modality. For image data, some commonly used augmentation techniques are rotation, cropping, applying affine transforms, random flips, contrast and color augmentations and adding Gaussian blur. More recent techniques like CutMix[3] and MixUp[4] apply data augmentation on both the image and label space. Many robust data augmentation techniques for image data already exist and are used widely. However, modalities like tabular and graph data do not have as many robust augmentation techniques. Since the techniques developed for images were made taking into consideration the nature of image data, they usually cannot be applied to other modalities. If a generalized, modality-agnostic framework for augmentation could be developed, then standard, robust augmentation techniques could be applied across many modalities. This is exactly what the authors of the paper propose.
|
false |
XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-Domain Mixup
transfer learning deep learning
Transferring knowledge from large source datasets is an effective way to fine-tune the deep neural networks of the target task with a small sample size. A great number of algorithms have been proposed to facilitate deep transfer learning, and these techniques could be generally categorized into two groups – Regularized Learning of the target task using models that have been pre-trained from source datasets, and Multitask Learning with both source and target datasets to train a shared backbone neural network. In this work, we aim to improve the multitask paradigm for deep transfer learning via Cross-domain Mixup (XMixup). While the existing multitask learning algorithms need to run backpropagation over both the source and target datasets and usually consume a higher gradient complexity, XMixup transfers the knowledge from source to target tasks more efficiently: for every class of the target task, XMixup selects the auxiliary samples from the source dataset and augments training samples via the simple mixup strategy. We evaluate XMixup over six real world transfer learning datasets. Experiment results show that XMixup improves the accuracy by 1.9% on average. Compared with other state-of-the-art transfer learning approaches, XMixup costs much less training time while still obtaining higher accuracy.
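A minimal sketch of the cross-domain mixup step described above: each target example is mixed with an auxiliary example drawn from the source class assigned to its target class. The class-assignment map, the Beta(λ) distribution, matched image shapes, and the label handling (treated as in standard mixup) are assumptions, not the authors' exact recipe.

```python
import numpy as np

def xmixup_batch(x_tgt, y_tgt, source_pool, class_map, alpha=0.2):
    """source_pool[c]: array of source images of class c; class_map: target class -> source class."""
    lam = np.random.beta(alpha, alpha)                       # mixup coefficient
    x_src = np.stack([
        source_pool[class_map[int(c)]][np.random.randint(len(source_pool[class_map[int(c)]]))]
        for c in y_tgt
    ])                                                       # one auxiliary source sample per target example
    x_mix = lam * x_tgt + (1.0 - lam) * x_src                # pixel-level mixup across domains
    return x_mix, y_tgt, lam
```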
|
false |
Offline policy selection under Uncertainty
Off-policy selection reinforcement learning Bayesian inference
The presence of uncertainty in policy evaluation significantly complicates the process of policy ranking and selection in real-world settings. We formally consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset. While one can select or rank policies based on point estimates of their policy values or high-confidence intervals, access to the full distribution over one's belief of the policy value enables more flexible selection algorithms under a wider range of downstream evaluation metrics. We propose BayesDICE for estimating this belief distribution in terms of posteriors of distribution correction ratios derived from stochastic constraints (as opposed to explicit likelihood, which is not available). Empirically, BayesDICE is highly competitive to existing state-of-the-art approaches in confidence interval estimation. More importantly, we show how the belief distribution estimated by BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric, and we empirically demonstrate that this selection procedure significantly outperforms existing approaches, such as ranking policies according to mean or high-confidence lower bound value estimates.
|
true |
Transformer protein language models are unsupervised structure learners
proteins language modeling structure prediction unsupervised learning explainable
Unsupervised contact prediction is central to uncovering physical, structural, and functional constraints for protein structure determination and design. For decades, the predominant approach has been to infer evolutionary constraints from a set of related sequences. In the past year, protein language models have emerged as a potential alternative, but performance has fallen short of state-of-the-art approaches in bioinformatics. In this paper we demonstrate that Transformer attention maps learn contacts from the unsupervised language modeling objective. We find the highest capacity models that have been trained to date already outperform a state-of-the-art unsupervised contact prediction pipeline, suggesting these pipelines can be replaced with a single forward pass of an end-to-end model.
|
true |
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors
Vision-Language model deepfake detection visual question answering prompt tuning
Deep generative models can create remarkably photorealistic fake images while raising concerns about misinformation and copyright infringement, known as deepfake threats. Deepfake detection techniques have been developed to distinguish between real and fake images, where the existing methods typically learn classifiers in the image domain or various feature domains. However, the generalizability of deepfake detection against emerging and more advanced generative models remains challenging. In this paper, inspired by the zero-shot advantages of Vision-Language Models (VLMs), we propose a novel approach called AntifakePrompt, using VLMs (e.g., InstructBLIP) and prompt tuning techniques to improve the deepfake detection accuracy over unseen data. We formulate deepfake detection as a visual question answering problem, and tune soft prompts for InstructBLIP to answer the real/fake information of a query image. We conduct full-spectrum experiments on datasets from a diversity of 3 held-in and 20 held-out generative models, covering modern text-to-image generation, image editing and adversarial image attacks. These testing datasets provide useful benchmarks in the realm of deepfake detection for further research. Moreover, results demonstrate that (1) the deepfake detection accuracy can be significantly and consistently improved (from 71.06% to 92.11%, in average accuracy over unseen domains) using pretrained vision-language models with prompt tuning; (2) our superior performance comes at a lower cost in training data and trainable parameters, resulting in an effective and efficient solution for deepfake detection.
|
false |
SEQUENCE-LEVEL FEATURES: HOW GRU AND LSTM CELLS CAPTURE N-GRAMS
GRU LSTM Sequence-level Features N-grams
Modern recurrent neural networks (RNN) such as Gated Recurrent Units (GRU) and Long Short-term Memory (LSTM) have demonstrated impressive results on tasks involving sequential data in practice. Despite continuous efforts on interpreting their behaviors, the exact mechanism underlying their success in capturing sequence-level information has not been thoroughly understood. In this work, we present a study on understanding the essential features captured by GRU/LSTM cells by mathematically expanding and unrolling the hidden states. Based on the expanded and unrolled hidden states, we find that the gating mechanism brings in a type of sequence-level representation, which enables the cells to encode sequence-level features along with token-level features. Specifically, we show that these cells contain sequence-level features similar to those of N-grams. Based on this finding, we also find that replacing the hidden states of the standard cells with N-gram representations does not necessarily degrade performance on the sentiment analysis and language modeling tasks, indicating that such features may play a significant role in GRU/LSTM cells.
|
false |
How to Play the PCA Game
PCA Nash Equilibrium Eigenvector Eigenvalue
The paper EigenGame: PCA as a Nash Equilibrium was published at ICLR 2021. The authors, Ian Gemp, Brian McWilliams, Claire Vernade, and Thore Graepel, introduced a decentralized algorithm for PCA via a game-theoretic analysis by showing that with the right utility functions, PCA is the same as finding the Nash equilibrium. This blog reviews the key ideas for the argument, such as the hierarchy of eigenvectors and objectives for PCA.
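The player utilities at the heart of EigenGame, written here from memory of the paper (normalization conventions may differ): player $i$ controls a unit vector $\hat v_i$, with $M = X^\top X$, and is rewarded for captured variance while being penalized for alignment with the parent players $\hat v_j$, $j < i$.

```latex
u_i\!\left(\hat v_i \mid \hat v_{j<i}\right)
  = \hat v_i^{\top} M \hat v_i
  - \sum_{j < i} \frac{\left(\hat v_i^{\top} M \hat v_j\right)^{2}}{\hat v_j^{\top} M \hat v_j},
\qquad \lVert \hat v_i \rVert = 1 .
```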
|
false |
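The game-theoretic view summarized above can be illustrated with a small numpy sketch of a sequential update, assuming the utility form u_i(v_i) = v_i^T M v_i - sum_{j<i} (v_i^T M v_j)^2 / (v_j^T M v_j) with M = X^T X, as we recall it from the paper; the step sizes and data are illustrative and this is not the authors' reference code.

```python
# Minimal numpy sketch of a sequential EigenGame-style update: each player
# ascends its utility on the unit sphere while being penalized for aligning
# with the players (eigenvector estimates) found before it.
import numpy as np

def eigengame_sequential(X, k=3, steps=1000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    M = X.T @ X / X.shape[0]                    # scaled for a well-behaved step size
    V = []
    for i in range(k):
        v = rng.normal(size=M.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(steps):
            grad = 2 * M @ v
            for u in V:                         # penalty for aligning with earlier players
                grad -= 2 * ((v @ M @ u) / (u @ M @ u)) * (M @ u)
            grad -= (grad @ v) * v              # Riemannian projection onto the sphere
            v = v + lr * grad                   # ascent step on the utility
            v /= np.linalg.norm(v)              # retract back to the sphere
        V.append(v)
    return np.stack(V, axis=1)                  # columns approximate the top-k eigenvectors

X = np.random.default_rng(1).normal(size=(500, 8))
V = eigengame_sequential(X)
# Compare against numpy's eigendecomposition (up to sign); values should be
# close to 1.0 for each column if the runs converge.
w, Q = np.linalg.eigh(X.T @ X)
print(np.abs(np.sum(V * Q[:, ::-1][:, :3], axis=0)))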
Deep Graph Neural Networks with Shallow Subgraph Samplers
Graph Neural Networks Graph Sampling Network Embedding
While Graph Neural Networks (GNNs) are powerful models for learning representations on graphs, most state-of-the-art models show no significant accuracy gain beyond two to three layers. Deep GNNs fundamentally need to address: (1) the expressivity challenge due to oversmoothing, and (2) the computation challenge due to neighborhood explosion. We propose a simple "deep GNN, shallow sampler" design principle to improve both GNN accuracy and efficiency --- to generate the representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph. A properly sampled subgraph may exclude irrelevant or even noisy nodes, while still preserving the critical neighbor features and graph structures. The deep GNN then smooths the informative local signals to enhance feature learning, rather than oversmoothing the global graph signals into just "white noise". We theoretically justify why the combination of deep GNNs with shallow samplers yields the best learning performance. We then propose various sampling algorithms and neural architecture extensions to achieve good empirical results. Experiments on five large graphs show that our models achieve significantly higher accuracy and efficiency compared with the state-of-the-art.
|
false |
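The "deep GNN, shallow sampler" principle in the abstract above can be sketched in a few lines: restrict message passing for a target node to a shallow ego subgraph and then propagate many layers inside that subgraph only. The propagation below is plain symmetric-normalized averaging, a stand-in for a trained GNN, and the features are random; it illustrates the scoping idea, not the paper's models or samplers.

```python
# Minimal sketch: sample a shallow (2-hop) localized subgraph, then apply a
# "deep" propagation inside it to produce the target node's representation.
import networkx as nx
import numpy as np

def deep_gnn_on_shallow_subgraph(G, feats, target, hops=2, layers=8):
    sub = nx.ego_graph(G, target, radius=hops)        # shallow, localized sample
    nodes = list(sub.nodes())
    idx = {n: i for i, n in enumerate(nodes)}
    A = nx.to_numpy_array(sub) + np.eye(len(nodes))    # add self-loops
    d = A.sum(1)
    A_hat = A / np.sqrt(np.outer(d, d))                # D^{-1/2} (A + I) D^{-1/2}
    H = feats[nodes]                                   # features restricted to the subgraph
    for _ in range(layers):                            # deep propagation, local scope only
        H = A_hat @ H
    return H[idx[target]]                              # representation of the target node

G = nx.karate_club_graph()
feats = np.random.default_rng(0).normal(size=(G.number_of_nodes(), 16))
print(deep_gnn_on_shallow_subgraph(G, feats, target=0).shape)  # (16,)
```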
Offline Meta-Reinforcement Learning with Advantage Weighting
offline meta-reinforcement learning meta-learning reinforcement learning maml
This paper introduces the offline meta-reinforcement learning (offline meta-RL) problem setting and proposes an algorithm that performs well in this setting. Offline meta-RL is analogous to the widely successful supervised learning strategy of pre-training a model on a large batch of fixed, pre-collected data (possibly from various tasks) and fine-tuning the model to a new task with relatively little data. That is, in offline meta-RL, we meta-train on fixed, pre-collected data from several tasks and adapt to a new task with a very small amount (less than 5 trajectories) of data from the new task. By nature of being offline, algorithms for offline meta-RL can utilize the largest possible pool of training data available and eliminate potentially unsafe or costly data collection during meta-training. This setting inherits the challenges of offline RL, but it differs significantly because offline RL does not generally consider a) transfer to new tasks or b) limited data from the test task, both of which we face in offline meta-RL. Targeting the offline meta-RL setting, we propose Meta-Actor Critic with Advantage Weighting (MACAW). MACAW is an optimization-based meta-learning algorithm that uses simple, supervised regression objectives for both the inner and outer loop of meta-training. On offline variants of common meta-RL benchmarks, we empirically find that this approach enables fully offline meta-reinforcement learning and achieves notable gains over prior methods.
|
false |
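MACAW's inner and outer loops use simple supervised regression objectives; the sketch below shows an advantage-weighted regression (AWR) style policy loss plus a plain value regression, which is the flavor of objective the abstract refers to. Shapes, the temperature, and the clipping value are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of AWR-style supervised objectives for offline policy learning.
import torch

def awr_policy_loss(log_probs, values, returns, temperature=1.0):
    """log_probs: log pi(a|s) of dataset actions; values: V(s); returns: estimated returns."""
    advantages = returns - values
    weights = torch.exp(advantages / temperature).clamp(max=20.0)  # exponentiated, clipped advantages
    return -(weights.detach() * log_probs).mean()                  # weighted maximum likelihood

def value_loss(values, returns):
    return torch.nn.functional.mse_loss(values, returns)           # plain supervised regression

log_probs = torch.randn(32, requires_grad=True)   # stand-in policy log-probabilities
values = torch.randn(32)
returns = torch.randn(32)
loss = awr_policy_loss(log_probs, values, returns) + value_loss(values, returns)
loss.backward()
```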
Optimistic Policy Optimization with General Function Approximations
general function approximations policy optimization optimistic policy optimization neural networks track record results reinforcement various domains theoretical understanding
Although policy optimization with neural networks has a track record of achieving state-of-the-art results in reinforcement learning on various domains, the theoretical understanding of the computational and sample efficiency of policy optimization remains restricted to linear function approximations with finite-dimensional feature representations, which hinders the design of principled, effective, and efficient algorithms. To this end, we propose an optimistic model-based policy optimization algorithm, which allows general function approximations while incorporating exploration. In the episodic setting, we establish a $\sqrt{T}$-regret that scales polynomially in the eluder dimension of the general model class. Here $T$ is the number of steps taken by the agent. In particular, we specialize such a regret to handle two nonparametric model classes; one based on reproducing kernel Hilbert spaces and another based on overparameterized neural networks.
|
true |
AI Companions Are Not The Solution To Loneliness: Design Choices And Their Drawbacks
Social AI AI governance Ethical AI
As the popularity of social AI grows, so has the number of documented harms associated with its usage. Drawing on Human-Computer Interaction (HCI) and Machine Learning (ML) literature, we frame the harms of AI companions as a $\textit{technological problem}$ and draw direct links between key technical design choices and risks for users. We argue that many of the observed harms are foreseeable and preventable consequences of these choices. In the spirit of $\textit{translational research}$, we offer concrete strategies to mitigate these harms through both regulatory and technical interventions, aiming to make our findings useful and actionable for policymakers and practitioners.
|
false |
Budget-Constrained Learning to Defer for Autoregressive Models
learning-to-defer risk control
The learning to defer (L2D) framework gives a model the choice to defer prediction to an expert based on the model's uncertainty. We assume an L2D setting for sequence outputs, where a small model can defer specific outputs of the whole prediction to a large model, in an effort to interweave both models throughout the prediction. We propose a Learn-then-Test approach to tune a token-level, confidence-based thresholding rejector for pre-trained predictors, with statistical guarantees of staying within a user-defined budget while maximizing accuracy. We use Bayesian optimization to efficiently search the space of thresholds. In the experiments, we also empirically demonstrate that this method can achieve budget control while maintaining the prediction quality of the overall system on text summarization.
|
true |
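The token-level rejector described above can be sketched as a simple thresholding rule on the small model's confidences. The Learn-then-Test calibration (with its statistical guarantees) and the Bayesian optimization over thresholds are simplified here to a plain grid search on held-out data, and the confidence distribution and quality proxy are synthetic assumptions.

```python
# Minimal sketch of a token-level confidence-thresholding rejector: tokens whose
# small-model confidence falls below a threshold are deferred to the large model.
import numpy as np

def deferral_rate(confidences, threshold):
    return float(np.mean(confidences < threshold))   # fraction of tokens sent to the large model

def pick_threshold(confidences, quality, budget, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the threshold that maximizes estimated quality subject to the deferral budget."""
    feasible = [t for t in grid if deferral_rate(confidences, t) <= budget]
    return max(feasible, key=lambda t: quality(t)) if feasible else None

rng = np.random.default_rng(0)
confidences = rng.beta(5, 2, size=10_000)            # held-out small-model token confidences (synthetic)
quality = lambda t: deferral_rate(confidences, t)    # proxy: defer more (up to the budget) -> better
tau = pick_threshold(confidences, quality, budget=0.2)
print(tau, deferral_rate(confidences, tau))
```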
NAS-Bench-ASR: Reproducible Neural Architecture Search for Speech Recognition
NAS ASR Benchmark
Powered by innovations in novel architecture design, noise tolerance techniques and increasing model capacity, Automatic Speech Recognition (ASR) has made giant strides in reducing word-error-rate over the past decade. ASR models are often trained with tens of thousands of hours of high quality speech data to produce state-of-the-art (SOTA) results. Industry-scale ASR model training thus remains computationally heavy and time-consuming, and consequently has attracted little attention in adopting automatic techniques. On the other hand, Neural Architecture Search (NAS) has gained a lot of interest in recent years thanks to its successes in discovering efficient architectures, often outperforming handcrafted alternatives. However, by changing the standard training process into a bi-level optimisation problem, NAS approaches often require significantly more time and computational power compared to single-model training, and at the same time increase the complexity of the overall process. As a result, NAS has been predominantly applied to problems which do not require as extensive training as ASR, and even then reproducibility of NAS algorithms is often problematic. Lately, a number of benchmark datasets have been introduced to address reproducibility issues by providing NAS researchers with information about the performance of different models obtained through exhaustive evaluation. However, these datasets focus mainly on computer vision and NLP tasks and thus suffer from limited coverage of application domains. In order to increase diversity in the existing NAS benchmarks, and at the same time provide a systematic study of the effects of architectural choices for ASR, we release NAS-Bench-ASR – the first NAS benchmark for ASR models. The dataset consists of 8,242 unique models trained on the TIMIT audio dataset for three different target epochs, each starting from three different initializations. The dataset also includes runtime measurements of all the models on a diverse set of hardware platforms. Lastly, we show that identified good cell structures in our search space for TIMIT transfer well to a much larger LibriSpeech dataset.
|
false |
Deep Ensembles for Low-Data Transfer Learning
transfer learning representation learning computer vision ensembles
In the low-data regime, it is difficult to train good supervised models from scratch. Instead, practitioners turn to pre-trained models, leveraging transfer learning. Ensembling is an empirically and theoretically appealing way to construct powerful predictive models, but the predominant approach of training multiple deep networks with different random initialisations collides with the need for transfer via pre-trained weights. In this work, we study different ways of creating ensembles from pre-trained models. We show that the nature of pre-training itself is a performant source of diversity, and propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset. The approach is simple: Use nearest-neighbour accuracy to rank pre-trained models, fine-tune the best ones with a small hyperparameter sweep, and greedily construct an ensemble to minimise validation cross-entropy. When evaluated together with strong baselines on 19 different downstream tasks (the Visual Task Adaptation Benchmark), this achieves state-of-the-art performance at a much lower inference budget, even when selecting from over 2,000 pre-trained models. We also assess our ensembles on ImageNet variants and show improved robustness to distribution shift.
|
false |
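The recipe in the abstract above (rank candidates by nearest-neighbour accuracy, then greedily build an ensemble that minimizes validation cross-entropy) can be sketched as follows. Fine-tuning is elided and candidates are represented directly by their validation probability outputs; the kNN proxy, the dummy data, and the with-replacement greedy selection are simplifying assumptions for illustration.

```python
# Minimal sketch of kNN-based ranking plus greedy ensemble construction.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import log_loss

def knn_score(features, labels):
    """Proxy ranking signal: leave-in kNN accuracy of one candidate's features."""
    return KNeighborsClassifier(n_neighbors=5).fit(features, labels).score(features, labels)

def greedy_ensemble(candidate_probs, val_labels, max_members=5):
    """Greedily add members (with replacement) to minimize validation cross-entropy."""
    chosen, best_probs = [], None
    for _ in range(max_members):
        scored = []
        for i, p in enumerate(candidate_probs):
            mix = p if best_probs is None else (best_probs * len(chosen) + p) / (len(chosen) + 1)
            scored.append((log_loss(val_labels, mix), i, mix))
        loss, i, mix = min(scored)
        chosen.append(i)
        best_probs = mix
    return chosen, best_probs

rng = np.random.default_rng(0)
val_labels = rng.integers(0, 3, size=200)
features = rng.normal(size=(200, 16))                   # one candidate's embeddings of the val set
print(round(knn_score(features, val_labels), 2))        # ranking signal for that candidate
# Dummy candidates: per-model class-probability predictions on the validation set.
candidate_probs = [np.random.default_rng(s).dirichlet(np.ones(3), size=200) for s in range(8)]
members, probs = greedy_ensemble(candidate_probs, val_labels)
print(members, log_loss(val_labels, probs))
```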
Disentangling Representations of Text by Masking Transformers
disentanglement model pruning representation learning transformers
Representations in large language models such as BERT encode a range of features into a single vector, which are predictive in the context of a multitude of downstream tasks. In this paper, we explore whether it is possible to learn disentangled representations by identifying subnetworks in pre-trained models that encode distinct, complementary aspects of the representation. Concretely, we learn binary masks over transformer weights or hidden units to uncover the subset of features that correlate with a specific factor of variation. This sidesteps the need to train a disentangled model from scratch within a particular domain. We evaluate the ability of this method to disentangle representations of syntax and semantics, and sentiment from genre in the context of movie reviews. By combining this method with magnitude pruning we find that we can identify quite sparse subnetworks. Moreover, we find that this disentanglement-via-masking approach performs as well as or better than previously proposed methods based on variational autoencoders and adversarial training.
|
true |
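The masking idea in the abstract above, learning binary masks over hidden units of a frozen pretrained model, can be sketched with a straight-through estimator. The "pretrained" layer below is a randomly initialized stand-in, the labels are dummy, and the sparsity penalty weight is illustrative; the paper's actual masking and pruning setup is not reproduced here.

```python
# Minimal sketch: learn a binary mask over a frozen layer's hidden units with a
# straight-through estimator, while a small head predicts the target factor.
import torch
import torch.nn as nn

class MaskedLayer(nn.Module):
    def __init__(self, layer):
        super().__init__()
        self.layer = layer
        for p in self.layer.parameters():
            p.requires_grad = False               # keep the pretrained weights frozen
        self.mask_logits = nn.Parameter(torch.zeros(layer.out_features))

    def forward(self, x):
        soft = torch.sigmoid(self.mask_logits)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()        # straight-through: hard forward, soft backward
        return self.layer(x) * mask

pretrained = nn.Linear(32, 64)                    # stand-in for a pretrained transformer layer
masked = MaskedLayer(pretrained)
head = nn.Linear(64, 2)                           # e.g. sentiment classifier
opt = torch.optim.Adam(list(masked.parameters()) + list(head.parameters()), lr=1e-2)

x = torch.randn(16, 32)
y = torch.randint(0, 2, (16,))
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(masked(x)), y)
    loss = loss + 1e-3 * torch.sigmoid(masked.mask_logits).sum()   # sparsity pressure on the mask
    loss.backward()
    opt.step()
```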
Adaptive Federated Optimization
Federated learning optimization adaptive optimization distributed optimization
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general non-convex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
|
false |
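The federated adaptive optimizers described above can be sketched as server-side Adam applied to the average client update. Below, clients minimize simple quadratic objectives as stand-ins for real local losses, bias correction is omitted, and all hyperparameters are illustrative; this is a schematic of the FedAdam-style update, not the paper's implementation.

```python
# Minimal sketch of FedAdam-style server optimization: clients run local SGD,
# and the server feeds the (negative) average client delta into an Adam update.
import numpy as np

def client_update(x, A, b, local_steps=5, client_lr=0.05):
    for _ in range(local_steps):
        x = x - client_lr * (A @ x - b)            # local SGD on 0.5 x^T A x - b^T x
    return x

def fed_adam(clients, dim, rounds=50, server_lr=0.1, b1=0.9, b2=0.99, tau=1e-3):
    x = np.zeros(dim)
    m = np.zeros(dim)
    v = np.zeros(dim)
    for _ in range(rounds):
        deltas = [client_update(x.copy(), A, b) - x for A, b in clients]
        g = -np.mean(deltas, axis=0)               # pseudo-gradient = negative average delta
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        x = x - server_lr * m / (np.sqrt(v) + tau) # Adam-style server step
    return x

rng = np.random.default_rng(0)
clients = []
for _ in range(10):                                 # heterogeneous quadratic clients
    Q = rng.normal(size=(5, 5)) / np.sqrt(5)
    clients.append((Q @ Q.T + np.eye(5), rng.normal(size=5)))
print(fed_adam(clients, dim=5))
```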
Adapt-and-Adjust: Overcoming the Long-tail Problem of Multilingual Speech Recognition
speech recognition multilingual long-tail adapter logit adjustments
One crucial challenge of real-world multilingual speech recognition is the long-tailed distribution problem, where some resource-rich languages like English have abundant training data, but a long tail of low-resource languages have varying amounts of limited training data. To overcome the long-tail problem, in this paper, we propose Adapt-and-Adjust (A2), a transformer-based multi-task learning framework for end-to-end multilingual speech recognition. The A2 framework overcomes the long-tail problem via three techniques: (1) exploiting a pretrained multilingual language model (mBERT) to improve the performance of low-resource languages; (2) proposing dual adapters consisting of both language-specific and language-agnostic adaptation with minimal additional parameters; and (3) overcoming the class imbalance, either by imposing class priors in the loss during training or adjusting the logits of the softmax output during inference. Extensive experiments on the CommonVoice corpus show that A2 significantly outperforms conventional approaches.
|
true |
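The third technique above, handling class imbalance with class priors, can be sketched as generic logit adjustment: add the log priors to the logits inside the training loss, or subtract them from the logits at inference (the abstract describes using one or the other). The class counts and the temperature tau below are illustrative, and this is not the paper's exact multilingual ASR formulation.

```python
# Minimal sketch of prior-based logit adjustment for long-tailed classes.
import torch
import torch.nn.functional as F

def prior_adjusted_loss(logits, labels, log_priors, tau=1.0):
    return F.cross_entropy(logits + tau * log_priors, labels)   # priors imposed during training

def adjusted_inference(logits, log_priors, tau=1.0):
    return (logits - tau * log_priors).argmax(dim=-1)           # priors removed at inference

counts = torch.tensor([5000., 300., 40., 5.])                   # long-tailed class counts
log_priors = torch.log(counts / counts.sum())
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
print(prior_adjusted_loss(logits, labels, log_priors))
print(adjusted_inference(logits, log_priors))
```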
Working Memory Attack on LLMs
Working Memory Attack LLM Jailbreak Safety Alignment LLMs Robustness
In-context learning (ICL) has emerged as a powerful capability of large language models (LLMs), enabling task adaptation without parameter updates. However, this capability also introduces potential vulnerabilities that could compromise model safety and security. Drawing inspiration from neuroscience, particularly the concept of working memory limitations, we investigate how these constraints can be exploited in LLMs through ICL. We develop a novel multi-task methodology extending the neuroscience dual-task paradigm to systematically measure the impact of working memory overload. Our experiments demonstrate that progressively increasing task-irrelevant token generation before the \emph{observation task} degrades model performance, providing a quantifiable measure of working memory load. Building on these findings, we present a new attack vector that exploits working memory overload to bypass safety mechanisms in state-of-the-art LLMs, achieving high attack success rates across multiple models. We empirically validate this threat model and show that advanced models such as GPT-4, Claude-3.5 Sonnet, Claude-3 OPUS, Llama-3-70B-Instruct, Gemini-1.0-Pro, and Gemini-1.5-Pro can be successfully jailbroken, with attack success rates of up to 99.99%. Additionally, we demonstrate the transferability of these attacks, showing that higher-capability LLMs can be used to craft working memory overload attacks targeting other models. By expanding our experiments to encompass a broader range of models and by highlighting vulnerabilities in LLMs' ICL, we aim to ensure the development of safer and more reliable AI systems. We have publicly released our jailbreak code and artifacts at this [URL](https://github.com/UNHSAILLab/working-memory-attack-on-llms).
|
false |
Prior Knowledge for Few-shot Learning—Inductive Reasoning and Distribution Calibration
few-shot learning prior knowledge distribution estimation bias correction
Few-shot learning is an important technique that can improve the learning capabilities of machine intelligence and enable practical adaptive applications. Previous researchers apply meta-learning strategies to endow new models with fast-adaptation ability, or leverage transfer learning to alleviate the challenge of data scarcity. Moreover, prior knowledge such as knowledge graphs can also be modeled under the few-shot setting. This post gives an overview of recent works on how prior knowledge can address the problem of few-shot learning, and discusses a simple and efficient few-shot learning approach that estimates novel class distributions inductively from the base classes.
|
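The distribution-estimation idea in the preceding abstract can be illustrated with a small sketch of distribution calibration: borrow Gaussian statistics from the most similar base classes to calibrate a novel class's distribution, sample extra features from it, and fit a simple classifier. This is one concrete instantiation of the idea, with synthetic data standing in for real base/novel class features and with illustrative hyperparameters.

```python
# Minimal sketch of distribution calibration for few-shot classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n_base = 16, 10
base_means = rng.normal(size=(n_base, dim))                       # per-base-class feature means
base_covs = np.stack([np.eye(dim) * rng.uniform(0.5, 1.5) for _ in range(n_base)])

def calibrate(support, k=2, alpha=0.2):
    """Calibrate a novel class's Gaussian using its k nearest base classes."""
    proto = support.mean(axis=0)
    nearest = np.argsort(np.linalg.norm(base_means - proto, axis=1))[:k]
    mean = (base_means[nearest].sum(axis=0) + proto) / (k + 1)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(dim)
    return mean, cov

# One-shot support example per novel class, plus sampled calibrated features.
supports = {c: rng.normal(loc=c, size=(1, dim)) for c in range(2)}
X, y = [], []
for c, s in supports.items():
    mean, cov = calibrate(s)
    sampled = rng.multivariate_normal(mean, cov, size=100)
    X.append(np.vstack([s, sampled]))
    y += [c] * (len(s) + 100)
clf = LogisticRegression(max_iter=1000).fit(np.vstack(X), y)
print(clf.score(np.vstack(X), y))
```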