Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use the training classes to build a model that can generate objects from future (held-out) classes, unknown at training time and thus novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations that lead to out-of-distribution novelty.
|
Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively.
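To make the caching idea concrete, here is a minimal, hypothetical NumPy sketch (not the authors' Wavenet/PixelCNN++ code) of a kernel-size-2 dilated causal convolution layer that keeps a rolling buffer of its past inputs, so each generation step reuses cached values instead of recomputing the whole receptive field; all names and shapes are illustrative.

```python
import numpy as np

class CachedDilatedConv:
    """Kernel-size-2 dilated causal conv layer with a rolling input cache,
    so one generation step costs O(1) instead of re-running the convolution
    over the whole history. Illustrative sketch only."""

    def __init__(self, w_prev, w_curr, dilation):
        self.w_prev, self.w_curr = w_prev, w_curr          # weights for x[t - d] and x[t]
        self.dilation = dilation
        self.cache = [np.zeros(w_curr.shape[0]) for _ in range(dilation)]

    def step(self, x_t):
        x_prev = self.cache.pop(0)                         # input from `dilation` steps ago
        self.cache.append(x_t)                             # remember current input for later reuse
        return np.tanh(x_prev @ self.w_prev + x_t @ self.w_curr)

# Stacking such layers with growing dilations and feeding each generated sample
# back in reproduces naive autoregressive generation without redundant convolutions.
```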
|
We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets.
|
In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen in the hope that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference priors would often be preferred over flat priors if deriving them were generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior.
|
In this paper we address the challenging problem of unsupervised motion flow estimation. Under the assumption that image reconstruction is a super-set of the motion flow estimation problem, we train a convolutional neural network to interpolate adjacent video frames and then compute the motion flow via region-based sensitivity analysis by backpropagation. We postulate that better interpolations should result in better motion flow estimation. We then leverage the modeling power of energy-based generative adversarial networks (EBGANs) to improve interpolations over the standard L2 loss. Preliminary experiments on the KITTI database confirm that better interpolations from EBGANs significantly improve motion flow estimation compared to both hand-crafted features and deep networks relying on standard losses such as L2.
|
We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way.
|
We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates.
|
How to discriminate visual stimuli based on the activity they evoke in sensory neurons is still an open challenge. To measure discriminability power, we search for a neural metric that preserves distances in stimulus space, so that responses to different stimuli are far apart and responses to the same stimulus are close. Here, we show that Restricted Boltzmann Machines (RBMs) provide such a distance-preserving neural metric. Even when learned in an unsupervised way, the RBM-based metric can discriminate stimuli with higher resolution than classical metrics.
|
With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum.
|
Representation learning is one of the foundations of Deep Learning and has enabled substantial improvements in several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable to or better than the state of the art on three different datasets.
|
In generic object recognition tasks, ResNet and its improvements have repeatedly set new record-low error rates. ResNet enables us to make a network deeper by introducing residual learning, and some ResNet improvements achieve higher accuracy by focusing on channels. Thus, network depth and channels are thought to be important for high accuracy. In this paper, in addition to them, we pay attention to the use of multiple models in data-parallel learning, which we refer to as data-parallel multi-model learning. We observed that accuracy increased as the number of concurrently used models increased for some methods, particularly for the combination of PyramidNet and the stochastic depth proposed in the paper. As a result, we confirmed that the proposed methods outperformed the conventional methods: on CIFAR-100, they achieved error rates of 16.13\% and 16.18\%, in contrast to 18.29\% for PyramidNet and 17.18\% for the current state-of-the-art DenseNet-BC.
|
We introduce a novel method to compute a rank $m$ approximation of the inverse of the Hessian matrix, in the distributed regime. By leveraging the differences in gradients and parameters of multiple Workers, we are able to efficiently implement a distributed approximation of the Newton-Raphson method. We also present preliminary results which underline advantages and challenges of second-order methods for large stochastic optimization problems. In particular, our work suggests that novel strategies for combining gradients will provide further information on the loss surface.
|
We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically-timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack.
|
Identifying acoustic events from a continuously streaming audio source is of interest for many applications, including environmental monitoring for basic research. In this scenario, the event classes are not known in advance, nor is it known what distinguishes one class from another. Therefore, an unsupervised feature learning method for exploration of audio data is presented in this paper. It incorporates the following two novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13% better results when used with a classifier and 36% better results when used for clustering.
|
A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments.
|
Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity.
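The abstract does not give the exact parameterization, so the following is only a sketch of one common way to model heteroscedasticity: a hypothetical two-headed regressor that predicts a per-case mean duration and log-variance, trained with the Gaussian negative log-likelihood.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Hypothetical two-headed regressor: a per-case mean and a per-case
    log-variance, so uncertainty is estimated per surgery."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # 0.5 * [ (y - mu)^2 / sigma^2 + log sigma^2 ], constant term dropped
    return 0.5 * (torch.exp(-logvar) * (target - mean) ** 2 + logvar).mean()
```

The predicted variance can then drive a case-specific overbooking margin rather than a single global buffer.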
|
Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by using it as the discriminator in Generative Adversarial Nets (GANs). We make use of the discriminator's good capability for representation learning to increase the predictive quality of the model. We evaluated our approach on the CIFAR-10 and MNIST datasets and observed a significant improvement in classification performance using a simple k-NN method.
|
We treat semantic segmentation where a few pixel-wise labeled samples and a large number of unlabeled samples are available. For this situation we propose a co-segmentation loss which enables us to transfer the knowledge of a few pixel-wise labeled samples to a large number of unlabeled images. In the experiments, we used human-part segmentation with a few pixel-wise labeled images and 1715 unlabeled images, and showed that the proposed co-segmentation loss helped make effective use of unlabeled images.
|
One of the main computational challenges in supporting minimally invasive surgery techniques is the efficient 3D reconstruction of stereo endoscopic or laparoscopic images. In this paper, a Convolutional Neural Network based approach is presented, which does not require any prior knowledge of the image acquisition technique. We have evaluated the approach on a publicly available dataset and compared it to a previous deep neural network approach. The evaluation showed that our approach outperformed the previous method.
|
Deep convolutional neural networks rely on heavily optimized convolution algorithms. Winograd convolutions provide an efficient approach to performing such convolutions. With larger Winograd convolution tiles, the convolution becomes more efficient but less numerically accurate. Here we provide some approaches to mitigating this numerical inaccuracy. We exemplify these approaches by working on a tile much larger than any previously documented: F(9x9, 5x5). Using these approaches, we show that such a tile can be used to train modern networks and provide performance benefits.
|
The "adversarial training" methods have recently been emerging as a promising avenue of research. Broadly speaking these methods achieve efficient training as well as boosted performance via an adversarial choice of data, features, or models. However, since the inception of the Generative Adversarial Nets (GAN),
much of the attention is focussed on adversarial "models", i.e., machines learning by pursuing competing goals.
In this note we investigate the
effectiveness of several (weak) sources of adversarial "data" and "features". In particular we demonstrate:
(a) low precision classifiers can be used as a source of adversarial data-sample closer to the decision boundary
(b) training on these adversarial data-sample can give significant boost to the precision and recall compared to the non-adversarial sample.
We also document the use of these methods for improving the performance of classifiers when only limited (and sometimes no) labeled data is available.
|
We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties.
|
When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site.
|
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It has been observed that an adversary can easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepCloak. By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases robustness against such inputs. Compared with other defensive approaches, DeepCloak is easy to implement and computationally efficient. Experimental results show that DeepCloak can increase the performance of state-of-the-art DNN models against adversarial samples.
|
Learning a better representation with neural networks is a challenging problem, which has been tackled extensively from different perspectives in the past few years. In this work, we focus on learning a representation that could be used for clustering and introduce a novel loss component that substantially improves the quality of the produced clusters, is simple to apply to an arbitrary cost function, and does not require a complicated training procedure.
|
Gatys et al. (2015) recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called \emph{style transfer}. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization~(AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles.
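The AdaIN operation itself is simple enough to state directly; the following PyTorch-style sketch follows the published formula, with the epsilon value and tensor layout being our own choices.

```python
import torch

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: rescale content features so their
    per-channel mean/std match those of the style features.
    Shapes assumed (N, C, H, W). Minimal sketch of the formula."""
    n, c = content.shape[:2]
    c_flat = content.view(n, c, -1)
    s_flat = style.view(n, c, -1)
    c_mean, c_std = c_flat.mean(-1, keepdim=True), c_flat.std(-1, keepdim=True) + eps
    s_mean, s_std = s_flat.mean(-1, keepdim=True), s_flat.std(-1, keepdim=True) + eps
    out = s_std * (c_flat - c_mean) / c_std + s_mean
    return out.view_as(content)
```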
|
In this paper, we propose a new feature extraction technique for program execution logs. First, we automatically extract complex patterns from a program's behaviour graph. Then, we embed these patterns into a continuous space by training an autoencoder. We evaluate the proposed features on a real-world malicious software detection task. We also find that the embedding space captures interpretable structures in the space of pattern parts.
|
Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies in the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by heating the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. While annealing the heated data distribution, we initialize the network at each temperature with the learnt parameters of the previous, higher temperature. We posit a conjecture that learning under continuous annealing in the nonparametric regime is stable, and propose an algorithm as a corollary. In our experiments, the annealed GAN algorithm, dubbed beta-GAN, trained with an unmodified objective function was stable and did not suffer from mode collapse.
|
Recently, the Gaussian Mixture Variational Autoencoder (GMVAE) has been introduced to handle unsupervised clustering (Dilokthanakul et al., 2016). However, the existing formulation requires the introduction of the free bits term into the objective function in order to overcome the effects of the uniform prior imposed on the latent categorical variable. By considering our choice of generative and inference models, we propose a simple variation on the GMVAE that performs well empirically without modifying the variational objective function.
|
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment (RLE), that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added, while maintaining a simple unified interface. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility. A more extensive paper describing our work is available on arXiv.
|
Two recently developed methods, Feedback Alignment (FA) and Direct Feedback Alignment (DFA), have been shown to obtain surprising performance on vision tasks by replacing the traditional backpropagation update with a random feedback update. However, it is still not clear what mechanisms allow learning to happen with these random updates.
In this work we argue that DFA can be viewed as a noisy variant of a layer-wise training method we call Linear Aligned Feedback Systems (LAFS). We support this connection theoretically by comparing the update rules for the two methods. We additionally empirically verify that the random update matrices used in DFA work effectively as readout matrices, and that strong correlations exist between the error vectors used in the DFA and LAFS updates. With this new connection between DFA and LAFS we are able to explain why the "alignment" happens in DFA.
|
Deep learning has led to remarkable advances when applied to problems in which the data distribution does not change over the course of learning. In stark contrast, biological neural networks exhibit continual learning, solve a diversity of tasks simultaneously, and have no clear separation between training and evaluation phase. Furthermore, synapses in biological neurons are not simply real-valued scalars, but possess complex molecular machinery that enable non-trivial learning dynamics. In this study, we take a first step toward bringing this biological complexity into artificial neural networks. We introduce intelligent synapses that are capable of accumulating information over time, and exploiting this information to efficiently protect old memories from being overwritten as new problems are learned. We apply our framework to learning sequences of related classification problems, and show that it dramatically reduces catastrophic forgetting while maintaining computational efficiency.
|
We present Char2Wav, an end-to-end model for speech synthesis. Char2Wav has two components: a reader and a neural vocoder. The reader is an encoder-decoder model with attention. The encoder is a bidirectional recurrent neural network that accepts text or phonemes as inputs, while the decoder is a recurrent neural network (RNN) with attention that produces vocoder acoustic features. Neural vocoder refers to a conditional extension of SampleRNN which generates raw waveform samples from intermediate representations. Unlike traditional models for speech synthesis, Char2Wav learns to produce audio directly from text.
|
MU0 is a deterministic computer that can store data in memory and manipulate it using programs, enabling decision making. Neu0 is a neural computational core modeled around the same principles. We create an ensemble of Neural Networks capable of executing ARM code, and discuss generalizations of our framework. We showcase the advantage of our technique by correctly executing malformed instructions, and discuss efficient memory management techniques.
|
Advances in neural network based classifiers have accelerated the progress of automatic representation learning. Since the emergence of AlexNet, every winning submission of the ImageNet challenge has employed end-to-end representation learning, and due to the utility of good representations for transfer learning, representation learning has become an important task distinct from supervised learning. At present, this distinction is inconsequential, as supervised methods are state-of-the-art in learning transferable representations, which are widely transferred to tasks such as evaluating the quality of generated samples. In this work, however, we demonstrate that supervised learning is limited in its capacity for representation learning. Based on an experimentally validated assumption, we show that the existence of a set of features will hinder the learning of additional features. We also show that the total incentive to learn features in supervised learning is bounded by the entropy of the labels. We hope that our analysis will provide a rigorous motivation for further exploration of other methods for learning robust and transferable representations.
|
Learning in models with discrete latent variables is challenging due to high variance gradient estimators. Generally, approaches have relied on control variates to reduce the variance of the REINFORCE estimator. Recent work (Jang et al. 2016, Maddison et al. 2016) has taken a different approach, introducing a continuous relaxation of discrete variables to produce low-variance, but biased, gradient estimates. In this work, we combine the two approaches through a novel control variate that produces low-variance, unbiased gradient estimates. We present encouraging preliminary results on a toy problem and on learning sigmoid belief networks.
|
There is a practically unlimited amount of natural language data available. Still, recent work in text comprehension has focused on datasets which are small relative to current computing possibilities. This article makes a case for the community to move to larger data and offers the BookTest dataset as a step in that direction.
|
We propose regularizing neural networks by penalizing low-entropy output distributions. We show that penalizing low-entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. We connect our confidence penalty to label smoothing through the direction of the KL divergence between the network's output distribution and the uniform distribution. We exhaustively evaluate our proposed confidence penalty and label smoothing (uniform and unigram) on 6 common benchmarks: image classification (MNIST and CIFAR-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and our confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyper-parameters.
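A minimal sketch of the confidence penalty (the weight beta below is a hypothetical hyper-parameter): cross-entropy minus a scaled entropy of the output distribution, so low-entropy (over-confident) predictions are penalized.

```python
import torch
import torch.nn.functional as F

def loss_with_confidence_penalty(logits, targets, beta=0.1):
    """Cross-entropy plus a penalty on low-entropy output distributions."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    return ce - beta * entropy   # subtracting entropy penalizes over-confident outputs
```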
|
Machine learning models are often used at test-time subject to constraints and trade-offs not present at training-time. For example, a computer vision model operating on an embedded device may need to perform real-time inference, or a translation model operating on a cell phone may wish to bound its average compute time in order to be power-efficient. In this work we describe a mixture-of-experts model and show how to change its test-time resource-usage on a per-input basis using reinforcement learning. We test our method on a small MNIST-based example.
|
We give a procedure for explicitly computing the complete preimage of activities of a layer in a rectifier network with fully connected layers, from knowledge of the weights in the network. The most general characterization of preimages is as piecewise linear manifolds in the input space with possibly multiple branches. This work therefore complements previous demonstrations of preimages obtained by heuristic optimization and regularization algorithms (Mahendran & Vedaldi, 2015; 2016). We are presently empirically evaluating the procedure and its ability to extract complete preimages as well as the general structure of preimage manifolds.
ICLR 2017 CONFERENCE TRACK SUBMISSION: https://openreview.net/forum?id=HJcLcw9xg&noteId=HJcLcw9xg
|
We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. Recently, some studies handle multiple modalities on deep generative models. However, these models typically assume that modalities are forced to have a conditioned relation, i.e., we can only generate modalities in one direction. To achieve our objective, we should extract a joint representation that captures high-level concepts among all modalities and through which we can exchange them bi-directionally. As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on joint representation. In other words, it models a joint distribution of modalities. Furthermore, to be able to generate missing modalities from the remaining modalities properly, we develop an additional method, JMVAE-kl, that is trained by reducing the divergence between JMVAE's encoder and prepared networks of respective modalities. Our experiments show that JMVAE can generate multiple modalities bi-directionally.
|
Retrosynthesis is a technique to plan the chemical synthesis of organic molecules, for example drugs, agro- and fine chemicals. In retrosynthesis, a search tree is built by analysing molecules recursively and dissecting them into simpler molecular building blocks until one obtains a set of known building blocks. The search space is intractably large, and it is difficult to determine the value of retrosynthetic positions. Here, we propose to model retrosynthesis as a Markov Decision Process. In combination with a Deep Neural Network policy trained on 5.5 million reactions, Monte Carlo Tree Search (MCTS) can be used to evaluate positions. In exploratory studies, we demonstrate that MCTS with neural network policies outperforms the traditionally used best-first search with hand-coded heuristics.
|
We present a model that learns active learning algorithms via metalearning. For each metatask, our model jointly learns: a data representation, an item selection heuristic, and a one-shot classifier. Our model uses the item selection heuristic to construct a labeled support set for the one-shot classifier. Using metatasks based on the Omniglot and MovieLens datasets, we show that our model performs well in synthetic and practical settings.
|
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Mean Teacher achieves error rate 4.35% on SVHN with 250 labels, better than Temporal Ensembling does with 1000 labels.
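The weight-averaging step is an exponential moving average of the student's parameters applied after each optimizer update; below is a minimal PyTorch-style sketch (the decay value is illustrative, and the consistency loss between student and teacher predictions is omitted).

```python
import copy
import torch

def make_teacher(student):
    """Teacher starts as a copy of the student and is never trained by gradients."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def update_teacher(teacher, student, decay=0.999):
    """Exponential moving average of student weights, run after each optimizer step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1 - decay)
```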
|
Traditionally, classifying large hierarchical label sets with more than 10000 distinct traces can only be achieved with flattened labels. Although flattening the labels is feasible, it discards the hierarchical information in the labels. Hierarchical models like the HSVM of \cite{vural2004hierarchical} become impossible to train because of the sheer number of SVMs in the whole architecture. We developed a hierarchical architecture based on neural networks that is simple to train. Also, we derived an inference algorithm that can efficiently infer the MAP (maximum a posteriori) trace, guaranteed by our theorems. Furthermore, the complexity of the model is only $O(n^2)$ compared to $O(n^h)$ in a flattened model, where $h$ is the height of the hierarchy.
|
The application of machine learning to clinical data from Electronic Health Records is limited by the scarcity of meaningful labels. Here we present initial results on the application of transfer learning to this problem. We explore the transfer of knowledge from source tasks in which training labels are plentiful but of limited clinical value to more meaningful target tasks that have few labels.
|
We present an empirical study of the use of RNN generative models for stochastic optimization in the context of de novo drug design. We study different kinds of architectures and find models that can generate molecules with higher values than those seen in the training set. Our results suggest that we can improve traditional stochastic optimizers, which rely on random perturbations or random sampling, by using generative models trained on unlabeled data to perform knowledge-driven optimization.
|
Efficient simulation of the Navier-Stokes equations for fluid flow is a long standing problem in applied mathematics, for which state-of-the-art methods require large compute resources. In this work, we propose a data-driven approach that leverages the approximation power of deep-learning with the precision of standard solvers to obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator splitting method, in which a large linear system with many free-parameters must be solved. We use a Convolutional Network with a highly tailored architecture, trained using a novel unsupervised learning framework to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties.
|
The idea of style transfer has largely only been explored in image-based tasks, which we attribute in part to the specific nature of loss functions used for style transfer. We propose a general formulation of style transfer as an extension of generative adversarial networks, by using a discriminator to regularize a generator with an otherwise separate loss function. We apply our approach to the task of learning to play chess in the style of a specific player, and present empirical evidence for the viability of our approach.
|
We investigate generative adversarial training methods to learn a transition operator for a Markov chain, where the goal is to match its stationary distribution to a target data distribution. We propose a novel training procedure that avoids sampling directly from the stationary distribution, while still being capable of reaching the target distribution asymptotically. The model can start from random noise, is likelihood-free, and is able to generate multiple distinct samples during a single run. Preliminary experimental results show that the chain can generate high quality samples when it approaches its stationary distribution, even with smaller architectures than those traditionally considered for Generative Adversarial Nets.
|
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present two classes of attacks on the VAE-GAN architecture and demonstrate them against networks trained on MNIST, SVHN, and CelebA. Our first attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our second attack moves beyond relying on the standard loss for computing the gradient and directly optimizes against differences in source and target latent representations. We additionally present an interesting visualization, which gives insight into how adversarial examples appear in generative models.
|
We use empirical methods to argue that deep neural networks (DNNs) do not achieve their performance by \textit{memorizing} training data, in spite of overly-expressive model architectures.
Instead, they learn a simple available hypothesis that fits the finite data samples.
In support of this view, we establish that there are qualitative differences when learning noise vs.~natural datasets, showing that: (1) more capacity is needed to fit noise, (2) time to convergence is longer for random labels, but \emph{shorter} for random inputs, and (3) DNNs trained on real data examples learn simpler functions than when trained with noise data, as measured by the sharpness of the loss function at convergence.
Finally, we demonstrate that for appropriately tuned explicit regularization, e.g.~dropout, we can degrade DNN training performance on noise datasets without compromising generalization on real data.
|
Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly on raw inputs such as image pixels. In this paper we present a novel study of adversarial attacks on deep reinforcement learning policies. We compare the effectiveness of attacks using adversarial examples vs. random noise. We present a novel method, based on the value function, for reducing the number of times adversarial examples need to be injected for a successful attack. We further explore how re-training on random noise and FGSM perturbations affects the resilience against adversarial examples.
|
Effective clustering can be achieved by mapping the input to an embedded space rather than clustering the raw data itself. However, there is limited focus on transformation methods that improve clustering accuracy. In this paper, we introduce Neural Clustering, a simple yet effective unsupervised model that projects data onto an embedded space in which intermediate layers of a deep autoencoder are concatenated to generate high-dimensional representations. Optimization of the autoencoder via reconstruction error allows the layers in the network to learn semantic representations of different classes of data. Our experimental results show significant improvements over other models and robustness across different kinds of datasets.
|
domain adaptation
|
One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential tasks has been in the form of distillation based learning wherein a single student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large task-specific (expert) networks which require extensive training.
We propose a simple yet efficient multi-task learning framework which solves multiple goal-directed tasks in an online or active learning setup without the need for expert supervision.
|
The vast majority of metric learning approaches are designed to be applied to data described by feature vectors, with some notable exceptions such as time series, trees, or graphs. The objective of this paper is to propose metric learning algorithms that consider (multi-)relational data. The proposed approach benefits from both the topological structure of the data and supervised labels.
|
We propose a neural network-based technique for enhancing the quality of audio signals such as speech or music by transforming inputs encoded at low sampling rates into higher-quality signals with an increased resolution in the time domain. This amounts to generating the missing samples within the low-resolution signal in a process akin to image super-resolution. On standard speech and music datasets, this approach outperforms baselines at 2x, 4x, and 6x upscaling ratios. The method has practical applications in telephony, compression, and text-to-speech generation; it can also be used to improve the scalability of recently-proposed generative models of audio.
|
Machine learning methods in general and Deep Neural Networks in particular have been shown to be vulnerable to adversarial perturbations. So far this phenomenon has mainly been studied in the context of whole-image classification. In this contribution, we analyse how adversarial perturbations can affect the task of semantic segmentation. We show how existing adversarial attacks can be transferred to this task and that it is possible to create imperceptible adversarial perturbations that lead a deep network to misclassify almost all pixels of a chosen class while leaving network prediction nearly unchanged outside this class.
|
Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a noisy and impoverished signal for end-to-end optimization. To augment reward, we consider self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. Self-supervised pre-training improves the data efficiency and returns of end-to-end reinforcement learning on Atari.
|
Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. Our method uses linear classifiers, referred to as "probes", where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.
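A hypothetical PyTorch sketch of such a probe: a linear classifier trained on the detached activations of one intermediate layer, so its gradients never reach the base model (how the activations are captured, e.g. via a forward hook, is left out).

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Linear classifier over the frozen activations of one intermediate layer."""
    def __init__(self, hidden_dim, n_classes):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, hidden_activations):
        # detach(): the probe can never influence the training of the base model
        return self.fc(hidden_activations.detach())

# probe = LinearProbe(512, 10)
# logits = probe(intermediate_features)       # features captured e.g. with a forward hook
# loss = nn.functional.cross_entropy(logits, labels)
```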
|
In this work, we study topical behavior at a large scale. Both the temporal and the spatial relationships of the behavior are explored with deep learning architectures combining the recurrent neural network (RNN) and the convolutional neural network (CNN). To make the behavioral data appropriate for spatial learning in the CNN, several reduction steps are taken to form the topical metrics and place them homogeneously like pixels in images. The experimental results show both temporal and spatial gains when compared against a multilayer perceptron (MLP) network. A new learning framework called spatially connected convolutional networks (SCCN) is introduced to better predict the behavior.
|
We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs. We thus propose a new weight initialization by correcting for the influence of dropout rates and an arbitrary nonlinearity's influence on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer's incoming distribution with some inputs dropped, the variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization's variance moving averages with dropout off and obtain state of the art on CIFAR-10 and CIFAR-100 without data augmentation.
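A minimal sketch of the final recalibration step, assuming a standard PyTorch model (module types and the loader name are placeholders): after training, the BatchNorm running averages are re-estimated with dropout switched off.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def refresh_bn_stats(model, data_loader):
    """Re-estimate BatchNorm running statistics with dropout disabled."""
    model.train()                                   # BN uses batch stats and updates running averages
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.eval()                                # turn dropout off
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()                 # restart the moving averages
    for x, _ in data_loader:
        model(x)                                    # forward passes only; no gradients needed
```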
|
The method introduced in this paper aims at helping computer vision practitioners faced with an overfit problem. The idea is to replace, in a 3-branch ResNet, the standard summation of residual branches by a stochastic affine combination. The largest tested model improves on the best single shot published result on CIFAR-10 by reaching 2.86% test error. Code is available at https://github.com/xgastaldi/shake-shake
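For illustration only (the published method also randomizes the coefficient used in the backward pass), the forward rule replaces the plain sum of the two residual branches with a random convex combination per training sample.

```python
import torch

def shake_shake_sum(x, branch1_out, branch2_out, training):
    """Stochastic affine combination of two residual branches (forward pass only)."""
    if training:
        alpha = torch.rand(x.size(0), 1, 1, 1, device=x.device)  # one alpha per sample
    else:
        alpha = 0.5                                                # expectation at test time
    return x + alpha * branch1_out + (1 - alpha) * branch2_out
```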
|
We introduce a novel framework for clustering that combines generalized EM with neural networks and can be implemented as an end-to-end differentiable recurrent neural network. It learns its statistical model directly from the data and can represent complex non-linear dependencies between inputs. We apply our framework to a perceptual grouping task and empirically verify that it yields the intended behavior as a proof of concept.
|
Word embedding models offer continuous vector representations that can capture rich contextual semantics based on their word co-occurrence patterns. While these word vectors can provide very effective features used in many NLP tasks such as clustering similar words and inferring learning relationships, many challenges and open research questions remain. In this paper, we propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space leveraging carefully generated synthetic data points. This generative process is inspired by the observation that a variety of linguistic relationships is captured by simple linear operations in embedded space. We demonstrate that our approach can lead to substantial improvements in recovering embeddings of local neighborhoods.
|
This note describes two simple techniques to stabilize the training of Generative Adversarial Networks (GANs) on multimodal data. First, we propose a covering initialization for the generator. This initialization pre-trains the generator to match the empirical mean and covariance of its samples with those of the real training data. Second, we propose using multimodal input noise distributions. Our experiments reveal that the joint use of these two simple techniques stabilizes GAN training, and produces generators with a richer diversity of samples. Our code is available at http://pastebin.com/GmHxL0e8.
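A sketch of what such a covering pre-training objective could look like (the exact loss used by the authors may differ): match the empirical mean and covariance of a generated batch to those of a real batch before adversarial training starts.

```python
import torch

def moment_matching_loss(fake_batch, real_batch):
    """Pre-training objective for the generator: match empirical mean and covariance."""
    def mean_and_cov(x):
        x = x.flatten(1)
        mu = x.mean(dim=0)
        xc = x - mu
        cov = xc.t() @ xc / (x.size(0) - 1)
        return mu, cov
    mu_f, cov_f = mean_and_cov(fake_batch)
    mu_r, cov_r = mean_and_cov(real_batch)
    return (mu_f - mu_r).pow(2).sum() + (cov_f - cov_r).pow(2).sum()
```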
|
The concept of synthetic gradients introduced by Jaderberg et al. (2016) provides an avant-garde framework for asynchronous learning of neural networks. Their model, however, has a weakness in its construction, because the structure of their synthetic gradient has little relation to the objective function of the target task. In this paper we introduce virtual forward-backward networks (VFBN). VFBN is a model that produces synthetic gradients whose structure is analogous to the actual gradient of the objective function. VFBN is the first of its kind that succeeds in decoupling deep networks like ResNet-110 (He et al., 2016) without compromising performance.
|
The size of neural network models that deal with sparse inputs and outputs is often dominated by the dimensionality of those inputs and outputs. Large models with high-dimensional inputs and outputs are difficult to train due to the limited memory of graphical processing units, and difficult to deploy on mobile devices with limited hardware. To address these difficulties, we propose Bloom embeddings, a compression technique that can be applied to the input and output of neural network models dealing with sparse high-dimensional binary-coded instances. Bloom embeddings are computationally efficient, and do not seriously compromise the accuracy of the model up to 1/5 compression ratios. In some cases, they even improve over the original accuracy, with relative increases up to 12%. We evaluate Bloom embeddings on 7 data sets and compare it against 4 alternative methods, obtaining favorable results.
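In spirit, the technique hashes each sparse feature id with several hash functions into a small shared table and combines the looked-up rows; the sketch below uses ad-hoc multiplicative hashes purely for illustration, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class BloomEmbedding(nn.Module):
    """Compressed embedding for sparse high-dimensional ids: each id is hashed by
    k hash functions into a small table and the k rows are summed. Hash functions
    here are placeholders for illustration."""
    def __init__(self, n_buckets, dim, k=3):
        super().__init__()
        self.table = nn.Embedding(n_buckets, dim)
        self.n_buckets, self.k = n_buckets, k

    def forward(self, ids):                       # ids: LongTensor of raw feature ids
        out = 0
        for seed in range(self.k):
            bucket = (ids * 2654435761 + seed * 40503) % self.n_buckets
            out = out + self.table(bucket)
        return out
```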
|
We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city \mbox{coordinates}, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to $100$ nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.
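The training signal can be sketched as a standard REINFORCE loss with negative tour length as the reward (tensor shapes and the baseline are assumptions; the pointer-network policy itself is omitted).

```python
import torch

def reinforce_loss(log_probs, tours, coords, baseline):
    """Policy-gradient loss with negative tour length as the reward.
    log_probs: (B,) summed log-probabilities of the sampled permutations;
    tours: (B, n) city indices; coords: (B, n, 2). Sketch only."""
    gathered = torch.gather(coords, 1, tours.unsqueeze(-1).expand(-1, -1, 2))
    rolled = torch.roll(gathered, shifts=-1, dims=1)          # next city (wrapping around)
    tour_len = (gathered - rolled).norm(dim=-1).sum(dim=1)    # total closed-tour length
    reward = -tour_len
    return -((reward - baseline).detach() * log_probs).mean()
```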
|
We introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. Both algorithms also yield a tractable and explicit empowerment measure, which is useful for empowerment maximizing agents. Furthermore, they scale well with function approximation and we demonstrate their applicability on a range of tasks.
|
Convolutional Neural Networks (CNNs) are compute-intensive, which limits their application on mobile devices. Their energy is dominated by the number of multiplies needed to perform the convolutions. Winograd's minimal filtering algorithm (Lavin (2015)) and network pruning (Han et al. (2015)) reduce the operation count. Unfortunately, these two methods cannot be combined, because applying the Winograd transform fills in the sparsity in both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we prune the weights in the "Winograd domain" (after the transform) to exploit static weight sparsity. Second, we move the ReLU operation into the "Winograd domain" to improve the sparsity of the transformed activations. On CIFAR-10, our method reduces the number of multiplications in the VGG-nagadomi model by 10.2x with no loss of accuracy.
|
Feature representation for clustering purposes consists of building an explicit or implicit mapping of the input space onto a feature space that is easier to cluster or classify. This paper relies upon an adversarial auto-encoder as a means of building a code space of low dimensionality suitable for clustering. We impose a tunable Gaussian mixture prior over that space, allowing for a simultaneous optimization scheme. We arrive at competitive unsupervised classification results on hand-written digit images (MNIST), which are customarily classified within a supervised framework.
|
This work presents evaluation of several approaches for unsupervised mapping of raw sensor data from Volvo trucks into low-dimensional representation. The overall goal is to extract general features which are suitable for more than one task. Comparison of techniques based on t-distributed stochastic neighbor embedding (t-SNE) and convolutional autoencoders (CAE) is performed in a supervised fashion over 74 different 1-vs-Rest tasks using random forest. Multiple distance metrics for t-SNE and multiple architectures for CAE were considered. The results show that t-SNE is most effective for 2D and 3D, while CAE could be recommended for 10D representations. Fine-tuning the best convolutional architecture improved low-dimensional representation to the point where it slightly outperformed the original data representation.
|
Recurrent neural networks can be trained to describe images with natural language, but it has been observed that they generalize poorly to new scenes at test time.
Here we provide an experimental framework to quantify their generalization to unseen compositions. By describing images using short structured representations, we tease apart and evaluate separately two types of generalization: (1) generalization to new images of similar scenes, and (2) generalization to unseen compositions of known entities. We quantify these two types of generalization by a large-scale experiment on the MS-COCO dataset with a state-of-the-art recurrent network, and compare to a baseline structured prediction model on top of a deep network. We find that a state-of-the-art image captioning approach is largely "blind" to new combinations of known entities (~2.3% precision@1), and achieves statistically similar precision@1 to that of a considerably simpler structured-prediction model with much smaller capacity. We therefore advocate using compositional generalization metrics to evaluate vision and language models, since generalizing to new combinations of known entities is key for understanding complex real data.
|
Common approaches to text categorization essentially rely either on n-gram counts or on word embeddings. This presents important difficulties in highly dynamic or quickly-interacting environments, where the appearance of new words and/or varied misspellings is the norm. To better deal with these issues, we propose to use the error signal of class-based language models as input to text classification algorithms. In particular, we train a next-character prediction model for any given class, and then exploit the error of such class-based models to inform a neural network classifier. This way, we shift from the 'ability to describe' seen documents to the 'ability to predict' unseen content. Preliminary studies using out-of-vocabulary splits from abusive tweet data show promising results, outperforming competitive text categorization strategies by 4-11%.
|
We propose to use an ensemble of diverse specialists, where speciality is defined according to the confusion matrix. Indeed, we observed that for adversarial instances originating from a given class, labeling tends to fall into a small subset of (incorrect) classes. Therefore, we argue that an ensemble of specialists should be better able to identify and reject fooling instances, with a high entropy (i.e., disagreement) over the decisions in the presence of adversaries. Experimental results confirm this interpretation, opening a way to make the system more robust to adversarial examples through a rejection mechanism, rather than trying to classify them properly at any cost.
|
Large Long Short-Term Memory (LSTM) networks have tens of millions of parameters and are very expensive to train. We present two simple ways of reducing the number of parameters in LSTM networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs and states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to state-of-the-art perplexity. On the One Billion Word Benchmark we improve single-model perplexity down to 24.29.
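The first technique can be sketched as a rank-r bottleneck inside an LSTM cell: the large 4n x (n+m) weight matrix becomes the product of two much smaller matrices. The parameterization below is our illustration, not necessarily the paper's exact one.

```python
import torch
import torch.nn as nn

class FactorizedLSTMCell(nn.Module):
    """LSTM cell whose gate matrix is factorized through a rank-r bottleneck."""
    def __init__(self, input_size, hidden_size, rank):
        super().__init__()
        self.proj = nn.Linear(input_size + hidden_size, rank, bias=False)  # small factor: r x (n+m)
        self.expand = nn.Linear(rank, 4 * hidden_size)                      # small factor: 4n x r
        self.hidden_size = hidden_size

    def forward(self, x, state):
        h, c = state
        gates = self.expand(self.proj(torch.cat([x, h], dim=-1)))
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```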
|
The artistic style of a painting is a subtle aesthetic judgment used by art historians for grouping and classifying artwork. The neural style algorithm introduced by Gatys et al. (2016) substantially succeeds in image style transfer, the task of merging the style of one image with the content of another. This work investigates the effectiveness of a style representation derived from the neural style algorithm for classifying paintings according to their artistic style.
|
We address the problem of online model adaptation when learning representations from non-stationary data streams. For now, we focus on single hidden-layer sparse linear autoencoders (i.e. sparse dictionary learning), although in the future, the proposed approach can be extended naturally to general multi-layer autoencoders and supervised models. We propose a simple but effective online model-selection, based on alternating-minimization scheme, which involves “birth” (addition of new elements) and “death” (removal, via l1/l2 group sparsity) of hidden units representing dictionary elements, in response to changing inputs; we draw inspiration from the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with better adaptation to new environments. Empirical evaluation on both real-life and synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art non-adaptive online sparse coding of Mairal et al. (2009) in the presence of non-stationary data, especially when dictionaries are sparse.
|
Domain adversarial approaches have been at the core of many recent unsupervised domain adaptation algorithms. However, each new algorithm is presented independently with limited or no connections mentioned across the works. Instead, in this work we propose a unified view of adversarial adaptation methods. We show how to describe a variety of state-of-the-art adaptation methods within our framework and furthermore use our generalized view in order to better understand the similarities and differences between these recent approaches. In turn, this framework facilitates the development of new adaptation methods through modeling choices that combine the desirable properties of multiple existing methods. In this way, we propose a novel adversarial adaptation method that is effective yet considerably simpler than other competing methods. We demonstrate the promise of our approach by achieving state-of-the-art unsupervised adaptation results on the standard Office dataset.
|
Automatically detecting sound units of humpback whales in complex, time-varying background noise is a current challenge for scientists. In this paper, we explore the applicability of Convolutional Neural Networks (CNNs) to this task. In the evaluation stage, we present six binary classification experiments for whale sound detection against different background noise types (e.g., rain, wind). Compared to classical FFT-based representations such as spectrograms, we show that image-based pretrained CNN features yield higher performance in classifying whale sounds against background noise.
|
In this paper, we describe a novel approach to generating conference calls-for-papers using Natural Language Processing and a Long Short-Term Memory network. The approach has been successfully evaluated on a publicly available dataset.
|
Natural language generation plays a critical role in spoken dialogue systems. We present a new approach to natural language generation for task-oriented dialogue using recurrent neural networks in an encoder-decoder framework. In contrast to previous work, our model uses both lexicalized and delexicalized components, i.e., slot-value pairs for dialogue acts, with slots and corresponding values aligned together. This allows our model to learn from all available data, including the slot-value pairing, rather than being restricted to delexicalized slots. We show that this helps our model generate more natural sentences with better grammar. We further improve our model's performance by transferring weights learnt from a pretrained sentence auto-encoder. Human evaluation of our best-performing model indicates that it generates sentences which users find more appealing.
|
Pl@ntNet is a large-scale participatory platform and information system dedicated to the production of botanical data through image-based plant identification. In June 2015, Pl@ntNet mobile front-ends moved from classical hand-crafted visual features to deep-learning based image representations. This paper gives an overview of today's Pl@ntNet architecture and discusses how the introduction of convolutional neural networks has improved the whole workflow over the years.
|
Sum-Product Networks (SPNs) are deep density estimators allowing exact and tractable inference. While up to now SPNs have been employed as black-box inference machines, we exploit them as feature extractors for unsupervised Representation Learning. Representations learned by SPNs are rich probabilistic and hierarchical part-based features. SPNs converted into Max-Product Networks (MPNs) provide a way to decode these representations back to the original input space. In extensive experiments, SPN and MPN encoding and decoding schemes prove highly competitive for Multi-Label Classification tasks.
|
We describe a method for learning word embeddings with data-dependent dimensionality. Our Infinite Skip-Gram (iSG) and Infinite Continuous Bag-of-Words (iCBOW) are nonparametric analogs of Mikolov et al.'s (2013) well-known 'word2vec' models. Vectors are made infinite dimensional by employing techniques used by Cote & Larochelle (2016) to define a RBM with an infinite number of hidden units. We show qualitatively and quantitatively that the iSG and iCBOW are competitive with their fixed-dimension counterparts while having the ability to infer the appropriate capacity of each word representation.
|
In this work we perform outlier detection using ensembles of neural networks obtained by variational approximation of the posterior in a Bayesian neural network setting. The variational parameters are obtained by sampling from the true posterior by gradient descent. We show our outlier detection results are comparable to those obtained using other efficient ensembling methods.
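A minimal sketch of the detection rule, under the assumption that several networks have been sampled from the approximate posterior and that each maps a single observation to class logits:

```python
# Flag an input as an outlier when the ensemble's predictive entropy is high.
import torch

def outlier_score(models, x):
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])  # (S, C)
    mean = probs.mean(dim=0)
    return -(mean * torch.log(mean + 1e-12)).sum().item()  # predictive entropy

# usage: outlier_score(models, x) > threshold  ->  treat x as out-of-distribution
```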
|
The trend of a time series characterizes its intermediate upward and downward patterns. Learning and forecasting trends in time series data play an important role in many real applications, ranging from resource allocation in data centers to load scheduling in smart grids. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel hybrid neural network based learning approach over time series and the associated trend sequence. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of the time series and uses a long short-term memory recurrent neural network (LSTM) to capture the sequential dependency in historical trend evolution. Preliminary experimental results demonstrate the advantage of TreNet over a cascade of CNN and LSTM, CNN alone, LSTM alone, a Hidden Markov Model method, and various kernel-based baselines on real datasets.
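A rough sketch of such a hybrid, with illustrative layer sizes that are not the paper's configuration: a 1-D CNN encodes the local raw window of the series, an LSTM encodes the historical trend sequence (duration, slope), and the two codes are fused for prediction.

```python
# CNN branch over the raw local window + LSTM branch over the trend sequence.
import torch
import torch.nn as nn

class TreNetSketch(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(32 + hidden, 2)   # predicts (duration, slope)

    def forward(self, raw_window, trend_seq):
        # raw_window: (batch, 1, window); trend_seq: (batch, steps, 2)
        c = self.cnn(raw_window).squeeze(-1)    # (batch, 32)
        _, (h, _) = self.lstm(trend_seq)        # h: (1, batch, hidden)
        return self.head(torch.cat([c, h[-1]], dim=-1))
```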
|
Generative adversarial networks (GANs) transform latent vectors into visually plausible images. It is generally thought that the original GAN formulation gives no out-of-the-box method to reverse the mapping, projecting images back into latent space. We introduce a simple, gradient-based technique called stochastic clipping. In experiments, for images generated by the GAN, we exactly recover their latent vector pre-images 100% of the time. Additional experiments demonstrate that this method is robust to noise. Finally, we show that even for unseen images, our method appears to recover unique encodings.
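A hedged sketch of the inversion loop, assuming the generator maps latents in $[-1, 1]^d$ to images; the optimizer choice and step counts are illustrative:

```python
# Recover a latent pre-image by gradient descent on the reconstruction error,
# resetting out-of-range latent components to fresh uniform samples
# ("stochastic clipping") instead of saturating them at the boundary.
import torch

def invert_with_stochastic_clipping(G, x, dim, steps=1000, lr=0.01):
    z = torch.empty(1, dim).uniform_(-1, 1).requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
        with torch.no_grad():
            out_of_range = (z < -1) | (z > 1)
            z[out_of_range] = torch.empty(int(out_of_range.sum())).uniform_(-1, 1)
    return z.detach()
```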
|
Improving inference speed and accuracy is one of the main objectives of current deep learning-related research. In this paper, we introduce our approach using a middle output layer for this purpose. From a feasibility study using Inception-v4, we found that our approach has the potential to reduce the average inference time while increasing inference accuracy.
|
In this paper, we address the issue of training Recurrent Neural Networks with binary weights, introduce a novel Contextualized Discretization (CD) framework, and showcase its effectiveness across multiple RNN architectures and two disparate tasks. We also propose a modified GRU architecture that allows harnessing the CD method and reclaims the exclusive usage of weights in $\{-1, 1\}$, which in turn reduces the number of power-of-two bit multiplications from $O(n^3)$ to $O(n^2)$.
|
Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at http://rll.berkeley.edu/adversarial .
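As a sketch of one such crafting technique (an FGSM-style perturbation, assuming the policy outputs action logits for a batched observation and that its own greedy action serves as the label):

```python
# Perturb the raw observation in the direction that increases the policy's
# loss on its own chosen action.
import torch
import torch.nn.functional as F

def fgsm_observation(policy, obs, epsilon):
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)                  # shape (batch, n_actions)
    action = logits.argmax(dim=-1)        # the policy's greedy action
    loss = F.cross_entropy(logits, action)
    loss.backward()
    return (obs + epsilon * obs.grad.sign()).detach()
```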
|
While deep neural networks have achieved state-of-the-art performance on many tasks across varied domains, they still remain black boxes whose inner workings are hard to interpret and understand. In this paper, we develop a novel method for efficiently capturing the behaviour of deep neural networks using kernels. In particular, we construct a hierarchy of increasingly complex kernels that encode individual hidden layers of the network. Furthermore, we discuss how our framework motivates a novel supervised weight initialization method that discovers highly discriminative features already at initialization.
|
Dance Dance Revolution (DDR) is a popular rhythm-based video game. Players perform steps on a dance platform in synchronization with music as directed by on-screen step charts. While many step charts are available in standardized packs, users may grow tired of existing charts, or wish to dance to a song for which no chart exists. We introduce the task of learning to choreograph. Given a raw audio track, the goal is to produce a new step chart. This task decomposes naturally into two subtasks: deciding when to place steps and deciding which steps to select. We demonstrate deep learning solutions for both tasks and establish strong benchmarks for future work.
|
Current approaches for Knowledge Distillation (KD) either directly use training data or sample from the training data distribution. In this paper, we demonstrate the effectiveness of 'mismatched' unlabeled stimuli for performing KD on image classification networks. For illustration, we consider scenarios where there is a complete absence of training data, or where mismatched stimuli have to be used for augmenting a small amount of training data. We demonstrate that stimulus complexity is a key factor in distillation's good performance. Our examples include the use of various datasets as stimuli for MNIST and CIFAR teachers.
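A minimal sketch of the distillation objective itself, applied to whatever stimulus batch is available; the temperature value is illustrative:

```python
# Standard softened-output distillation loss: the student matches the
# teacher's temperature-scaled predictions on (possibly mismatched) stimuli.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```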
|
In this paper, we propose a feed-forward neural style transfer network which can transfer unseen arbitrary styles. To do that, we first extend the fast neural style transfer network proposed by Johnson et al. (2016) so that it can learn multiple styles at the same time by adding a conditional input; we call this a “conditional style transfer network”. Next, we add a style condition network which generates a conditional signal directly from a style image, and train the “conditional style transfer network with a style condition network” in an end-to-end manner. The proposed network can generate a stylized image from a content image and a style image in a single feed-forward pass.
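A hedged sketch of one possible conditioning mechanism (a feature-wise scale and shift derived from the style image); the architecture details below are assumptions, not the paper's design:

```python
# A small style condition network that modulates the transfer network's features.
import torch
import torch.nn as nn

class StyleConditionNetwork(nn.Module):
    def __init__(self, channels, cond_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, cond_dim), nn.ReLU(),
        )
        self.to_scale = nn.Linear(cond_dim, channels)
        self.to_shift = nn.Linear(cond_dim, channels)

    def forward(self, style_image, content_features):
        cond = self.encoder(style_image)
        scale = self.to_scale(cond).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(cond).unsqueeze(-1).unsqueeze(-1)
        return content_features * (1 + scale) + shift
```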
|
The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects on an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
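For concreteness, a small sketch of the exponential-utility value estimated from sampled returns, $V_\beta = \frac{1}{\beta}\log \mathbb{E}[e^{\beta R}]$; positive $\beta$ emphasizes high returns, negative $\beta$ emphasizes low returns:

```python
# Risk-sensitive value from sampled returns, computed with a stable log-sum-exp.
import numpy as np

def risk_sensitive_value(returns, beta):
    r = np.asarray(returns, dtype=float)
    m = np.max(beta * r)
    return (m + np.log(np.mean(np.exp(beta * r - m)))) / beta
```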
|
We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. The reverse-mode procedure extends previous work by Maclaurin et al. (2015) and offers the opportunity to insert constraints on the hyperparameters in a natural way. The forward-mode procedure is suitable for stochastic hyperparameter updates, which may significantly speed up the overall hyperparameter optimization procedure.
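A toy sketch of the reverse-mode flavour, treating the learning rate as the hyperparameter and backpropagating a validation loss through a few unrolled SGD steps (the function names here are placeholders, not the authors' code):

```python
# Hypergradient of the validation loss with respect to the learning rate,
# obtained by differentiating through an unrolled SGD trajectory.
import torch

def hypergradient_wrt_lr(train_loss_fn, val_loss_fn, w0, lr, steps=10):
    lr = torch.tensor(float(lr), requires_grad=True)
    w = w0.detach().clone().requires_grad_(True)
    for _ in range(steps):
        g = torch.autograd.grad(train_loss_fn(w), w, create_graph=True)[0]
        w = w - lr * g                      # unrolled, differentiable SGD step
    val_loss = val_loss_fn(w)
    return torch.autograd.grad(val_loss, lr)[0]
```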
|
Thanks to their ability to absorb large amounts of data, Convolutional Neural Networks (CNNs) have become the state-of-the-art in various vision challenges, sometimes even on par with biological vision. CNNs rely on optimisation routines that typically require intensive computational power, so the question of implementing CNNs on embedded architectures is a very active field of research. Of particular interest is the problem of incremental learning, where the device adapts to new observations or classes. To tackle this challenging problem, we propose to combine pre-trained CNNs with Binary Associative Memories, using product random sampling as an intermediate between the two methods. The obtained architecture requires significantly less computational power and memory usage than existing counterparts. Moreover, using various challenging vision datasets, we show that the proposed architecture is able to perform one-shot learning, even using only part of the dataset, while keeping very good accuracy.
|